
Volatility as an Alternative Asset Class: Does It Improve Portfolio Performance?

  • Received: 29 July 2017 Accepted: 20 November 2017 Published: 13 December 2017
  • We investigate the potential role of Exchange Traded Products (Notes) as vehicles to trade volatility (here proxied by the VIX index) as an asset class in a fully optimizing asset allocation framework subject to long-only constraints. In recursive back-testing exercises based on an expanding window of data from February 2010 to February 2016, we find evidence that the VIX should enter most portfolio strategies with non-negligible weight and that, under many circumstances, long VIX positions may generate positive risk-adjusted performance benefits. However, the volatility positions that can be managed and traded through one of the most popular US exchange-traded notes (VXX) fail to deliver such realized, out-of-sample benefits under all utility functions and for a range of assumptions on investors' risk aversion. Even though the turnover implied by VXX does not appear excessive, taking transaction costs into account worsens its performance considerably and even casts doubt on whether volatility ought to be considered an alternative asset class at all. Direct strategies that trade appropriate futures on the VIX improve realized performance somewhat, but not enough to tip the balance of our conclusions.

    Citation: Elvira Caloiero, Massimo Guidolin. Volatility as an Alternative Asset Class: Does It Improve Portfolio Performance?[J]. Quantitative Finance and Economics, 2017, 1(4): 334-362. doi: 10.3934/QFE.2017.4.334



    Deep learning has achieved great success in image recognition, image segmentation and target detection. However, deep learning models often require a large amount of labeled data for training, and labeling data is time-consuming. To reduce this burden, some scholars have proposed zero-shot learning. Zero-shot learning uses a model trained on the seen classes to infer the classes of unseen samples, which can largely dispense with labeling new samples [1,2]. Zero-shot learning mimics the way humans come to know new things. For example, a person who knows an animal such as a horse through pictures and linguistic descriptions, and who is then told that a zebra looks like a horse but has black-and-white stripes on its body, can recognize a zebra from this description alone, even without ever having seen one [3]. In zero-shot learning, semantic features are needed in addition to sample features. Semantic features are linguistic descriptions of the samples, such as their color, size and other characteristics; word vectors extracted by Word2Vec [4] are commonly used as semantic features. The model is trained on the seen class samples together with their semantic features to find the relationship between the two, and this relationship is then transferred to the unseen class samples to infer their categories.

    Zero-shot learning comes in two settings: conventional zero-shot learning and generalized zero-shot learning. In conventional zero-shot learning, the test set contains only unseen class samples, whereas in generalized zero-shot learning it contains both seen and unseen class samples. Many approaches are devoted to finding the relationship between the training samples and the training semantic features, for example by mapping the samples into the semantic feature space, mapping the semantic features into the sample space [5], mapping both into a common space [6,7] or mapping each into the other's space [8]. However, the training set contains only seen class samples, whose classes differ from the unseen ones, so classification models trained on it can classify the unseen class samples inaccurately at test time.

    To address these problems, some scholars use generative models, such as the Variational Autoencoder (VAE) [9] and the Generative Adversarial Network (GAN) [10], to generate pseudo samples of the unseen classes and feed them to the classifier during training, which alleviates the inaccurate classification of unseen class samples. It has been noted in the literature [11,12] that VAE and GAN suffer from posterior collapse and training instability, respectively, and that generating pseudo samples with the Wasserstein Auto-Encoder (WAE) or the Wasserstein Generative Adversarial Network (WGAN) can alleviate these problems. However, the unseen class samples generated by these methods are easily biased toward the features of the seen class samples, leading to inaccurate results when classifying real unseen class samples. To address these issues, we propose the following:

    1) Unlike the above generative models, we use an autoencoder to generate the unseen class samples. To make the sample features in the latent space more distinguishable and representative, a classifier is applied to the latent sample features.

    2) To keep the generated unseen class pseudo sample features from being biased toward the seen class sample features, we propose new sample features and use them together with the unseen class semantic features in a cross-reconstruction loss function.

    3) The proposed method is validated on three datasets, AWA1, AWA2 and aPY, and achieves good results.

    The structure of this paper is organized as follows. First, we thoroughly review the related works in Section 2. The proposed method is illustrated in Section 3. In Section 4, we discuss the experiments, and we conclude the paper in Section 5.

    For zero-shot learning, embedding-based methods are a common approach. However, they can suffer from the domain shift problem and misclassify unseen class samples [13].

    There are three kinds of solutions to the problem that unseen class samples are easily misclassified into the seen classes in zero-shot learning: calibrated stacking, generative models and detection of unseen class samples. Calibrated stacking [14] adds a calibration term to the classifier so that the scores of the seen classes are reduced during classification and the scores of the unseen classes are effectively raised. The calibrated stacking rule is as follows:

    $$\hat{y} = \mathop{\arg\max}_{c \,\in\, \mathcal{T}} \; f_c(x) - \gamma \,\mathbb{I}[c \in \mathcal{S}]$$

    where $\gamma$ is the calibration factor and $\mathbb{I}[\cdot]$ is the indicator function, which indicates whether $c$ belongs to a seen class: if $c$ is a seen class, the indicator takes the value 1; otherwise it takes the value 0. The classifier of APN [15] used class embeddings and added calibrated stacking to alleviate the misclassification of unseen class samples.
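
    The scoring rule is simple to implement. Below is a minimal NumPy sketch; the score matrix, class layout and value of $\gamma$ are illustrative assumptions, not taken from [14] or [15].

```python
import numpy as np

def calibrated_stacking(scores, seen_mask, gamma):
    """scores: (n_samples, n_classes) classifier scores f_c(x);
    seen_mask: boolean (n_classes,), True where class c is seen;
    gamma: calibration factor subtracted from seen-class scores."""
    adjusted = scores - gamma * seen_mask.astype(scores.dtype)
    return adjusted.argmax(axis=1)  # predicted class indices

# Hypothetical usage: 3 samples, classes 0-1 seen, class 2 unseen.
scores = np.array([[0.9, 0.2, 0.3],
                   [0.4, 0.8, 0.75],
                   [0.3, 0.6, 0.55]])
seen_mask = np.array([True, True, False])
print(calibrated_stacking(scores, seen_mask, gamma=0.3))  # -> [0 2 2]
```

    Without the correction the last two samples would fall to a seen class; subtracting $\gamma$ from the seen-class scores tips them to the unseen class.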

    Generative-model methods use generative models to produce pseudo samples that substitute for real unseen class samples [16]. The training set and the pseudo samples are then used to train the classifier, so that the unseen class samples are less biased toward the seen classes. The Multi-modal Feature Fusion algorithm (MFF) [17] used visual principal component features to compensate for the limited descriptive power of semantic features alone, and then combined GAN and VAE to generate high-quality pseudo samples. Cross- and Distribution Aligned VAE (CADA-VAE) [18] is a VAE method in which latent distribution alignment and cross-alignment ensure agreement between the two modalities of sample features and semantic features. To make the generated samples close to the real ones, the Over-Complete Distribution Conditional Variational Autoencoder (OCD-CVAE) [19] generated pseudo samples from an over-complete distribution. Chen et al. [12] proposed using a WAE to generate pseudo samples, with an aggregated posterior distribution in the latent space to align the manifold structure of the sample features and the semantic features. f-CLSWGAN [20] used a WGAN to generate pseudo samples and a classifier to make them more discriminative. Building on f-CLSWGAN, the Adaptive Bias-Aware GAN (ABA-GAN) [21] proposed adaptive adversarial and domain loss functions to make the generated pseudo samples more meaningful and to distinguish the seen classes from the unseen ones. Li et al. [11] used a WGAN with a multimodal cyclic loss function and a bidirectional autoencoder. Since GANs are hard to train and the pseudo samples generated by VAEs are of low quality, Dual VAEGAN [22] combined GAN and VAE.

    The third kind of solution detects samples of the unseen classes: it first distinguishes whether a sample belongs to the seen or the unseen classes and then assigns it to a specific class. GatingAE [3] first used the latent space and the cross-reconstruction space to detect samples belonging to the unseen classes, and then applied a linear classifier to the seen class samples and a nearest neighbor classifier to the unseen class samples. Chen et al. [23] proposed deciding whether a sample belongs to the seen or the unseen classes by computing the cosine similarity between its latent features and the mean latent features of each class. However, the models in these methods are trained on samples from the seen classes; when they are transferred to samples from the unseen classes, the classification results remain biased toward the seen classes.

    Cao et al. [24] used an autoencoder to recognize zero-shot traffic signs. Unlike [24], we use an autoencoder to generate unseen class samples so as to alleviate their misclassification. To keep the generated unseen class samples from being biased toward the features of the seen class samples and to improve classification accuracy on the unseen classes, we add the information of both the unseen class semantic features and the proposed sample features.

    In zero-shot learning, the training set can be denoted as $S = \{X_S, A_S, Y_S\}$ and the unseen classes as $U = \{X_U, A_U, Y_U\}$, where $X$ denotes sample features, $A$ semantic features and $Y$ labels. For conventional zero-shot learning, the classifier predicts the class of $X_U$: $X_U \to Y_U$; for generalized zero-shot learning, it predicts the class of $X$: $X \to Y_S \cup Y_U$.

    In this study, we use an autoencoder to generate the pseudo samples of the unseen classes; the model is shown in Figure 1. In Figure 1, $E_1$ and $E_2$ are the encoders and $D_1$ and $D_2$ the decoders. The sample features and the semantic features are encoded into latent features of the same dimension.

    Figure 1.  The model of the proposed method.

    Following the autoencoder, for the training set the generated sample features $\tilde{X}_S$ and the generated seen-class semantic features $\tilde{A}_S$ must approximate the input features $X_S$ and $A_S$. Assuming there are $m$ samples, the reconstruction loss function can be written as:

    $$L_{recon1} = \frac{1}{m}\sum_{i=1}^{m} \left| x_{s_i} - \tilde{x}_{s_i} \right| + \frac{1}{m}\sum_{i=1}^{m} \left| a_{s_i} - \tilde{a}_{s_i} \right| \tag{1}$$

    We use the lowercase $x_{s_i}$, $\tilde{x}_{s_i}$, $a_{s_i}$ and $\tilde{a}_{s_i}$ to denote one sample feature in $X_S$, one generated sample feature in $\tilde{X}_S$, one semantic feature in $A_S$ and one generated semantic feature in $\tilde{A}_S$, respectively. We want these two modalities to be aligned in the latent space. We use $Z_S$ to represent the latent sample features and $Z_{A_S}$ to represent the latent seen-class semantic features.

    $$L_{latent-recon} = \frac{1}{m}\sum_{i=1}^{m} \left| z_{S_i} - z_{A_{S_i}} \right| \tag{2}$$

    In Eq (2), the lowercase $z_{S_i}$ denotes one sample feature in $Z_S$ and $z_{A_{S_i}}$ one seen-class semantic feature in $Z_{A_S}$. Beyond the reconstruction loss in Eq (1), zero-shot learning involves two different modalities, sample features and semantic features, and aligning them can reduce the domain shift problem [25]. Inspired by GatingAE [3], Chen et al. [12], CADA-VAE [18] and the Discriminative Cross-Aligned Variational Autoencoder (DCA-VAE) [25], we use a cross-reconstruction loss. The features $\bar{X}_S$ and $\bar{A}_S$ are obtained by passing the latent semantic features through decoder $D_1$ and the latent sample features through decoder $D_2$, respectively. The cross-reconstruction loss function is as follows:

    $$L_{cross-recon1} = \frac{1}{m}\sum_{i=1}^{m} \left| x_{s_i} - \bar{x}_{s_i} \right| + \frac{1}{m}\sum_{i=1}^{m} \left| a_{s_i} - \bar{a}_{s_i} \right| \tag{3}$$

    Here, $\bar{x}_{s_i}$ denotes one feature in $\bar{X}_S$ and $\bar{a}_{s_i}$ one feature in $\bar{A}_S$. Although the model could be trained with Eqs (1)-(3) alone and then used to generate unseen class samples, these losses involve only seen class samples and semantic features, so the pseudo samples would be biased toward the seen classes. To address this problem, we add the unseen class semantic features $A_{US}$ to the model and propose new sample features $\hat{X}$.
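
    Before turning to these additions, the seen-class losses in Eqs (1)-(3) can be sketched in PyTorch as follows. The module names e1, e2, d1 and d2 for the networks of Figure 1 are our labels, and F.l1_loss (mean absolute error) mirrors the absolute values and the 1/m averaging in the equations.

```python
import torch
import torch.nn.functional as F

def reconstruction_losses(x_s, a_s, e1, e2, d1, d2):
    z_s = e1(x_s)       # latent sample features Z_S
    z_as = e2(a_s)      # latent seen-class semantic features Z_AS
    x_tilde = d1(z_s)   # reconstructed sample features
    a_tilde = d2(z_as)  # reconstructed semantic features
    x_bar = d1(z_as)    # cross-reconstruction: semantics -> sample space
    a_bar = d2(z_s)     # cross-reconstruction: samples -> semantic space

    l_recon1 = F.l1_loss(x_tilde, x_s) + F.l1_loss(a_tilde, a_s)  # Eq (1)
    l_latent = F.l1_loss(z_s, z_as)                               # Eq (2)
    l_cross1 = F.l1_loss(x_bar, x_s) + F.l1_loss(a_bar, a_s)      # Eq (3)
    return l_recon1, l_latent, l_cross1
```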

    The unseen class semantic features $A_{US}$ are obtained as follows. The sample features of the training set are mapped to the semantic feature space by solving:

    $$\min_{W} \left\| X_S - A_S W^{\top} \right\|_F^2 + \alpha \left\| W \right\|_F^2 \tag{4}$$

    $\|\cdot\|_F$ in Eq (4) denotes the Frobenius norm. The mapping matrix $W$ is obtained as follows:

    $$W = X_S^{\top} A_S \left( A_S^{\top} A_S + \alpha I \right)^{-1} \tag{5}$$

    where $I$ in Eq (5) is the identity matrix and $\alpha$ an adjustable parameter. The training sample features are then mapped into the semantic feature space through the mapping matrix $W$, and for each mapped sample the nearest unseen class semantic feature is found; these nearest features constitute $A_{US}$.
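
    A minimal NumPy sketch of this construction, under our own naming conventions: rows of X_s are samples, rows of A_s their class attribute vectors, and the nearest-neighbor metric is assumed to be Euclidean.

```python
import numpy as np

def build_a_us(X_s, A_s, A_unseen, alpha):
    """X_s: (m, d) training samples; A_s: (m, k) their attributes;
    A_unseen: (u, k) unseen-class attribute vectors; returns A_US (m, k)."""
    k = A_s.shape[1]
    # Eq (5): W = X^T A (A^T A + alpha I)^(-1), shape (d, k)
    W = X_s.T @ A_s @ np.linalg.inv(A_s.T @ A_s + alpha * np.eye(k))
    proj = X_s @ W                       # samples mapped to semantic space
    # nearest unseen-class semantic vector for every training sample
    d2 = ((proj[:, None, :] - A_unseen[None, :, :]) ** 2).sum(-1)
    return A_unseen[d2.argmin(axis=1)]   # A_US, one row per sample
```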

    After obtaining $A_{US}$, we input it into the autoencoder to obtain the generated unseen class semantic features $\tilde{A}_{US}$; the reconstruction loss between $\tilde{A}_{US}$ and $A_{US}$ is:

    $$L_{recon2} = \frac{1}{m}\sum_{i=1}^{m} \left| a_{us_i} - \tilde{a}_{us_i} \right| \tag{6}$$

    Here, $a_{us_i}$ denotes one unseen class semantic feature in $A_{US}$ and $\tilde{a}_{us_i}$ one generated unseen class semantic feature in $\tilde{A}_{US}$. Besides the reconstruction loss for $A_{US}$, we also want to align the unseen class semantic features with the sample features across modalities, but the training set lacks unseen class samples. We therefore proceed as follows: compute, in the latent space, the difference between the seen-class sample features and the unseen-class semantic features; this difference captures the relationship between the two. Then pass the difference through the decoder $D_1$ to obtain $\theta$. We use $Z_S$ and $Z_{A_U}$ to represent the latent features of the seen-class samples and of the unseen-class semantic features, respectively. The formula is as follows:

    $$\theta = D_1\left( Z_S - Z_{A_U} \right) \tag{7}$$

    Then $\theta$ is subtracted from the training sample features to obtain the features $\hat{X}$:

    $$\hat{X} = X_S - \theta \tag{8}$$

    The cross-reconstruction loss function for the unseen class semantic features can be written as:

    $$L_{cross-recon2} = \frac{1}{m}\sum_{i=1}^{m} \left| \hat{x}_i - \bar{x}_{us_i} \right| + \beta\,\frac{1}{m}\sum_{i=1}^{m} \left| a_{us_i} - \bar{a}_{s_i} \right| \tag{9}$$

    $\beta$ in the above equation is an adjustable parameter, $\hat{x}_i$ denotes one feature in $\hat{X}$, and $\bar{x}_{us_i}$ denotes one feature obtained by passing one unseen class semantic feature through the decoder $D_1$. We use $\hat{X}$ instead of $X_S$ because $\hat{X}$ carries less information about the seen class samples in the loss function, which alleviates the similarity between the unseen class pseudo samples and the seen class samples.
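
    A sketch of Eqs (6)-(9) in the same hypothetical PyTorch setup as above; a_us holds the selected features $A_{US}$ aligned row-wise with the training samples, and the helper signature is our assumption.

```python
import torch
import torch.nn.functional as F

def unseen_losses(x_s, a_us, e1, e2, d1, d2, beta):
    z_au = e2(a_us)                         # latent unseen semantic features
    a_us_tilde = d2(z_au)
    l_recon2 = F.l1_loss(a_us_tilde, a_us)              # Eq (6)

    z_s = e1(x_s)
    theta = d1(z_s - z_au)                              # Eq (7)
    x_hat = x_s - theta                                 # Eq (8)

    x_us_bar = d1(z_au)   # unseen semantics decoded into sample space
    a_s_bar = d2(z_s)     # sample features decoded into semantic space
    l_cross2 = (F.l1_loss(x_hat, x_us_bar)
                + beta * F.l1_loss(a_us, a_s_bar))      # Eq (9)
    return l_recon2, l_cross2
```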

    To better capture the relationship in Eq (7) between the unseen class semantic features and the training samples, and to make the latent sample features distinguishable and representative, the latent sample features are classified with a cross-entropy loss:

    $$L_{classifier} = -\sum_{i=1}^{m} y_{s_i} \log \tilde{y}_{s_i} \tag{10}$$

    In Eq (10), $\tilde{y}_{s_i}$ is the predicted label and $y_{s_i}$ the true label of a latent sample feature.
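
    Eq (10) corresponds to the standard cross-entropy loss over the latent features. A minimal sketch, in which the single linear layer and the stand-in tensors are our assumptions (the paper does not pin down the classifier architecture here):

```python
import torch
import torch.nn as nn

latent_dim, num_seen, m = 128, 40, 256          # e.g., the AWA setting
clf = nn.Linear(latent_dim, num_seen)           # hypothetical linear classifier
z_s = torch.randn(m, latent_dim)                # stand-in latent sample features
y_s = torch.randint(0, num_seen, (m,))          # stand-in seen-class labels
l_classifier = nn.CrossEntropyLoss()(clf(z_s), y_s)   # Eq (10)
```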

    Combining Eqs (1), (2), (3), (6), (9) and (10), the objective function is:

    $$L = L_{recon1} + L_{latent-recon} + L_{recon2} + L_{cross-recon1} + L_{cross-recon2} + L_{classifier} \tag{11}$$

    After the model is trained according to Eq (11), samples are generated from the sample features $X_S$ and the semantic features $A_{US}$. For generalized zero-shot classification, all generated seen class and unseen class samples are input to the classifier for training; for conventional zero-shot classification, only the generated unseen class samples are.
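
    Putting the pieces together, a high-level training-and-generation sketch that reuses the helper functions sketched above. The epoch count, the loader contract and the way pseudo samples are decoded (unseen semantics through $E_2$ and then $D_1$) follow our reading of the model in Figure 1 and are assumptions, not the authors' released code.

```python
import torch

def train_and_generate(e1, e2, d1, d2, clf, loader, a_unseen, beta, epochs=50):
    """Train with the objective of Eq (11), then decode pseudo samples.
    loader is assumed to yield (x_s, a_s, a_us, y_s) mini-batches;
    a_unseen holds the unseen-class attribute vectors."""
    params = [p for mod in (e1, e2, d1, d2, clf) for p in mod.parameters()]
    opt = torch.optim.Adam(params, lr=1e-3)        # settings from Section 4
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x_s, a_s, a_us, y_s in loader:
            l1, llat, lc1 = reconstruction_losses(x_s, a_s, e1, e2, d1, d2)
            l2, lc2 = unseen_losses(x_s, a_us, e1, e2, d1, d2, beta)
            lcls = ce(clf(e1(x_s)), y_s)           # Eq (10) on latent features
            loss = l1 + llat + l2 + lc1 + lc2 + lcls   # Eq (11)
            opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return d1(e2(a_unseen))    # unseen-class pseudo sample features
```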

    Three datasets, AWA1, AWA2 and aPY, are used in our study.

    1) AWA1 [26]: The seen class contains 40 categories, and the unseen class contains 10 categories. The number of samples in the seen class is 19832, the number of samples in the unseen class is 5685 and the dimension of the semantic features is 85.

    2) AWA2 [27]: The seen class contains 40 categories, and the unseen class contains 10 categories. The number of samples in the seen class is 23527, the number of samples in the unseen class is 7913 and the dimension of the semantic features is 85.

    3) aPY [28]: The seen class contains 20 categories, and the unseen class contains 12 categories. The number of samples in the seen class is 5932, the number of samples in the unseen class is 7924, and the dimension of the semantic features is 64.

    The sample features and semantic features used in our study are taken from [27]. Following [12], the input dimension of encoder $E_1$ is 2048, the output of its first layer is 512 dimensions and the latent space has 128 dimensions; the first layer of encoder $E_2$ outputs 128 dimensions. The first layer of decoder $D_1$ outputs 256 dimensions and its final output has 2048 dimensions; the first layer of decoder $D_2$ outputs 256 dimensions. We optimize with the Adam algorithm, a learning rate of 0.001 and a batch size of 256.
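
    These sizes can be summarized in a PyTorch sketch; the ReLU activations and the attribute-dimension constant are our assumptions (85 for AWA1/AWA2 and 64 for aPY, as listed above).

```python
import torch.nn as nn

att_dim = 85   # semantic dimension: 85 for AWA1/AWA2, 64 for aPY

e1 = nn.Sequential(nn.Linear(2048, 512), nn.ReLU(), nn.Linear(512, 128))    # E1
e2 = nn.Sequential(nn.Linear(att_dim, 128))                                 # E2
d1 = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 2048))    # D1
d2 = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, att_dim)) # D2
```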

    We use the evaluation criteria proposed in [27]. For conventional zero-shot classification, only the classification accuracy needs to be calculated:

    $$acc = \frac{1}{|C|}\sum_{i=1}^{|C|} \frac{\#\,\text{correct predictions in class } i}{\#\,\text{samples in class } i}$$

    For generalized zero-shot classification, not only the classification accuracies of the seen and unseen classes but also their harmonic mean should be calculated. Denoting the classification accuracy on the seen classes by $acc_{tr}$ and on the unseen classes by $acc_{ts}$, the harmonic mean can be written as:

    $$H = \frac{2 \times acc_{tr} \times acc_{ts}}{acc_{tr} + acc_{ts}}$$
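
    Both criteria are straightforward to compute; a minimal NumPy sketch:

```python
import numpy as np

def per_class_acc(y_true, y_pred):
    """Mean of per-class accuracies, as in [27]."""
    classes = np.unique(y_true)
    return np.mean([(y_pred[y_true == c] == c).mean() for c in classes])

def harmonic_mean(acc_tr, acc_ts):
    """Harmonic mean H of seen- and unseen-class accuracies."""
    return 2 * acc_tr * acc_ts / (acc_tr + acc_ts)
```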

    The generalized and conventional zero-shot classification results are shown in Tables 1 and 2, where the results of the Semantic Autoencoder (SAE) [8], Direct Attribute Prediction (DAP) [26], Indirect Attribute Prediction (IAP) [26] and Structured Joint Embedding (SJE) [29] are taken from [27]. In Table 1, "ts" denotes the classification results on the unseen classes and "tr" those on the seen classes. From Table 1, the proposed method is 1% below CADA-VAE [18] on the AWA1 dataset. On the AWA2 dataset, it is 0.5% above Chen et al. [23]. On the aPY dataset, it is 3.1% above DAP [26] and 4.9% above the generative model of Chen et al. [12]. The accuracy of the proposed method on the unseen classes is higher than that of the other methods.

    Table 1.  The results of generalized zero-shot learning.

                                                  AWA1                AWA2                aPY
    Method                                     ts    tr    H       ts    tr    H       ts    tr    H
    SAE [8]                                   1.8  77.1   3.5     1.1  82.2   2.2     0.4  80.9   0.9
    DAP [26]                                 46.5  68.5  55.4    43.7  70.2  53.3    27.6  55.8  37.0
    IAP [26]                                  2.1  78.2   4.1     0.9  87.6   1.8     5.7  65.6  10.4
    SJE [29]                                 11.3  74.6  19.6     8.0  73.9  14.4     3.7  55.7   6.9
    Preserving Semantic Relations (PSR) [30]   -     -     -     20.7  73.8  32.3    13.5  51.4  21.4
    f-CLSWGAN [20]                           57.9  61.4  59.6      -     -     -       -     -     -
    Zhang et al. [31]                        20.7  67.9  38.6      -     -     -     16.1  66.9  25.9
    Li et al. [11]                           54.9  71.7  62.2      -     -     -       -     -     -
    CADA-VAE [18]                            57.3  72.8  64.1    55.8  75.0  63.9      -     -     -
    Chen et al. [23]                         54.7  72.7  62.4    55.6  76.9  64.2      -     -     -
    Chen et al. [12]                         54.5  72.8  62.3    55.2  73.5  63.0    26.7  51.5  35.2
    The proposed method                      62.4  63.9  63.1    60.6  69.5  64.7    31.5  55.3  40.1

    Table 2 shows the conventional zero-shot classification results. On the AWA1 dataset, the proposed method is slightly below f-CLSWGAN [20] and Li et al. [11], which use the GAN model. On the aPY dataset, it is slightly below Zhang et al. [31] and more accurate than the other methods. On the AWA2 dataset, its accuracy is higher than that of all the other methods.

    Table 2.  The results of conventional zero-shot learning.

    Method                                     AWA1   AWA2   aPY
    SAE [8]                                    53.0   54.1    8.3
    DAP [26]                                   44.1   46.1   33.8
    IAP [26]                                   35.9   35.9   36.6
    SJE [29]                                   65.6   61.9   32.9
    PSR [30]                                     -    63.8   38.4
    Cross-Class Sample Synthesis (CCSS) [32]   56.3   63.7   35.5
    f-CLSWGAN [20]                             69.9     -      -
    Zhang et al. [31]                          68.8     -    41.3
    Li et al. [11]                             69.9     -      -
    CADA-VAE [18]                              58.8   60.3     -
    Chen et al. [12]                           65.2   65.5   32.7
    The proposed method                        67.1   66.1   39.8

    The parameters involved in the model are $\alpha$, $\beta$ and the dimensionality of the latent space, which we denote by $d$. The effects of different values of $\alpha$, $\beta$ and $d$ on generalized and conventional zero-shot classification are shown in Figures 2, 3 and 4.

    Figure 2.  The effects of α on the results of zero-shot classification.
    Figure 3.  The effects of β on the results of zero-shot classification.
    Figure 4.  The effects of d on the results of zero-shot classification.

    Figure 2 shows the effect of the parameter $\alpha$, which is used to prevent overfitting, on the zero-shot classification results; we take $\alpha$ to be 0.1, 1, 10 and 100. As $\alpha$ increases, the classification results on the aPY dataset first decrease and then increase. On the AWA2 dataset, the conventional zero-shot accuracy always increases, while the generalized zero-shot result first decreases and then increases. On the AWA1 dataset, the conventional zero-shot accuracy first decreases and then increases, and the harmonic mean always increases.

    The values of $\beta$ are 0.001, 0.01, 0.1 and 1. $\beta$ regulates the relationship between the training set samples and the generated unseen class semantic features, and it is kept small because the training set samples are not real unseen class samples. From Figure 3, the conventional zero-shot accuracy on the aPY dataset is almost unaffected by $\beta$, but the generalized zero-shot result decreases as $\beta$ increases. The conventional zero-shot accuracies on AWA1 and AWA2 are also almost unaffected by $\beta$, while the harmonic mean first increases and then decreases as $\beta$ grows.

    Figure 4 shows the effect of the latent dimension $d$ on the zero-shot classification results, with $d$ taking the values 64, 128 and 256. For the aPY dataset, the conventional zero-shot accuracy first increases and then decreases as $d$ increases, while the harmonic mean keeps decreasing. For the AWA1 dataset, the results first increase and then decrease with $d$, except for the classification results on the seen classes. For the AWA2 dataset, most results first increase and then decrease with $d$.

    Figure 5 shows the t-SNE visualization for generalized zero-shot classification on the aPY dataset, where (a) and (b) show the real training set samples and unseen class samples, respectively, and (c) and (d) show the generated training set samples and generated unseen class samples.

    Figure 5.  t-SNE of the aPY dataset.

    For the training set, the distribution of the generated samples is almost the same as that of the original samples, and the generated samples are more dispersed across categories and more concentrated within classes. For the unseen classes, more samples appear orange in the original data, while among the generated samples, because $A_{US}$ is used to generate the unseen class samples, some classes end up with more samples and others with fewer. Apart from this inconsistency in sample numbers, the distribution of most generated samples is similar to that of the real samples.

    The ablation experiments cover the following cases: a. only Eq (1) is retained as the loss function of the model; b. $L_{latent-recon}$ is added to Eq (1); c. $L_{cross-recon1} \neq 0$ on the basis of b; d. $L_{classifier} \neq 0$ on the basis of c; e. on the basis of d, the second term of $L_{cross-recon2}$ is nonzero; f. on the basis of e, $L_{recon2} \neq 0$. The proposed method additionally includes the first term of $L_{cross-recon2}$ on the basis of f. The harmonic mean H and the conventional zero-shot accuracy acc are shown in Table 3.

    Table 3.  The results of ablation experiments.

                            AWA1          AWA2          aPY
    Setting              acc     H     acc     H     acc     H
    a                    45.1  30.0   45.1  17.9   30.7  22.0
    b                    52.6  32.3   59.4  38.4   38.8  25.0
    c                    56.5  33.3   57.6  23.4   35.2  26.3
    d                    59.7  48.3   59.7  40.2   36.8  34.4
    e                    61.1  51.4   60.2  43.8   36.9  35.1
    f                    61.1  49.8   61.0  45.7   37.0  36.4
    The proposed method  67.1  63.1   66.1  63.1   39.8  40.1

    As can be seen in Table 3, most classification results improve as terms are added to the loss function. However, on the AWA1 dataset, going from e to f the conventional zero-shot accuracy does not change and the harmonic mean decreases slightly. Compared with the intermediate settings, the proposed method does not gain much in conventional zero-shot classification, especially on the aPY dataset, where setting b scores higher than all settings except the proposed method. On AWA2, the results drop from b to c, especially the harmonic mean: adding $L_{cross-recon1}$ increases the seen class information, so the accuracy on the unseen classes decreases. For generalized zero-shot classification, however, the proposed method provides some information about the unseen classes during training and reduces the similarity between the generated unseen class samples and the seen class samples.
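
    For reference, the settings a-f amount to switching the loss terms of Eq (11) on and off. A sketch of this bookkeeping (the term names are ours, and $L_{recon1}$ is always enabled):

```python
# Ablation settings as sets of enabled loss terms of Eq (11);
# "cross2_first"/"cross2_second" are the two terms of Eq (9).
ABLATIONS = {
    "a": set(),
    "b": {"latent_recon"},
    "c": {"latent_recon", "cross_recon1"},
    "d": {"latent_recon", "cross_recon1", "classifier"},
    "e": {"latent_recon", "cross_recon1", "classifier", "cross2_second"},
    "f": {"latent_recon", "cross_recon1", "classifier", "cross2_second",
          "recon2"},
    "proposed": {"latent_recon", "cross_recon1", "classifier",
                 "cross2_second", "recon2", "cross2_first"},
}

def total_loss(enabled, losses):
    """losses: dict mapping term name -> loss value; L_recon1 always on."""
    return losses["recon1"] + sum(losses[t] for t in enabled)
```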

    Replacing $\hat{X}$ in the loss function $L_{cross-recon2}$ with $X_S$ yields the results numbered (1) in Table 4; the results of the proposed model are numbered (2). From Table 4, when $X_S$ is used instead of $\hat{X}$, all zero-shot classification results decrease, especially for generalized zero-shot classification. This is because the loss function then contains more information about the seen class samples, biasing the results toward the seen classes.

    Table 4.  Comparison between $\hat{X}$ and $X_S$.

                    AWA1                     AWA2                     aPY
          acc    ts    tr    H     acc    ts    tr    H     acc    ts    tr    H
    (1)  55.1  37.0  71.8  48.8   55.6  34.8  73.0  47.1   35.8  27.0  52.1  35.6
    (2)  67.1  62.3  63.9  63.1   66.1  60.6  69.5  64.7   39.8  31.5  55.3  40.1

    In this study, an autoencoder approach is used to generate samples of the unseen classes in zero-shot learning. To counter the tendency of the generated unseen class sample features to be biased toward the seen class features, we add the unseen class semantic features, together with the proposed new sample features, to the cross-reconstruction loss function. This reduces the information of the seen class samples, makes the generated unseen class samples closer to the real ones and improves their classification accuracy. Experimental results on three datasets verify that the proposed method achieves good results.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors declare there is no conflict of interest.

  • © 2017 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
