
Low-complexity energy disaggregation using appliance load modelling

  • Large-scale smart metering deployments and energy saving targets across the world have ignited renewed interest in residential non-intrusive appliance load monitoring (NALM), that is, disaggregating a household's total energy consumption down to individual appliances using purely analytical tools. Despite increased research efforts, NALM techniques that can disaggregate power loads at low sampling rates are still not accurate and/or practical enough, requiring substantial customer input and long training periods. In this paper, we address these challenges via a practical low-complexity low-rate NALM, by proposing two approaches based on a combination of the following machine learning techniques: k-means clustering and the Support Vector Machine, exploiting their strengths and addressing their individual weaknesses. The first proposed supervised approach is a low-complexity method that requires a very short training period and is fairly accurate even in the presence of labelling errors. The second approach relies on a database of appliance signatures that we designed using publicly available datasets. The database compactly represents over 200 appliances using statistical modelling of measured active power. Experimental results on three datasets, from the US, Italy and Austria, and the UK, demonstrate the reliability and practicality of the proposed approaches.

    Citation: Hana Altrabalsi, Vladimir Stankovic, Jing Liao, Lina Stankovic. Low-complexity energy disaggregation using appliance load modelling[J]. AIMS Energy, 2016, 4(1): 1-21. doi: 10.3934/energy.2016.1.1



    Electronic medical records (EMRs), sometimes called electronic health records (EHRs) or electronic patient records (EPRs), are one of the most important types of clinical data and often contain valuable, detailed patient information for many clinical applications. This paper studies the technology of structuring EMRs and extracting medical information, which are key foundations for various health-related applications.

    As a kind of medical information extraction technology, medical assertion classification (MAC) in EMRs, formally defined for the 2010 i2b2/VA Challenge, aims to recognize the relationship between medical entities (diseases and symptoms) and patients. Given a medical problem or entity mentioned in a clinical text, an assertion classifier must look at the context and choose the status of how the medical problem pertains to the patient by assigning one of seven labels: present, absent, conditional, possible, family, occasional, or history. The assertion reflects two aspects: whether the entity applies to the patient, and how it applies. As a basis of medical information processing, assertion classification in EMRs is of great importance to many EMR mining tasks. While there has been considerable research on MAC for English EMRs, few studies have addressed Chinese texts.

    Based on the above, we study MAC methods for Chinese EMRs in this paper. Following the corresponding task of the 2010 i2b2/VA Challenge, we divide the assertion categories of medical entities into seven classes: present (当前的), possible (可能的), conditional (有条件的), family (非患者的), occasional (偶有的), absent (否认的) and history (既往的). The definitions and Chinese example sentences for each assertion category are shown in Table 1.

    Table 1.  Definition of assertion categories.
    Category | Definition | Example sentence
    Present | Symptoms or illnesses that are currently present in the patient | 患者意识昏迷8小时 (The patient was unconscious for 8 hours.)
    Absent | A denial of an illness or symptom | 患者无眩晕头痛 (No vertigo, no headache)
    Conditional | An illness or symptom that occurs only under certain conditions | 饮酒后易休克 (Prone to shock after drinking)
    Possible | A possible disease or symptom | 术后可能有红肿现象 (Postoperative redness may occur); 肿瘤待查 (Tumor to be investigated)
    Family | An illness or symptom that is not the patient's own | 直系亲属患有癫痫病史 (History of epilepsy in immediate relatives)
    Occasional | An illness or symptom that does not currently occur frequently | 偶有头晕症状 (Occasional dizziness)
    History | Past illnesses or symptoms | 2年前因痛风入我院治疗 (Admitted to our hospital for gout two years ago)


    Although many methods have been proposed for recognizing the assertion category of entities in EMRs, most of them use traditional machine learning methods such as the Support Vector Machine (SVM) and Conditional Random Field (CRF), which rely heavily on feature engineering. However, feature engineering is relatively time-consuming and costly, and the resulting feature sets are both domain- and model-specific. While deep neural network approaches have achieved better performance than traditional machine learning models on various medical information extraction tasks, research on entity assertion classification in EMRs using deep neural network models remains scarce.

    Therefore, this paper proposes a novel model for MAC of Chinese EMRs. We build a deep network (called GRU-Softmax) as the baseline, which combines a Gated Recurrent Unit (GRU) neural network (a type of Recurrent Neural Network, RNN) with a softmax layer to classify named entity assertions in Chinese EMRs. Compared with a plain RNN, the advantage of GRU-Softmax lies in the fact that GRU networks have strong expressive ability to capture long contexts without time-intensive feature engineering.

    Furthermore, in order to capture character-level characteristics of EMR text, we train Chinese character-level embedding representations using a Convolutional Neural Network (CNN) and combine them with word-level embedding vectors acquired from a large-scale background training corpus. The combined vectors are then fed into the GRU architecture to train the entity assertion classification model. In addition, to enhance the representation and discriminative ability of characters and their contexts, we integrate medical knowledge attention (MKA), learned from entity names and their definitions or descriptions in medical dictionary bases (MDBs), into the model.

    On the whole, the contributions of this work can be summarized as follows: (1) we introduce, for the first time, a deep neural network that combines GRU neural networks and Softmax to classify medical assertions; (2) we compare the influence of the character-level representation extracted by CNN on the model; (3) we use medical knowledge attention (MKA) to integrate entity representations from external knowledge (medical dictionary bases, MDBs).

    The remainder of this paper is organized as follows. In section 2 we summarize the related work on MAC. In section 3 we present our attention-based CNN-GRU-Softmax network model for MAC in Chinese EMRs. In section 4 we show the experimental results and give some analysis. Finally, we summarize our work and outline some ideas for future research.

    Research on entity assertion classification studies the classification of the relation between an entity and the patient, given that the entity is known. Chapman et al. [1] proposed a classification model named NegEx based on regular expression rules, which classifies disease entities as "existing" or "nonexistent" and achieved an F-score of 85.3% on more than 1000 disease entities. Building on NegEx and combining regular expression rules with trigger words, Harkema et al. [2] proposed the ConText method to classify disease entities into one of six categories. On six different types of medical records, F-scores of 76% to 93% were obtained, indicating that the distribution of modified disease entities varies greatly across different styles of text.

    Based on the evaluation data of the 2010 i2b2 Challenge, researchers proposed many classification methods based on rules, SVM, CRF, etc. The most effective concept extraction systems used support vector machines (SVMs) [3,4,5,6,7,8,9,10,11], either with contextual information and dictionaries that indicate negation, uncertainty, and family history [6,10], or with the output of rule-based systems [3,6,8]. Roberts et al. [4] and Chang et al. [11] utilized both medical dictionaries and rules. Chang et al. complemented SVM with logistic regression, multi-logistic regression, and boosting, which they combined using a voting mechanism. The highest classification performance in the evaluation was obtained by the bi-level classifier proposed by de Bruijn et al. [5], who used the cTAKES knowledge base and created an ensemble whose final output was determined by a multi-class SVM, reaching an F-score of 93.6%. Clark et al. [12] used a CRF model to determine negation and uncertainty and their scope, and added sets of rules to separate documents into different zones, identify and scope cue phrases, and determine phrase status. They combined the results of the cue detection and phrase status modules with a maximum entropy classifier that also used concept and contextual features.

    In this paper, we propose a neural network architecture combining a CNN-GRU-Softmax network with medical knowledge attention, which learns the shared semantics between medical record texts and the entities mentioned in the medical dictionary bases (MDBs). The architecture of our proposed model is shown in Figure 1. After querying pretrained character embedding tables, the input sentence is transformed into the corresponding sequence of pretrained character embeddings and, for every character, a randomly generated character embedding matrix. A CNN is then used to form the character-level representation, and a GRU is used to encode the sentence representation after concatenating the pretrained character embeddings with the character-level representation of the sentence. Afterwards, we treat the entity information from the MDBs as a query guidance and integrate it with the original sentence representation using a multi-modal fusion gate and a filtering gate. Finally, a Softmax layer performs the classification.

    Figure 1.  The framework of our model. The right part is the GMF and Filtering Gate.
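    To make the pipeline concrete, the following is a minimal PyTorch sketch of the CNN-GRU-Softmax backbone described above. The module names, the hidden size, and the mean pooling are illustrative assumptions rather than the authors' implementation; the MKA, GMF, and filtering-gate stages (sketched separately in the corresponding sections below) are only marked with a placeholder comment.

```python
import torch
import torch.nn as nn

class CnnGruSoftmaxSketch(nn.Module):
    """Illustrative skeleton of the CNN-GRU-Softmax backbone (assumed sizes)."""
    def __init__(self, vocab_size, num_classes=7):
        super().__init__()
        self.pre_emb = nn.Embedding(vocab_size, 100)    # pretrained character embeddings (Table 3)
        self.rand_emb = nn.Embedding(vocab_size, 30)    # randomly initialised embeddings fed to the CNN
        self.char_cnn = nn.Conv1d(30, 50, kernel_size=3, padding=1)  # window 3, 50 filters
        self.gru = nn.GRU(100 + 50, 128, batch_first=True)
        self.fc = nn.Linear(128, num_classes)           # projection to the 7 assertion labels

    def forward(self, char_ids):                        # char_ids: (batch, seq_len)
        pre = self.pre_emb(char_ids)                    # (batch, seq_len, 100)
        rand = self.rand_emb(char_ids).transpose(1, 2)  # (batch, 30, seq_len)
        conv = torch.relu(self.char_cnn(rand)).transpose(1, 2)  # character-level representation
        h, _ = self.gru(torch.cat([pre, conv], dim=-1)) # sentence encoding
        # ... MKA attention, gated multimodal fusion and the filtering gate act on h here ...
        logits = self.fc(h.mean(dim=1))                 # simple pooling, for illustration only
        return torch.softmax(logits, dim=-1)
```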

    As described in Figure 2, we first train Chinese character embeddings from a large unlabeled Chinese EMR corpus; a CNN is then used to generate the sentence's character-level representation from the character embedding matrix sequence, to alleviate rare-character problems and capture helpful morphological information such as special characters in EMRs. Since sentence lengths are not consistent, a placeholder (padding) is added to the left and right sides of the character embedding matrix so that every sentence's character-level representation matrix sequence has equal length.

    Figure 2.  Character-level representation of a sentence by CNN.
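    As a concrete illustration of this padding scheme, the sketch below pads the character-embedding sequence on both sides so that the CNN output has the same length as the input sentence; the shapes are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Character-level representation via a 1-D convolution; window size 3 and
    50 filters follow Table 3, everything else is an assumption."""
    def __init__(self, char_dim=30, num_filters=50, window=3):
        super().__init__()
        # padding=window // 2 plays the role of the left/right placeholder described above
        self.conv = nn.Conv1d(char_dim, num_filters, kernel_size=window, padding=window // 2)

    def forward(self, char_embs):                        # (batch, seq_len, char_dim)
        x = char_embs.transpose(1, 2)                    # Conv1d expects (batch, channels, seq_len)
        return torch.relu(self.conv(x)).transpose(1, 2)  # (batch, seq_len, num_filters)
```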

    The Gated Recurrent Unit (GRU) is a variant of the Recurrent Neural Network (RNN). Like the LSTM, it was proposed to address problems such as vanishing gradients when learning long-term dependencies through back-propagation. We choose GRU [13] in our model since it performs similarly to LSTM [14] but is computationally cheaper.

    The GRU model is defined by the following equations:

    ${z_t} = \sigma ({W_z}{x_t} + {U_z}{h_{t - 1}} + {b_z})$ (1)
    ${r_t} = \sigma ({W_r}{x_t} + {U_r}{h_{t - 1}} + {b_r})$ (2)
    ${\tilde h_t} = \tanh ({W_h}{x_t} + {U_h}({h_{t - 1}}*{r_t}) + {b_h})$ (3)
    ${h_t} = (1 - {z_t})*{h_{t - 1}} + {z_t}*{\tilde h_t}$ (4)

    In particular, ${z_t}$ and ${r_t}$ are vectors corresponding to the update and reset gates respectively, where * denotes elementwise multiplication. The activations of both gates are elementwise logistic sigmoid functions $\sigma ( \cdot )$, constraining the values of ${z_t}$ and ${r_t}$ to the range from 0 to 1. ${h_t}$ represents the output state vector for the current time frame $t$, while ${\tilde h_t}$ is the candidate state obtained with a hyperbolic tangent. The network is fed by the current input vector ${x_t}$ (the sentence representation from the previous layer), and the parameters of the model are ${W_z}$, ${W_r}$, ${W_h}$ (the feed-forward connections), ${U_z}$, ${U_r}$, ${U_h}$ (the recurrent weights), and the bias vectors ${b_z}$, ${b_r}$, ${b_h}$. The Gated Recurrent Unit (GRU) is shown in Figure 3.

    Figure 3.  The architecture of the gated recurrent unit (GRU).
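    A line-by-line rendering of Eqs. (1)–(4) as code may help; this is a minimal sketch in which the weight dictionary and the tensor shapes are assumptions, not a library GRU implementation.

```python
import torch

def gru_step(x_t, h_prev, p):
    """One GRU time step following Eqs. (1)-(4); `p` is an assumed dict of weight tensors."""
    z_t = torch.sigmoid(x_t @ p["Wz"].T + h_prev @ p["Uz"].T + p["bz"])          # update gate, Eq. (1)
    r_t = torch.sigmoid(x_t @ p["Wr"].T + h_prev @ p["Ur"].T + p["br"])          # reset gate, Eq. (2)
    h_cand = torch.tanh(x_t @ p["Wh"].T + (h_prev * r_t) @ p["Uh"].T + p["bh"])  # candidate state, Eq. (3)
    return (1 - z_t) * h_prev + z_t * h_cand                                     # output state, Eq. (4)
```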

    Given the rich entity mention and definition information contained in MDBs, medical knowledge attention is applied to integrate entity representations learned from the external knowledge bases as a query vector for encoding. We use a medical dictionary to encode entity information (entity mention and definition) as entity embeddings, which are turned into attention scores:

    ${a_t} = f\left( {e{W_A}{h_t}} \right)$ (5)

    where $e$ is the embedding of the entity and ${W_A}$ is a bilinear parameter matrix. We simply choose the quadratic function $f(x) = x^2$, which is positive definite and easily differentiable.
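    For illustration, Eq. (5) can be computed as below; the tensor shapes and the explicit bilinear form are assumptions consistent with the notation above.

```python
import torch

def mka_score(e, W_A, h_t):
    """Medical knowledge attention score of Eq. (5) with f(x) = x**2.
    Assumed shapes: e (entity_dim,), W_A (entity_dim, hidden_dim), h_t (hidden_dim,)."""
    x = e @ W_A @ h_t      # bilinear form e * W_A * h_t
    return x ** 2          # quadratic scoring function, positive and easily differentiable
```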

    Based on the output of the GRU and the attention scoring, we design a gated multimodal fusion (GMF) method to fuse the features from the hidden-layer output ${h_t}$ and the attention score ${a_t}$. When predicting the tag of a character, the GMF trades off how much information the network takes from the query vector against the EMR text containing the character. The GMF is defined as:

    ${h_{{a_t}}} = \tanh ({W_{{a_t}}}{a_t} + {b_{{a_t}}})$ (6)
    ${h_{{h_t}}} = \tanh ({W_{{h_t}}}{h_t} + {b_{{h_t}}})$ (7)
    ${g_t} = \sigma ({W_{{g_t}}}({h_{{a_t}}} \oplus {h_{{h_{\text{t}}}}}))$ (8)
    ${m_t} = {g_t}{h_{{a_t}}} + (1 - {g_t}){h_{{h_t}}}$ (9)

    where ${W_{{a_t}}}$, ${W_{{h_t}}}$, ${W_{{g_t}}}$ are parameters, ${h_{{h_t}}}$ and ${h_{{a_t}}}$ are the new sentence vector and new query vector respectively, after transformation by a single-layer perceptron. $\oplus$ is the concatenating operation, $\sigma$ is the logistic sigmoid activation, ${g_t}$ is the gate applied to the new query vector ${h_{{a_t}}}$, and ${m_t}$ is the multi-modal fused feature combining the new medical knowledge feature and the new textual feature.
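    The gated multimodal fusion of Eqs. (6)–(9) could be sketched as follows; treating the attention feature as a vector and the layer sizes are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedMultimodalFusion(nn.Module):
    """Sketch of Eqs. (6)-(9): fuse the attention feature a_t with the GRU state h_t."""
    def __init__(self, attn_dim, hidden_dim, fused_dim):
        super().__init__()
        self.proj_a = nn.Linear(attn_dim, fused_dim)       # W_a, b_a of Eq. (6)
        self.proj_h = nn.Linear(hidden_dim, fused_dim)     # W_h, b_h of Eq. (7)
        self.gate = nn.Linear(2 * fused_dim, fused_dim)    # W_g of Eq. (8)

    def forward(self, a_t, h_t):
        h_a = torch.tanh(self.proj_a(a_t))                 # new query vector, Eq. (6)
        h_h = torch.tanh(self.proj_h(h_t))                 # new sentence vector, Eq. (7)
        g_t = torch.sigmoid(self.gate(torch.cat([h_a, h_h], dim=-1)))  # fusion gate, Eq. (8)
        return g_t * h_a + (1 - g_t) * h_h                 # fused feature m_t, Eq. (9)
```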

    When decoding with the multimodal fusion feature ${m_t}$ at position $t$, the impact and necessity of the external medical knowledge feature differ across assertions, and because the multimodal fusion feature always contains some amount of external knowledge, it may introduce noise. We therefore use a filtering gate to combine the different features from different signals so as to better retain the useful information. The filtering gate is a scalar in the range of [0, 1], and its value depends on how much the multimodal fusion feature helps to label the assertion tag. ${s_t}$ and the input feature to the decoder ${\hat m_t}$ are defined as follows:

    ${s_t} = \sigma ({W_{{s_t},{h_{\text{t}}}}}{h_t} \oplus ({W_{{m_t},{s_t}}}{m_t} + {b_{{m_t},{s_t}}}))$ (10)
    ${u_t} = {s_t}(\tanh ({W_{{m_t}}}{m_t} + {b_{{m_t}}}))$ (11)
    ${\hat m_t} = {W_{\hat mt}}({h_t} \oplus {u_t})$ (12)

    where ${W_{{m_t},{s_t}}}$, ${W_{{s_t},{h_t}}}$, ${W_{{m_t}}}$, ${W_{\hat mt}}$ are parameters, ${h_t}$ is the hidden state of the GRU at time $t$, ${u_t}$ is the reserved multimodal feature after the filtering gate filters out noise, and $\oplus$ is the concatenating operation. The architecture of the gated multimodal fusion and filtering gate is shown in Figure 1.
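    One possible reading of Eqs. (10)–(12) in code is given below; the gate is computed by projecting the concatenated features back to the fused dimension, which is a simplification of the notation above, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class FilteringGate(nn.Module):
    """Sketch of Eqs. (10)-(12): weight the fused feature m_t against the GRU state h_t."""
    def __init__(self, hidden_dim, fused_dim, out_dim):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_dim + fused_dim, fused_dim)        # simplified Eq. (10)
        self.W_m = nn.Linear(fused_dim, fused_dim)                            # Eq. (11)
        self.W_out = nn.Linear(hidden_dim + fused_dim, out_dim, bias=False)   # Eq. (12)

    def forward(self, h_t, m_t):
        s_t = torch.sigmoid(self.gate_proj(torch.cat([h_t, m_t], dim=-1)))  # filtering gate
        u_t = s_t * torch.tanh(self.W_m(m_t))                               # reserved multimodal feature
        return self.W_out(torch.cat([h_t, u_t], dim=-1))                    # decoder input \hat m_t
```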

    After obtaining the sentence representation ${\hat m_t}$, we use the softmax function to normalize it and output the entity assertion probabilities.
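    For completeness, the final classification step can be sketched as below; the feature size of 200 and the dummy input are arbitrary assumptions for illustration.

```python
import torch
import torch.nn as nn

labels = ["present", "absent", "conditional", "possible", "family", "occasional", "history"]
classifier = nn.Linear(200, len(labels))         # project \hat m_t to the 7 assertion categories
m_hat = torch.randn(1, 200)                      # dummy sentence representation
probs = torch.softmax(classifier(m_hat), dim=-1) # normalised assertion probabilities
print(labels[probs.argmax(dim=-1).item()])       # predicted assertion label
```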

    In this section, we evaluate our method on a manually annotated dataset. Following Nadeau et al. [18], we use Precision, Recall, and F1 to evaluate the performance of the models.
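    As an illustration of this evaluation protocol, the snippet below computes Precision, Recall, and F1 with scikit-learn; the micro averaging and the placeholder label ids are assumptions, not the authors' exact scoring script.

```python
from sklearn.metrics import precision_recall_fscore_support

y_true = [0, 1, 1, 3, 6, 2]   # gold assertion label ids (placeholder)
y_pred = [0, 1, 2, 3, 6, 2]   # predicted label ids (placeholder)
p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="micro")
print(f"Precision={p:.4f} Recall={r:.4f} F1={f1:.4f}")
```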

    We use our own manually annotated corpus as the evaluation dataset, which consists of 800 de-identified EMR texts from different clinical departments of a Grade-A secondary hospital in Gansu Province. The number of annotated entities in each assertion category of the dataset is shown in Table 2.

    Table 2.  Number statistics of different entity assertion categories in the evaluation dataset.
    Category Training Test Total
    Present (当前的) 2025 1013 3038
    Absent (否认的) 1877 921 2799
    Conditional (有条件的) 204 102 306
    Possible (可能的) 844 420 1264
    Family (非患者本人的) 235 117 352
    Occasional (偶有的) 249 147 396
    History (既往的) 342 171 513
    Total 5778 2889 8667


    We use Google's Word2Vec to train Chinese character embeddings on our 30 thousand unlabeled Chinese EMR texts, which are from a Grade-A secondary hospital in Gansu Province. Randomly generated character embeddings are initialized with uniform samples from $[ - \sqrt {\frac{3}{{dim}}} ,\sqrt {\frac{3}{{dim}}} ]$, where we set dim = 30.
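    A minimal sketch of this setup with gensim and NumPy follows; the two-sentence corpus is a placeholder, and the skip-gram setting and window size are assumptions rather than the authors' exact configuration.

```python
import numpy as np
from gensim.models import Word2Vec

# Character-level corpus: each EMR sentence is split into a list of characters.
corpus = [list("患者意识昏迷8小时"), list("患者无眩晕头痛")]           # placeholder sentences
w2v = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1)    # 100-d pretrained character embeddings

dim = 30
bound = np.sqrt(3.0 / dim)
# Uniform initialisation in [-sqrt(3/dim), sqrt(3/dim)] for the random character embeddings
rand_char_emb = np.random.uniform(-bound, bound, size=(len(w2v.wv), dim))
```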

    Table 3 gives the chosen hyper-parameters for all experiments. We tune the hyper-parameters on the development set by random search. We try to share as many hyper-parameters as possible in experiments.

    Table 3.  Parameter Setting.
    Parameter Value
    Character-level representation size 50
    Pretrained character Embedding Size 100
    Learning Rate 0.014
    Decay Rate 0.05
    Dropout 0.5
    Batch Size 10
    CNN Window Size 3
    CNN Number of filters 50


    In this part, we describe all of the models used in the following experimental comparison.

    GRU+Softmax: We combine a gated recurrent unit (GRU) neural network and Softmax to classify the assertions of clinical named entities. In this model, the GRU network encodes the character embedding vectors and the Softmax layer then decodes and classifies. To compare the impact of the different methods on experimental performance, we use this model as the baseline.

    CNN+GRU+Softmax: This model is similar to the CNN-LSTM-CRF which was proposed by Ma and Hovy (2016)[15] and is a truly end-to-end system.

    CGAtS(CNN+GRU+Attention+Softmax): This model is the CNN-GRU-Softmax architecture enhanced by medical knowledge attention (MKA). In this model the output of hidden layer h and the attention score a are used to encode text representation as follows:

    $c = \sum\limits_{i = 1}^L {{a_i}*{h_i}} $ (13)

    where L is the window size of text characters.
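    In code, Eq. (13) is simply an attention-weighted sum of the hidden states within the window, e.g. (assumed shapes):

```python
import torch

def context_vector(a, h):
    """Eq. (13): a is (L,) attention scores, h is (L, hidden_dim) hidden states."""
    return (a.unsqueeze(-1) * h).sum(dim=0)   # weighted sum over the L positions
```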

    CGAtFuFiS(CNN+GRU+Attention+Fusion+Filtering+Softmax): This is our full model. Unlike the previous one, it employs a gated multi-modal fusion (GMF) mechanism and a filtering gate.

    The performance of all models on each of the seven categories is shown in Figure 4, and their overall performance on the evaluation dataset is shown in Table 4.

    Figure 4.  Experimental results of different assertion categories.
    Table 4.  Performance of different models on the total evaluation dataset.
    Model Precision (%) Recall (%) F1 (%)
    Softmax 89.13 88.31 88.72
    GRU+Softmax(Baseline) 90.21 90.77 90.49
    CNN-GRU-Softmax 92.95 89.65 91.27
    CGAtS 90.76 93.34 92.03
    CGAtFuFiS 92.19 93.48 92.84


    We compare our model with the baseline. Table 4 shows the overall assertion classification performance obtained by our method and others, from which we can see that our model CGAtFuFiS obtains the best F1-score of 92.84%.

    The experimental results of the different models on our manually annotated dataset are shown in Figure 4 and Table 4. Compared with the baseline model, all other models improve performance, and the deep neural network models outperform traditional machine learning methods on the MAC task.

    The convolution layer in a convolutional neural network can describe the local features of characters well, and the most representative parts of these local features can be further extracted through the pooling layer. Accordingly, our experimental results show that the CNN-GRU-Softmax model is superior to the GRU-Softmax baseline. The performance of the CGAtS model is better than that of CNN-GRU-Softmax. This result shows that the rich information of entities and their corresponding semantic definitions from MDBs is indeed useful for MAC. The CGAtFuFiS model is slightly better than the CGAtS model, which indicates that for the MAC task in Chinese EMRs it is helpful to fuse the features from the EMR text context with those from the external knowledge dictionary using gated multimodal fusion (GMF). Since the supplementary external information from the MDBs sometimes introduces noise into the model, we use a filtering gate to combine and weight the different features. As shown by the experimental results, the filtering gate helps to improve the overall performance of our model.

    Due to the sublanguage characteristics of Chinese EMRs, the expression of clinical named entities is very different from that in general text. Using the entity information contained in the MDBs as the classification query vector can lead the decoder to focus on the entity itself. We combine the text features and the MDB features with a multi-modal fusion gate to form the query vector, and then set up a filtering gate to filter out useless feature information. The experimental results show that our model CGAtFuFiS, which integrates CNN, GRU, medical knowledge attention, gated multimodal fusion, a filtering gate, and Softmax, achieves the best F1-score on the evaluation corpus.

    In this work, we proposed a medical knowledge-attention enhanced neural clinical entity assertion classification model, which makes use of external MDBs through an attention mechanism. A gated multi-modal fusion module is introduced to decide how much MDB information is fused into the query vector at each time step. We further introduced a filtering gate module to adaptively adjust how much multi-modal information is considered at each time step. The experimental results on the manually annotated Chinese EMR evaluation dataset show that our proposed approach clearly improves performance on the MAC task compared to the other baseline models.

    In the future, we will explore a fine-grained clinical entity classification model for Chinese EMRs and methods to extract semantic relations between entities in Chinese EMRs.

    We would like to thank the anonymous reviewers for their valuable comments. The research work is supported by the National Natural Science Foundation of China (No. 61762081, No. 61662067, No. 61662068) and the Key Research and Development Project of Gansu Province (No. 17YF1GA016).

    All authors declare no conflicts of interest in this paper.

  • © 2016 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)