1. Introduction
Looking back historically, three generations of knowledge management can be distinguished. The period 1990-1995 is known as the first generation of knowledge management. During this generation, much effort went into defining knowledge management, designing specialized knowledge management projects and studying the potential benefits of knowledge management for business. In addition, progress in the field of artificial intelligence shaped knowledge management research, especially in the guidance, representation and storage of knowledge (Manesh et al., 2020). The second generation of knowledge management appeared around 1996, when many organizations created new organizational positions for knowledge management, including the senior knowledge manager (Abdelwhab et al., 2019). Different sources of knowledge management were combined with one another and quickly entered daily organizational discussion. During this generation, knowledge was defined in various ways, such as commercial philosophies, systems, patterns, methods and activities, and advanced technologies appeared in knowledge management research. The second generation of knowledge management emphasizes that knowledge management concerns systematic organizational change, in which management methods, measurement systems, tools and content management must be developed together (Ostendorf et al., 2022). As a result of new views and methods, a third generation of knowledge management is now emerging with new methods and new results. Important indicators for knowledge management include honesty, responsibility and compassion. Organizational managers can also implement a system that monitors data and gives managers and policy makers a clear view of the organizational and knowledge domains (Hamzehi & Hosseini, 2022). One such system is business intelligence. Business intelligence is a broad term covering tools, database architecture, data warehouses, performance management, methodologies and so on, all integrated into a single software environment. The purpose of this system is to let business managers and analysts across an organization access any of the company's data quickly and easily, and sometimes to guide the resulting analysis (Manesh et al., 2020). By analyzing past and present data, conditions, standards and performance, decision makers gain valuable insight that helps them make better decisions. Business intelligence offers many capabilities, including reporting and search, complex analysis, data mining and forecasting (Ranjan & Foropon, 2021).
These capabilities have grown out of the tools and technologies that underpin business intelligence, in particular executive information systems, decision support systems, search, data visualization, workflow, operational deployment, management science and applied artificial intelligence (Phillips-Wren et al., 2021). Agile business intelligence draws on today's economic tools, powerful computers, networks, the internet and similar resources to exploit these (and other) technologies as fully as possible in pursuit of organizational goals. These technologies are integrated with other tools for organizational planning and can be applied in ways that benefit all stakeholders, supporting organizational planning, knowledge management and the building of organizational culture (Harter et al., 2002).
As an organizational planning system, this equipment gives managers access to important, useful information about many aspects of the organization and lets them communicate and work with one another (Ostendorf et al., 2022). Data warehouses holding organizational data, combined with analytical tools (such as online analytical processing and data mining), significantly improve access to information and its analysis across the organization (Hamzehi & Hosseini, 2022). Such a system can strongly affect many areas of employees' work, including leadership. Given this description of agile business intelligence, the indicators and tools proposed for implementing an agile business intelligence system in the organization under study, the Information and Communication Technology Holding of Tehran, are as follows (Harter et al., 2002):
Management dashboard: A tool that shows all key business performance indicators (KPIs) in one place.
Data warehouse: A large collection of business data that helps organizations and businesses make more accurate and intelligent decisions. Data warehousing is not a new concept and has been around since the 1980s. The data in a warehouse is collected from many sources; internal applications such as marketing, sales and finance systems are typical examples.
Extract, transform and load (ETL): Employee-related data is extracted from operational and automated organizational systems, transformed so that records from different sources are compatible, and loaded into the data warehouse, where it is ready to be analyzed (a minimal pandas sketch of this step appears after this list).
Online analytical processing (OLAP): A set of graphical tools that give managers a multidimensional view of their data for analysis.
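To make the ETL step concrete, below is a minimal sketch in Python, using pandas for the transform and SQLite as a stand-in warehouse; the file names, table name and columns are hypothetical and only illustrate the extract-transform-load flow.

```python
import sqlite3

import pandas as pd

# Extract: pull raw records from two hypothetical operational exports.
hr = pd.read_csv("hr_system_export.csv")       # e.g. employee_id, dept, hire_date
payroll = pd.read_csv("payroll_export.csv")    # e.g. employee_id, monthly_salary

# Transform: harmonize types and merge on the shared key so records
# from the two systems become compatible.
hr["hire_date"] = pd.to_datetime(hr["hire_date"])
staff = hr.merge(payroll, on="employee_id", how="inner")

# Load: append the cleaned rows to a warehouse table, ready for analysis.
conn = sqlite3.connect("warehouse.db")         # stand-in for a real data warehouse
staff.to_sql("employee_facts", conn, if_exists="append", index=False)
conn.close()
```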
In (Manesh et al., 2020), the results indicate that establishing knowledge management in an organization can foster agile business intelligence; to become agile, manufacturers should therefore invest more in knowledge management. (Abdelwhab et al., 2019) investigated the relationship between knowledge management and decision-making styles; the results show that a relationship exists, with knowledge management before action the most effective knowledge management dimension and the rational style the most effective decision-making style. (Ostendorf et al., 2022) discussed efficiency and effectiveness and concluded that, to maintain a continuous, stable and suitably efficient presence in a competitive world, organizations should operate around the axis of science and knowledge. Although knowledge as a resource is essential and vital for the survival of organizations, and organizational success requires deep knowledge and understanding at all levels (Hamzehi & Hosseini, 2022), many organizations have still not paid serious attention to knowledge management. Moreover, the effect of knowledge management on organizational performance matters particularly when decision-making styles are taken into account (Ranjan & Foropon, 2021). The aim is to provide a framework that captures the relationships between the enabling factors of knowledge management and organizational performance through the role of knowledge creation (Phillips-Wren et al., 2021), with decision-making styles moderating the relationship between organizational performance and the knowledge creation process.
In (Harter et al., 2002), knowledge acquisition, knowledge transformation, knowledge application and knowledge retention were investigated as knowledge management processes using machine learning. The authors concluded that, except for knowledge transformation, these processes relate positively to organizational performance. Using deep learning when data is abundant is scientifically well justified (Hlavac & Stefanovic, 2020), yet comparatively little such research has appeared in recent years.
Only a few studies (Shi & Wang, 2018; Kraus et al., 2020; Fombellida et al., 2020; Çelebi, 2021; Shrestha et al., 2021; Anisuzzaman et al., 2022; Doshi et al., 2023) conclude that there is a positive and significant correlation between decision-making using deep learning and knowledge management. Accordingly, this research uses deep learning to explain the indicators and tools of agile business intelligence in an organization and examines its relationship with knowledge management in order to raise organizational quality.
Machine learning as a technology took hold around 1990, when the data-driven approach paved the way for its development and the emphasis shifted toward natural language search and information retrieval. The neural network, first tested in 1957 on the earliest neural network computers, also saw a resurgence (Shi & Wang, 2018). The field of machine learning has seen both successes and failures, but it is expected to become far more widespread in the near future (within two to five years). To keep up with the growing machine learning sector, it is crucial to expand the infrastructure and technical capabilities that shape its progress. Deep learning is usually dated to 1965, when statistically analyzed models featuring polynomial functions and complex equations were first used. In 1995, a technique was developed to identify and map related or similar data, and long short-term memory for recurrent neural networks was introduced around 1997 (Kraus et al., 2020). The late 1990s saw the emergence of GPUs, which raised computational speed and image processing efficiency roughly a thousandfold. In the early 2000s, layer-wise pre-training and advances in long short-term memory were introduced. By 2011, the processing power of GPUs enabled computers to train convolutional neural networks efficiently without layer-by-layer pre-training. Today, deep learning plays a crucial role in processing vast volumes of data, and the fields of AI and deep learning continue to progress toward more sophisticated concepts (Fombellida et al., 2020). The corporate landscape is changing rapidly, and company procedures are becoming increasingly complex, making it hard for managers to fully comprehend their industry. Modernization, liberalization, acquisitions, mergers, contestability and technological advancements have compelled businesses to rethink their strategies, and to gain a competitive edge many large companies have turned to business intelligence (BI) techniques to help them understand and control their business processes (Çelebi, 2021). BI is used predominantly to enhance the quality and timeliness of information and to help managers gain a deeper understanding of their company's competitive position. With BI tools and technology, companies can analyze shifting market share trends, changes in consumer behavior and spending patterns, customer preferences, corporate capabilities and market conditions. Analysts and managers can then determine the most suitable adaptations to changing patterns, turning BI into a data analysis paradigm that supports decision-making units. With advancements in sophisticated computer hardware, machine learning emerged as an effective solution to numerous challenges faced by industries and societies (Anisuzzaman et al., 2022).
A new framework (Figure 1) is proposed to demonstrate how organizations can use Big Data analytics to enhance their business value. Previous studies have focused on various aspects of competitive intelligence (CI), such as the integration of BI and CI, scenario-based methods of CI analysis, social media-based competitive analysis, critical success factors for KM and CI, conceptual frameworks for salespeople and CI, graphical modeling for mining CI, the effectiveness of the CI spider tool and the impact of the internet on CI and organizations (Rath, 2021; Whig, 2019; Sreesurya et al., 2020). However, there is a lack of comprehensive and integrated Big Data method-based frameworks for CI processes in organizations. The strategic value of Big Data in CI is important for distinguishing rivals and partners. CI personnel face many questions, such as what data to collect, how to collect real-time data, and how to transform competitor data into meaningful patterns and knowledge (Sreesurya et al., 2020; Zarei & Zarei, 2018; Safari et al., 2018). Advanced Big Data methods can enable organizations to receive alerts on real-time market fluctuations, competitors' moves and customer mobility, and process the data into meaningful insights. Data preprocessing is essential before applying Big Data methods in CI cycles. Without proper data cleaning mechanisms, intelligence or smart insights cannot be built. Collecting CI data without quality and security layers would induce organizational mistrust, create data ownership conflicts, increase costs and lead to improper customer service, errors and outliers, time delays, etc. The CI data collection process should identify authoritative data sources and data entry points. Therefore, benchmarking CI processes using Big Data analytics is necessary. Big Data-enabled CI offers great business impact and benefits, such as creating new growth opportunities, being business ready in real-time situations, enabling faster responses to changes in marketplaces due to competitor movements and improving strategic plans by identifying potential vulnerabilities. Despite challenges related to data sensitivity, privacy and data-sharing, organizations need to invest in CI and use Big Data approaches to understand events in real time, identify challenging triggers in competitors' strategies and readjust internal promotions or policies accordingly.
With the emergence of sophisticated computer hardware, machine learning became a powerful solution for industries and societies. However, while previous studies on CI have addressed many aspects of the field, they lack comprehensive frameworks for using Big Data analytics in the CI processes of organizations, and integrated frameworks that apply Big Data methods to those processes still need to be developed. Advanced Big Data methods can provide real-time alerts on market fluctuations, competitors' moves and customer mobility, and translate the data into meaningful insights. However, data preprocessing is essential to ensure the quality and security of the collected data.
Despite challenges related to data sensitivity, privacy and data-sharing, organizations need to invest in CI and utilize Big Data approaches to understand real-time events, identify challenging triggers in competitors' strategies and adjust internal policies accordingly. This investment can lead to new growth opportunities, faster responses to market changes and improved strategic plans. There is a research gap between the proposed work on using Big Data analytics in CI processes and the existing literature. Previous studies have focused on different aspects of CI but lack comprehensive frameworks for utilizing Big Data methods. The proposed framework aims to fill this gap by providing organizations with a comprehensive approach to enhance their business value through Big Data analytics in CI processes. By utilizing advanced methods and ensuring data quality and security, organizations can gain a competitive edge in the rapidly changing corporate landscape.
2. Methods
2.1. Deep learning
Deep learning refers to neural networks with multiple layers. These networks learn to recognize complex patterns in data by building a hierarchy of features: lower layers detect basic features such as edges and curves, while higher layers detect more complex features, such as facial features or objects (Shi & Wang, 2018). One common type of deep neural network is the convolutional neural network (CNN), used for image recognition tasks. A CNN performs convolutions on the input data: it slides a small filter over the image and computes a dot product at each position, enabling the network to detect local patterns in the image, such as edges and corners (Rath, 2021). Neural networks and deep learning have transformed the field of artificial intelligence, enabling computers to perform complex tasks such as image recognition, natural language processing and decision-making. The mathematical foundations of these technologies are critical to understanding how they work and how they can be used effectively. By applying concepts from linear algebra, calculus and probability theory, researchers have developed increasingly sophisticated networks that learn to recognize complex patterns in data and make more sophisticated decisions.
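To illustrate this layered feature hierarchy, here is a minimal Keras sketch of a small CNN; the input shape and number of classes are assumptions made for the example, not values from the study.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A small CNN: early Conv2D layers respond to low-level features (edges,
# corners); deeper layers combine them into more complex patterns.
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),               # assumed 64x64 RGB images
    layers.Conv2D(32, (3, 3), activation="relu"),  # low-level feature maps
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),  # higher-level feature maps
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),        # assumed 10 output classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```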
2.1.1. CNN architecture
The best architecture for a CNN varies with the specific task and data. One commonly used architecture is the VGG16 model, which consists of 13 convolutional layers followed by three fully connected layers. VGG16 uses small filters (3x3) with a stride of 1 and max pooling layers (2x2) with a stride of 2. It also includes dropout layers to reduce overfitting and, in common modern variants, batch normalization layers to improve training stability. The output layer of the model is a softmax layer, used for multi-class classification tasks; the number of neurons in the output layer depends on the number of classes being predicted. Overall, the VGG16 model has been found to perform well on a variety of image classification tasks and is often used as a starting point for further customization based on specific needs (Figure 2).
Convolutional Neural Network (CNN) VGG16: The VGG16 architecture is a popular choice for CNN models. It consists of 16 layers, including 13 convolutional layers and 3 fully connected layers. The convolutional layers are stacked one after another, with 3x3 filters and stride of 1. It uses max-pooling (2x2 window with stride 2) to reduce the spatial dimensions. The fully connected layers at the end of the network are responsible for the classification task. VGG16 is widely used for image recognition due to its simplicity and effectiveness.
Convolutional Neural Network (CNN) ResNet50: ResNet50 is another widely used CNN architecture. It is deeper than VGG16, consisting of 50 layers. The key innovation in ResNet50 is the introduction of residual connections, which allow the network to learn residual functions (the difference between input and output). This helps in training very deep networks and prevents the degradation of accuracy as the network depth increases. ResNet50 performs exceptionally well in image classification tasks and has been utilized in various domains.
Convolutional Neural Network (CNN) InceptionV3: InceptionV3 is an architecture that emphasizes the use of inception modules, which combine and parallelize convolutions of multiple sizes within the same layer. This allows the network to capture information at different scales and abstract levels. InceptionV3 utilizes 48 convolutional layers and achieves excellent performance with relatively fewer parameters compared to other architectures. It has been successfully utilized for various image recognition applications, including object detection and image classification.
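All three architectures are available pretrained in `tensorflow.keras.applications`; the sketch below loads them as feature-extraction backbones (the input sizes shown are the conventional defaults, and `include_top=False` drops each network's original classification head).

```python
from tensorflow.keras.applications import VGG16, ResNet50, InceptionV3

# Load each published architecture with ImageNet weights, without the
# original classifier, so a task-specific head can be added on top.
backbones = {
    "VGG16": VGG16(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3)),
    "ResNet50": ResNet50(weights="imagenet", include_top=False,
                         input_shape=(224, 224, 3)),
    "InceptionV3": InceptionV3(weights="imagenet", include_top=False,
                               input_shape=(299, 299, 3)),
}
for name, net in backbones.items():
    print(f"{name}: {len(net.layers)} layers")
```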
2.2. Dataset
One of the key challenges facing businesses today is how to manage and analyze large amounts of data in a way that provides meaningful insights. Business intelligence (BI) tools have traditionally been used for this purpose, but with the rise of deep learning, there is now an opportunity to take this analysis to the next level. By applying deep learning techniques to business data, organizations can gain deeper insights into their operations, identify trends and patterns and make more informed decisions.
To apply deep learning to business intelligence, start with a high-quality dataset. Several types of data are used for organizational management, including the following (a toy sketch of a combined table appears after the list):
1. Financial data: This includes data on revenue, expenses, profits and losses.
2. Operational data: This includes data on production processes, inventory levels, customer service metrics and other operational metrics.
3. Customer data: This includes data on customer demographics, purchase history, satisfaction levels and other customer metrics.
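As a toy illustration (the values and column names are made up), the three categories can be joined into a single customer-level table for modelling:

```python
import pandas as pd

# Tiny illustrative rows for each data category; values are invented.
financial = pd.DataFrame({"customer_id": [1, 2], "revenue": [1200.0, 640.0]})
operational = pd.DataFrame({"customer_id": [1, 2], "avg_ship_days": [2.1, 4.5]})
customer = pd.DataFrame({"customer_id": [1, 2], "satisfaction": [4, 3]})

# One row per customer, combining financial, operational and customer data.
table = (financial
         .merge(operational, on="customer_id")
         .merge(customer, on="customer_id"))
print(table)
```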
To ensure the reliability and clarity of the collected data, the same participants were interviewed multiple times using semi-structured interviews conducted through on-site visits and conversations. A diverse range of sources was used to improve the accuracy and completeness of the picture. The sources included textual accounts of debates and discussions to strengthen confidence in the findings. This approach was adopted to ensure that a comprehensive and accurate understanding of the subject was obtained.
2.3. Participants
The study involved participants from various organizations who were responsible for overseeing business intelligence and decision-making processes. The participants were drawn from different industries including healthcare, finance, retail and manufacturing. The primary material used in this study was a deep learning model designed specifically for enhancing business intelligence in organizational management. The model was developed in the Python programming language with the help of TensorFlow, Keras and other relevant libraries. Data was collected from various sources including customer feedback, sales, financial reports, employee satisfaction surveys and social media data (Table 2).
2.4. Procedure
The study followed a four-step procedure (a condensed code sketch appears after the steps):
Step 1: Data Collection: The first step in the process involved collecting data from various sources across the organization. This included data on customer feedback, sales, financial reports, employee satisfaction surveys and social media data. The data was collected over a period of six months.
Step 2: Preprocessing: The collected data was preprocessed to ensure that it was suitable for input into the deep learning model. This included tasks such as data cleaning, normalization and feature extraction.
Step 3: Model Development: The deep learning model was developed using Python programming language with the help of TensorFlow, Keras and other relevant libraries. The model was trained on the preprocessed data to identify patterns and relationships within the data.
Step 4: Evaluation and Deployment: The final step involved evaluating the performance of the deep learning model and deploying it in the organization. The evaluation was done by comparing the predictions made by the model with the actual outcomes. The model was deployed to assist decision-makers in making informed decisions based on accurate and reliable data.
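A condensed sketch of steps 2-4 follows, under simplifying assumptions: synthetic numeric features stand in for the collected data, and a small feed-forward Keras network stands in for the study's model.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Step 2 (preprocessing): synthetic stand-in for the cleaned organizational data.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12)).astype("float32")        # 12 features, 5000 records
y = (X[:, 0] + rng.normal(size=5000) > 0).astype("int32")
X = StandardScaler().fit_transform(X)                    # normalization
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

# Step 3 (model development): train a small network on the preprocessed data.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(12,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=5, batch_size=32, verbose=0)

# Step 4 (evaluation): compare model predictions with actual outcomes.
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print(f"test accuracy: {acc:.3f}")
```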
Ethical Considerations: We ensured ethical considerations were met by obtaining informed consent from participants and ensuring that their data was kept confidential. Additionally, we followed ethical guidelines for the use of deep learning models in organizational management.
Limitations: The primary limitation of this study is the small sample size of participants drawn from various industries. This limits the generalization of the results. Additionally, we only focused on one deep learning model, limiting the exploration of other models that could enhance business intelligence in organizational management.
The dataset contains 12 variables, including the customer ID, session ID, timestamp, product ID, category ID and various other attributes related to the customer's interaction with the website. These attributes can be used to analyze customer behavior on the website, including what products customers viewed or added to their cart, how long they spent on the site and whether or not they made a purchase. This dataset is often used in research related to ecommerce customer behavior and can provide insights into how customers interact with online retailers. It can also be used to develop predictive models that can help businesses understand which customers are most likely to make a purchase and which products are most likely to be popular. Overall, the Ecommerce Customer Behavior Data provides a valuable resource for researchers and businesses interested in understanding the factors that influence customer behavior on ecommerce websites.
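As an example of how these interaction attributes can be summarized, the sketch below computes simple session-level statistics; the file name and the `event_type` column are hypothetical, chosen to match the behaviors (views, cart additions, purchases) described above.

```python
import pandas as pd

# Hypothetical export of the ecommerce behavior data described above.
events = pd.read_csv("ecommerce_events.csv", parse_dates=["timestamp"])

# Per-session summaries: time on site, distinct products viewed, purchase flag.
sessions = events.groupby("session_id").agg(
    seconds_on_site=("timestamp", lambda t: (t.max() - t.min()).total_seconds()),
    products_seen=("product_id", "nunique"),
    purchased=("event_type", lambda e: (e == "purchase").any()),
)
print("conversion rate:", sessions["purchased"].mean())
```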
3. Results
Data analysis was performed using the Python programming language and relevant libraries such as NumPy and Pandas. The data was analyzed to identify patterns and relationships, which were used to train the deep learning model.
We divided the dataset into training and testing sets; in total, it contained records for 5,000 individuals. The outcomes on the training and testing sets are reported below. We used three machine learning algorithms (random forest, support vector machine and logistic regression) together with a CNN. We first trained these four algorithms on the dataset and then built a model from each (a scikit-learn sketch of the three baselines follows).
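The sketch below shows how the three classical baselines can be trained and compared with scikit-learn; the synthetic features stand in for the study's preprocessed data, and default hyperparameters are assumed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the study's 5000 preprocessed records.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))
y = (X[:, 0] + rng.normal(size=5000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)
    print(f"{name}: accuracy {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```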
3.1. Comparison between algorithms
The figure compares the algorithms on the training dataset: the mean absolute percentage error is represented by a tall line, the median value by a rectangular box, and the mean value by a brown line within the box. From this information we can determine which algorithm is suitable for our model.
Once a model has been trained, it is tested on a separate dataset, designated as the testing set, which comprises 20% of the available data. The results of this testing phase are captured in Table 3, which includes metrics such as accuracy, precision, recall and F1 score. The detailed findings of the test-data evaluation are as follows.
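These four metrics can be computed directly with scikit-learn; a brief sketch with toy label arrays:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # toy ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # toy model predictions

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
```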
Table 3 above displays the performance metrics of the different models (Figure 3) in terms of accuracy, precision, recall and F1 score, the most important evaluation metrics in machine learning. The models included in the table are Random Forest, Support Vector Machine (SVM), Logistic Regression and three convolutional neural networks (CNNs): VGG16, ResNet50 and InceptionV3.
Starting with the Random Forest model, it achieved an accuracy of 57.07%. Accuracy refers to the overall correctness of the model's predictions. Alongside this, the precision of the Random Forest model is 62%, which signifies the ratio of correctly predicted positive observations to the total predicted positive observations. Moreover, the recall rate for this model is 57%, indicating the ratio of correctly predicted positive observations to the actual positive observations in the dataset. Lastly, the F1 score for the Random Forest model stands at 53%, which is a balanced measure combining precision and recall.
Moving on to the Support Vector Machine (SVM) model, it achieved a higher accuracy of 64.50% compared to the Random Forest model. SVM also demonstrated a precision rate of 65%, implying a high number of correctly predicted positive observations. Additionally, the recall rate for SVM is also 65%, indicating a good number of true positive predictions. Consequently, the F1 score for SVM is also 65%, which shows a favorable balance between precision and recall.
Next in the table is the Logistic Regression model, which achieved an accuracy of 63.63%. Logistic Regression's precision rate stands at 64%, indicating a high number of correctly predicted positive observations. Similarly, the recall rate for Logistic Regression is also 64%, suggesting a reasonable number of true positive predictions. The F1 score for this model is 64%, indicating a balanced measure of precision and recall.
Moving on to the three CNN models, the first one, CNN (VGG16), achieved the highest accuracy among all the models, standing at 88.32%. This demonstrates a highly accurate prediction capability. Nonetheless, the precision rate for this model is 84%, signifying a good number of correctly predicted positive observations. Additionally, the recall rate for VGG16 is 85%, indicating a high number of true positive predictions. Consequently, the F1 score for this CNN model is 86%, emphasizing the balanced measure of precision and recall.
The second CNN model, CNN (ResNet50), achieved an accuracy rate of 83.12%. Despite having a slightly lower accuracy than VGG16, the ResNet50 model demonstrated a higher precision rate of 86.8%, indicating a greater number of correctly predicted positive observations. However, the recall rate for ResNet50 is 82.2%, suggesting a lower number of true positive predictions compared to VGG16. The F1 score for ResNet50 is 84%, which still shows a good balance between precision and recall.
Lastly, the third CNN model, CNN (InceptionV3), achieved an accuracy rate of 83%. While its accuracy is similar to Resnet50, InceptionV3 demonstrated a precision rate of 85%, indicating a good number of correctly predicted positive observations. Nonetheless, the recall rate for this model is 82%, suggesting a lower number of true positive predictions compared to the precision. Consequently, the F1 score for InceptionV3 is 81%, which denotes a slightly lower balance between precision and recall compared to the other CNN models.
The results of our experiment indicate that accuracy on the training and testing datasets is very similar for the machine learning methods, but the CNN models deliver higher accuracy and precision. The CNN algorithm performed better than the other algorithms, achieving an accuracy of around 88% on the datasets used. This suggests that the CNN algorithm will provide more precise predictions about BI and KM. Our main objective in this paper was to develop a model that accurately classifies customers, and we expect this final model to deliver appropriate and reliable results. We used the Keras package to obtain visualizations of the training and test loss and accuracy via the History callback. This callback is registered automatically when training deep learning models and records various metrics, including training and test accuracy (for classification problems) as well as loss. These metrics are stored in a dictionary within the history object returned by the fit function used to train the model.
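For reference, a minimal sketch of plotting those recorded curves, continuing the pipeline sketch above (`model`, `X_train`, `y_train`, `X_test` and `y_test` are assumed to be defined there):

```python
import matplotlib.pyplot as plt

# fit() returns a History object whose .history dict holds the metric curves.
history = model.fit(X_train, y_train,
                    validation_data=(X_test, y_test),
                    epochs=20, verbose=0)

plt.plot(history.history["accuracy"], label="train accuracy")
plt.plot(history.history["val_accuracy"], label="test accuracy")
plt.plot(history.history["loss"], label="train loss")
plt.plot(history.history["val_loss"], label="test loss")
plt.xlabel("epoch")
plt.legend()
plt.show()
```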
4. Conclusions
In this article, one of the most important issues in the field of knowledge management has been discussed. Although social media is one of the most important tools for disseminating knowledge, users often fail to benefit from its useful information. We presented a method to improve the dissemination of information, consisting of several steps: data collection, data preprocessing, feature extraction and model learning. The learning method used is deep learning. To evaluate the model, two complementary checks were applied: measuring the quality of the extracted features and evaluating the dataset in terms of accuracy and F-measure.
In summary, the results provide a comparison of different models' performance metrics in terms of accuracy, precision, recall and F1 score. Based on the results depicted in the table, the CNN model using VGG16 architecture outperformed all the other models with the highest accuracy and a balanced F1 score. The SVM model also showed commendable performance across all metrics. However, it is important to note that these metrics alone may not fully capture the model's actual suitability for specific tasks, and other factors such as time complexity, interpretability and data requirements should also be considered while selecting a model.
We demonstrate the potential of using deep learning models to enhance business intelligence in organizational management. The results suggest that deep learning models can be trained on various sources of data to identify patterns and relationships within the data. These models can assist decision-makers in making informed decisions based on accurate and reliable data. Future research should explore the use of other deep learning models and expand the sample size to improve the generalization of the findings.
The study's data were limited, encompassing a small sample size and a specific dataset, which limits the generalizability of the findings. To overcome this limitation, future research should aim to collect more diverse and extensive datasets. Also, deep learning models are often considered black-box models because they can be challenging to interpret; this lack of interpretability can make it difficult to understand the reasoning behind a model's decisions. Researchers should investigate methods to enhance interpretability, such as explainable AI techniques. Deep learning models, especially those with complex architectures like VGG16, can have high computational requirements and long training times, which may make them impractical for real-time or resource-constrained applications. To address this limitation, researchers should explore optimization techniques or alternative models that offer faster inference. The article mentions data preprocessing as one step in the method but does not elaborate on the specific challenges faced or the techniques used; further research should provide more detail on the preprocessing techniques employed to ensure the reliability and accuracy of the model. Finally, deep learning models can inadvertently perpetuate and amplify existing biases in the data. It is crucial to address fairness and bias in both the dataset and the model to ensure equitable decision-making; future studies should explore strategies such as bias detection, data augmentation or model regularization to mitigate bias.
Researchers should aim to collect more varied and extensive datasets from different sources to improve the model's generalization and capture a broader range of patterns and relationships within the data. Researchers should explore techniques, such as using explainable AI methods or leveraging interpretable models alongside deep learning models, to improve the transparency and understandability of the decision-making process. Researchers should investigate optimization techniques or consider alternative models with a balance between accuracy and computational efficiency. This can help reduce time complexity and make the model more practical for real-time or resource-constrained applications. Future studies should provide a detailed description of the data preprocessing techniques used. This documentation will help ensure transparency and reproducibility, allowing other researchers to replicate and validate the findings.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Conflict of interest
All authors declare no conflicts of interest in this paper.