Autosoft Journal

Online Manuscript Access



Listing 1191 manuscripts in the database


Adaptive Image Enhancement using Hybrid Particle Swarm Optimization and Watershed Segmentation

by N. Mohanapriya, B. Kalaavathi
Abstract

Medical images come straight from acquisition devices, so the image quality is often poor and the images may contain noise. Low contrast and poor quality are the major issues in the production of medical images. Medical image enhancement technology offers a way to solve these issues; it helps doctors see the interior of the body for early diagnosis and improves the visual features of an image for a correct diagnosis. This paper proposes a new blend of Particle Swarm Optimization (PSO) and Accelerated Particle Swarm Optimization (APSO), called Hybrid Particle Swarm Optimization (HPSO), to enhance medical images and deliver optimal results. The work starts with (i) watershed segmentation, followed by (ii) the HPSO enhancement algorithm. Watershed segmentation is a morphological, gradient-based transformation technique: the gradient map of an image has different gradient values corresponding to different heights. It extracts the continuous boundaries of each region to give solid results and intuitively performs well on noisy images. After segmentation, the HPSO algorithm is applied to improve the quality of Computed Tomography (CT) images by calculating local and global information, and the transformation function uses this information to optimize the medical image. The algorithm is tested on a real-time data set of CT images collected from the MIT-BIH dataset, and its performance is analyzed and compared with existing Region Merging (RM), Fuzzy C-Means (FCM), Histogram Thresholding, Discrete Wavelet Transformation (DWT), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Histogram Equalization (HE), Contrast Stretching and Adaptive Filtering methods based on PSNR, SSIM, CII, MSE, RMSE, BER and execution time. The experimental results show that the proposed medical image enhancement algorithm achieves 96.7% accuracy and overcomes the over-segmentation problem of existing systems.
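As a rough illustration of the two-stage pipeline described in this abstract (gradient-based watershed segmentation followed by swarm-optimized enhancement), the following Python sketch pairs scikit-image's watershed with a plain PSO search over the parameters of a toy contrast transform. The transform, fitness function, parameter ranges and test image are illustrative assumptions; the paper's HPSO velocity update and CT data are not reproduced here.

```python
# Minimal sketch (not the authors' code): watershed segmentation followed by a
# PSO-style search for contrast-enhancement parameters. The fitness function,
# parameter ranges and the simple stretch/gamma transform are illustrative
# assumptions, not the HPSO transformation used in the paper.
import numpy as np
from skimage import data, exposure, filters, segmentation

def enhance(img, alpha, gamma):
    """Toy transform: contrast stretch blended with gamma correction."""
    stretched = exposure.rescale_intensity(img, in_range="image", out_range=(0.0, 1.0))
    return np.clip(alpha * stretched + (1 - alpha) * stretched ** gamma, 0, 1)

def fitness(img):
    """Higher histogram entropy ~ richer detail (one common enhancement criterion)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 1))
    hist = hist / hist.sum()
    hist = hist[hist > 0]
    return -np.sum(hist * np.log2(hist))

img = data.camera() / 255.0                    # stand-in for a CT slice
gradient = filters.sobel(img)                  # gradient map for watershed
segments = segmentation.watershed(gradient, markers=250, compactness=0.001)
# (in the full method, `segments` would supply region-wise local statistics)

# Plain PSO over (alpha, gamma); a hybrid PSO/APSO variant would modify the
# velocity update, which is omitted here for brevity.
rng = np.random.default_rng(0)
low, high = np.array([0.0, 0.5]), np.array([1.0, 2.5])
pos = rng.uniform(low, high, size=(20, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([fitness(enhance(img, *p)) for p in pos])
gbest = pbest[pbest_fit.argmax()]
for _ in range(30):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    fit = np.array([fitness(enhance(img, *p)) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()]

enhanced = enhance(img, *gbest)
```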

Online Article

A New Rockburst Experiment Data Compression Storage Algorithm based on Big Data Technology

by Yu Zhang, Hong-Wei Ding, Yan-Ping Bai, Yong-Zhen Li, Zhao-Yong Lv, Yan-Ge Wang
Abstract

Rockburst is a phenomenon in which rock bursts out and is ejected because the surrounding mineral has been dug out and the original force balance destroyed during mineral exploitation. Since 2007, GeoLab (short for the State Key Laboratory in China for GeoMechanics and Deep Underground Engineering) has made a series of important achievements in rockburst research. Up to now, GeoLab's rockburst experiment data has reached 800 TB, and these data may occupy about 2 PB of hard disk space after analysis. At this rate, GeoLab needs to buy a new hard disk every 46 hours of rockburst experiments to save all the data. Since there is not enough hard disk space, GeoLab has had to slow down its rockburst experiments and analyze only about 4 percent of the data. We call this phenomenon the data storage dilemma, and it has hindered research progress on rockburst. We propose a structure for obtaining data from a cloud platform based on big data technology, and on this basis we analyze the distribution characteristics, frequency and frequency domain of rockburst experiment data. A new rockburst experiment data compression storage algorithm (NDCS) based on big data technology and the cloud platform is then proposed. We compare NDCS with WinRAR and BDSS in terms of occupied disk space, compression ratio and consumed time. Theoretical analysis and experiments show that NDCS has the best performance of the three algorithms; it is the most suitable data compression storage algorithm for rockburst and successfully resolves the data storage dilemma in rockburst experiments.

Online Article

Applying Probabilistic Model Checking to Path Planning in an Intelligent Transportation System Using Mobility Trajectories and Their Statistical Data

by Honghao Gao, Wanqiu Huang, Xiaoxian Yang
Abstract

Path planning is an important topic of research in modern intelligent traffic systems (ITSs). Traditional path planning methods aim to identify the shortest path and recommend this path to the user. However, the shortest path is not always optimal, especially in emergency rescue scenarios. Thus, complex and changeable factors, such as traffic congestion, road construction and traffic accidents, should be considered when planning paths. To address this consideration, the maximum passing probability of a road is considered the optimal condition for path recommendation. In this paper, the traffic network is abstracted as a directed graph. Probabilistic data on traffic flow are obtained using a mobile trajectory-based statistical analysis method. Subsequently, a probabilistic model of the traffic network is proposed in the form of a discrete-time Markov chain (DTMC) for further computations. According to the path requirement expected by the user, a point probability pass formula and a multiple-target probability pass formula are obtained. Probabilistic computation tree logic (PCTL) is used to describe the verification property, which can be evaluated using the probabilistic symbolic model checker (PRISM). Next, based on the quantitative verification results, the maximum probability path is selected and confirmed from the set of K-shortest paths. Finally, a case study of an emergency system under real-time traffic conditions is shown, and the results of a series of experiments show that our proposed method can effectively improve the efficiency and quality of emergency rescue services.
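The path-selection step described here (scoring K-shortest candidate paths by their pass probability and recommending the maximum) can be illustrated with a small Python sketch. The toy road graph, trajectory counts and the use of networkx in place of the PRISM/PCTL model-checking workflow are assumptions for illustration only.

```python
# Illustrative sketch: estimate per-road pass probabilities from trajectory
# counts, enumerate K shortest paths, and pick the one with the highest
# product of pass probabilities. The network and statistics are hypothetical.
from itertools import islice
import networkx as nx

G = nx.DiGraph()
# (u, v, length_km, passed_trips, total_trips) -- hypothetical statistics
roads = [("A", "B", 2.0, 90, 100), ("B", "D", 3.0, 40, 100),
         ("A", "C", 2.5, 80, 100), ("C", "D", 2.0, 75, 100),
         ("B", "C", 1.0, 95, 100)]
for u, v, length, passed, total in roads:
    G.add_edge(u, v, weight=length, p_pass=passed / total)

def path_pass_probability(path):
    """Multi-target pass formula: product of per-edge pass probabilities."""
    prob = 1.0
    for u, v in zip(path, path[1:]):
        prob *= G[u][v]["p_pass"]
    return prob

K = 3
k_shortest = list(islice(nx.shortest_simple_paths(G, "A", "D", weight="weight"), K))
best = max(k_shortest, key=path_pass_probability)
print(best, path_pass_probability(best))
```

On this toy network the recommended route is not the shortest one, which mirrors the paper's argument that the shortest path is not always optimal.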

Online Article

LSTM Neural Network for Beat Classification in ECG Identity Recognition

by Xin Liu, Yujuan Si, Di Wang
Abstract

As a biological signal existing in the living human body, the electrocardiogram (ECG) contains abundant personal information and fulfils the basic characteristics required for identity recognition, and it has been widely used in individual identification research in recent years. The common identity-recognition process includes three steps: ECG signal preprocessing, feature extraction and processing, and beat classification. However, existing ECG classification models are sensitive to the type of database and the dimension of the extracted features, which makes classification accuracy difficult to improve and unable to meet the needs of practical applications. To tackle this problem, this paper builds an ECG individual recognition model based on a deep Long Short-Term Memory (LSTM) neural network. The LSTM network has a memory cell and is therefore well suited to handling long ECG time series; with deeper learning, the nonlinear expression ability of the ECG beat classification model is gradually enhanced. The paper adopts two stacked LSTM layers as the hidden layers of the neural network, and a Softmax layer is used as the classification layer to identify the individual. Then, low-level morphological features and deep-level chaotic features (the Lyapunov exponent) are extracted to verify the feasibility of the deep LSTM network for classification. The model is applied to a database of healthy subjects and to a database of subjects with heart disease. Experimental results show that both the simple low-level features and the chaotic features achieve good classification performance, verifying the robustness of the LSTM classification model.
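A minimal sketch of the stacked-LSTM-plus-Softmax classifier described above, written with tf.keras; the beat length, layer sizes, number of identities and the random stand-in data are assumptions, not the paper's configuration.

```python
# Two stacked LSTM layers followed by a Softmax identity classifier (sketch).
# Shapes, unit counts and the dummy data are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

BEAT_LEN, N_FEATURES, N_SUBJECTS = 250, 1, 10   # e.g. 250-sample beats, 10 people

model = tf.keras.Sequential([
    layers.Input(shape=(BEAT_LEN, N_FEATURES)),
    layers.LSTM(64, return_sequences=True),          # first stacked LSTM layer
    layers.LSTM(32),                                 # second LSTM layer
    layers.Dense(N_SUBJECTS, activation="softmax"),  # identity classification
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy beats stand in for preprocessed ECG segments (one beat per sample).
x = np.random.randn(320, BEAT_LEN, N_FEATURES).astype("float32")
y = np.random.randint(0, N_SUBJECTS, size=320)
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
```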

Online Article

Detecting Android Inter-App Data Leakage Via Compositional Concolic Walking

by Tianjun Wu, Yuexiang Yang
Abstract

While many research efforts have focused on auditing individual Android apps, the security issues arising from the interaction among multiple apps are less studied. Due to the hidden nature of Inter-App communications, few existing security tools are able to detect such vulnerable behaviors. This paper proposes to perform overall security auditing using dynamic analysis techniques. We focus on data leakage, as it is one of the most common vulnerabilities in Android applications. We present an app auditing system, AppWalker, which uses concolic execution on a set of apps. We use static Inter-App taint analysis to guide the dynamic auditing procedure so that we can target potential Inter-App data leakage. To mitigate the exponential blow-up when auditing various combinations of apps, we introduce a novel technique called compositional concolic walking. At the end of the audit, the event and data inputs created during concolic walking are fed to the app set. By dynamically checking the triggered data-leaking behavior, we can then confirm the existence of Inter-App data leakage. AppWalker takes into account both intra- and inter-app communications and is, to our knowledge, the first research work on dynamic auditing of inter-app vulnerabilities in a path-sensitive way. Experimental results reveal that our method can effectively detect real-world Inter-App data leakage.

Online Article

SEM-based Research on Influence Factors of Energy Conservation in Operation and Maintenance of Construction Project

by Liang Zhao, Wenshun Wang, Wei Zhang
Abstract

The energy consumption in the operation and maintenance stage of a construction project accounts for 80% of the whole project life cycle, so research on the factors influencing energy consumption in this stage is of great practical significance. Based on data from 260 valid questionnaires in the Jiangsu area, a structural equation model is adopted in this study for empirical research on the influence factors in the operation and maintenance stage. Based on theoretical analysis and factor analysis, a conceptual model and research hypotheses for the factors influencing energy consumption in the operation and maintenance stage are proposed, a structural equation model is established, and an empirical test of the conceptual model is carried out. The research results show that: (1) government policy has obvious positive effects on the selection and adoption of energy-saving technology as well as on the energy-conservation will of project participants; (2) public cognition has obvious effects on energy-conservation will, energy-saving technology has obvious effects on facility management, and facility management has obvious effects on users' cognition; (3) the direct influences of government policy on energy-saving technology and users' cognition do not reach statistical significance, and neither do the direct influences of facility management and energy-saving technology on energy-conservation will.

Online Article

Genetic Algorithm and Tabu Search Memory with Course Sandwiching (GATS_CS) for University Examination Timetabling

by Abayomi-Alli Adebayo, Sanjay Misra, Luis Fernández-Sanz, Abayomi-Alli Olusola
Abstract

Educational institutions use timetables to maximize and optimize scarce resources such as time and space when scheduling classes, lectures, examinations or events, which makes university timetable scheduling a complicated constraint problem. This study aims to develop an examination timetable system for a university to solve the problems associated with the present manual technique, such as unallocated courses, course clashes, course duplication, multiple examinations per hall, and elongated and laborious reviews. A hybrid of meta-heuristic procedures, Genetic Algorithm (GA) and Tabu Search (TS) memory, was employed along with course sandwiching (GATS_CS) to develop an improved automatic university examination timetable solution. For implementation, a case study of a public university in Nigeria was used. The Genetic Algorithm concepts of selection and evaluation were implemented, while the memory properties of TS and course sandwiching were used to replace the genetic operators of crossover and mutation. Course allocation and hall scheduling were optimized based on constraints defined for the case-study university, and GATS_CS was implemented in Java. Results showed that GATS_CS allocated 96.07% and 99.02% of the total courses to exam halls, leaving 3.93% and 0.98% unallocated, for the first and second semesters respectively. In several instances it also automatically sandwiched (scheduled) multiple examinations into a single hall without exceeding the hall's exam sitting capacity. GATS_CS also outperformed Particle Swarm Optimization and Local Search (PSO_LS) and Genetic Algorithm (GA) based timetable implementations when benchmarked with the same timetable dataset. As future directions, the system could be improved to reduce clashes, duplication and multiple examinations and to accommodate more system-defined constraints.
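The following toy Python sketch illustrates the general idea of combining GA-style selection and evaluation with a tabu memory in place of crossover and mutation, on a tiny exam-to-slot assignment. The data, fitness function (student clashes only) and move operator are illustrative assumptions; the paper's GATS_CS additionally handles halls, capacities and course sandwiching.

```python
# Toy sketch: GA-style selection/evaluation with a tabu memory replacing
# crossover and mutation, on a tiny exam-to-slot assignment. Exams, slots,
# registrations and the clash-only fitness are illustrative assumptions.
import random

random.seed(1)
EXAMS = ["MTH101", "PHY102", "CSC201", "STA211", "GNS111"]
SLOTS = 3
STUDENTS = {"s1": {"MTH101", "PHY102"}, "s2": {"MTH101", "CSC201"},
            "s3": {"STA211", "GNS111"}, "s4": {"CSC201", "GNS111"}}

def clashes(assign):
    """Number of students with two exams scheduled in the same slot."""
    return sum(len({assign[e] for e in exams}) < len(exams)
               for exams in STUDENTS.values())

def perturb(assign):
    """Move one randomly chosen exam to a random slot."""
    new = dict(assign)
    new[random.choice(EXAMS)] = random.randrange(SLOTS)
    return new

population = [{e: random.randrange(SLOTS) for e in EXAMS} for _ in range(10)]
tabu = set()                                   # memory of visited assignments
for _ in range(50):
    population.sort(key=clashes)               # evaluation
    parents = population[:5]                   # selection (truncation)
    children, attempts = [], 0
    while len(children) < 5 and attempts < 100:
        attempts += 1
        child = perturb(random.choice(parents))
        key = tuple(sorted(child.items()))
        if key in tabu:                        # tabu memory skips revisits
            continue
        tabu.add(key)
        children.append(child)
    population = parents + children

best = min(population, key=clashes)
print(best, "clashes:", clashes(best))
```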

Online Article

Developing a Holistic Model for Assessing the ICT Impact on Organizations: A Managerial Perspective

by Farrukh Saleem, Naomie Salim, Abdulrahman Altalhi, Abdullah Al-Malaise Al-Ghamdi, Zahid Ullah, Noor ul Qayyum
Abstract

Organizations are increasingly dependent on Information and Communication Technology (ICT) resources. The main purpose of this research is to help organizations maintain the quality of their ICT projects based on the evaluation criteria presented here. The paper follows several steps to support the methodology. Firstly, an experimental investigation is conducted to explore the value-assessment criteria an organization may realize from ICT projects such as information systems, enterprise systems and IT infrastructure. Secondly, the investigation builds on empirical data collected and analyzed from respondents of six case studies, using a questionnaire based on the findings of the literature review. Finally, the paper proposes the development of a holistic model for assessing the business value of ICT from a managerial point of view based on the measured factors. The study contributes to this field practically and theoretically, as the literature has not shown a holistic approach that uses eight distinct dimensions for assessing ICT impact on business value. It combines previous research in a manner that extends the dimensions for measuring ICT business value. The model has shown its significance for managers and ICT decision makers in aligning business strategies with ICT strategies. The findings suggest that ICT positively supports business processes and several other business-value dimensions. The proposed holistic model and the identified factors can be useful for managers to measure the impact of emerging ICT on business and organizational values.

Volume: 25, Issue: 2

Intrusion Detection and Anticipation System (IDAS) for IEEE 802.15.4 Devices

by Usman Tariq
Abstract

Wireless Sensor Networks (WSNs) enable observation of the environment at an extraordinary resolution. These systems are a combination of many tiny, low-cost, low-power on-chip transceiver sensing motes. Typically, a sensing device comprises four key components: a sensing element for data acquisition, a microcontroller for local data processing, a communication module to permit the transmission/reception of data to/from other associated hardware, and, lastly, a small energy source. The near-field frequency range and limited bandwidth of the transceiver lead to multi-hop data transactions at the minimum achievable requirements. State-of-the-art operating systems such as TinyOS (Levis, et al. 2005), Contiki (Dunkels, et al. 2004), MANTIS (Bhatti, et al. 2005) and Nano-RK (Eswaran, et al. 2005) offer facilities that also bring new opportunities for attackers to compromise the hardware and the data kept on it. This comes alongside the rise of mobile malware, which is projected to pose a serious risk in the near future. Consequently, researchers are regularly looking for solutions to handle these newly introduced threats, and a need for smart and useful defense mechanisms, such as an Intrusion Detection and Anticipation System (IDAS), is a compulsory consideration. Nevertheless, while considerable effort has been devoted to mobile intrusion detection systems, research on anomaly-oriented or behavior-oriented IDS has been limited, leaving some problems unresolved. The proposed IDS method is presented and assessed in the framework of the contemporary literature and is capable of sensing novel but undocumented malware or illicit use of services. This is accomplished by offering continuous authentication to guarantee genuine use of the hardware and to avert risks via a smart authentication and non-repudiation response method. This is validated by the experimental outcomes, which confirm the effectiveness of the proposed methodology.

Volume: 25, Issue: 2

A Novel Privacy-preserving Multi-attribute Reverse Auction Scheme with Bidder Anonymity using Multi-server Homomorphic Computation

by Wenbo Shi, Jiaqi Wang, Jinxiu Zhu, YuPeng Wang, Dongmin Choi
Abstract

With the further development of the Internet, the decision-making ability of smart services is growing stronger, and the electronic auction is receiving attention as one form of decision system. In this paper, a secure multi-attribute reverse auction protocol without a trusted third party is proposed. It uses the Paillier public key cryptosystem, which is homomorphic, combined with oblivious transfer and anonymization techniques. A single auction server can easily collude with a bidder; to solve this problem, the single auction server is replaced with multiple auction servers. The proposed scheme uses the multiple auction servers to calculate the attributes under encryption protection and finally obtains the value of a linear additive score function. Since the attributes are calculated under the protection of encryption, the scheme achieves privacy-preserving winner determination with bid privacy. Furthermore, the proposed scheme uses oblivious transfer and anonymization techniques to achieve bidder anonymity. According to the security analysis, the major properties, bidder anonymity and a reduced possibility of collusion, are provided under the semi-honest model. A comparison of computation shows that the proposal's computation cost is reasonable.
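The additively homomorphic scoring step at the heart of such a scheme can be sketched with the python-paillier library (phe): an auction server combines encrypted attribute values into a linear additive score without learning the bid. The attribute values and weights below are illustrative, and the multi-server, oblivious-transfer and anonymization parts of the protocol are omitted.

```python
# Sketch of homomorphic linear additive scoring with python-paillier ("phe").
# Attribute values and weights are illustrative assumptions; the full protocol
# in the paper also uses multiple servers, oblivious transfer and anonymization.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# A bidder encrypts its attribute values (e.g. price, delivery time, quality).
attributes = [120, 7, 85]
ciphertexts = [public_key.encrypt(v) for v in attributes]

# An auction server computes the weighted additive score under encryption:
# ciphertexts can be multiplied by public plaintext weights and added together
# without the server ever seeing the underlying bid.
weights = [-1, -3, 2]                      # buyer's (public) preference weights
terms = [c * w for c, w in zip(ciphertexts, weights)]
encrypted_score = terms[0]
for t in terms[1:]:
    encrypted_score = encrypted_score + t

# Only the holder of the private key learns the aggregate score, not the bid.
print(private_key.decrypt(encrypted_score))   # -> -1*120 + -3*7 + 2*85 = 29
```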

Volume: 25, Issue: 1

The Design and Implementation of a Multidimensional and Hierarchical Web Anomaly Detection System

by Jianfeng Guan, Jiawei Li, Zhongbai Jiang
Abstract

Traditional web anomaly detection systems face challenges from constantly evolving web attacks, which result in high false positive rates, poor adaptability, easy over-fitting, and high time complexity. Due to these limitations, a new anomaly detection system is needed to satisfy the requirements of enterprise-level anomaly detection. There are many anomaly detection systems designed for different application domains; however, web anomaly detection has to describe network access behavior characteristics from as many dimensions as possible to improve performance. In this paper we design and implement a Multidimensional and Hierarchical Web Anomaly Detection System (MHWADS) with the objectives of high performance, low latency, multi-dimensionality and adaptability. MHWADS calculates statistical characteristics and constructs the corresponding statistical model, detects behavior characteristics to generate multidimensional correlation eigenvectors, and adopts several classifiers to build an ensemble model. The system performance is evaluated on a realistic dataset, and the experimental results show that MHWADS yields substantial improvements over the previous single models. More importantly, using 2-fold stacking as the ensemble architecture, the detection precision and recall are 0.99988 and 0.99647, respectively.
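A minimal scikit-learn sketch of a 2-fold stacking ensemble in the spirit of MHWADS's ensemble layer is shown below; the synthetic, imbalanced features stand in for the multidimensional correlation eigenvectors and are assumptions for illustration.

```python
# 2-fold stacking over multiple base classifiers (illustrative sketch).
# Synthetic features stand in for the multidimensional request eigenvectors.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)      # imbalanced: few anomalies
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("dt", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(),
    cv=2,                                        # 2-fold stacking as in the paper
)
stack.fit(X_tr, y_tr)
pred = stack.predict(X_te)
print("precision:", precision_score(y_te, pred), "recall:", recall_score(y_te, pred))
```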

Volume: 25, Issue: 1

Surgical Outcome Prediction in Total Knee Arthroplasty using Machine Learning

by Belayat Hossain, Takatoshi Morooka, Makiko Okuno, Manabu Nii, Shinichi Yoshiya, Syoji Kobashi
Abstract

This work aimed to predict the postoperative knee functions of a new patient prior to total knee arthroplasty (TKA) surgery using machine learning, because such prediction is essential for surgical planning and for patients to better understand the TKA outcome. The main difficulty is determining the relationships among individual variations of preoperative and postoperative knee kinematics. The problem was solved by constructing predictive models from the knee kinematics data of 35 osteoarthritis patients operated on with a posterior stabilized implant, based on generalized linear regression (GLR) analysis. Two prediction methods (GLR without and with preceding principal component analysis), along with their sub-classes, were proposed and evaluated by a leave-one-out cross-validation procedure. The best method can predict the postoperative outcome of a new patient with a Pearson's correlation coefficient (cc) of 0.84 ± 0.15 (mean ± SD) and a root-mean-squared error (RMSE) of 3.27 ± 1.42 mm for anterior-posterior vs. flexion/extension (A-P pattern), and a cc of 0.89 ± 0.15 and an RMSE of 4.25 ± 1.92° for valgus-varus vs. flexion/extension (V-V pattern). Although these results were validated for one type of prosthesis, they could be applicable to other implants, because the definition of knee kinematics, measured by a navigation system, is appropriate for other implants.
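The "PCA followed by regression, validated leave-one-out" idea can be sketched with scikit-learn as below. The synthetic kinematic curves and the use of plain linear regression in place of the paper's GLR variants are assumptions for illustration.

```python
# PCA + linear regression evaluated with leave-one-out cross-validation.
# The 35 x 90 synthetic kinematics arrays are stand-ins for the real data.
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
pre = rng.normal(size=(35, 90))                            # preoperative pattern
post = 0.8 * pre + rng.normal(scale=0.5, size=pre.shape)   # postoperative pattern

model = make_pipeline(PCA(n_components=5), LinearRegression())
pred = cross_val_predict(model, pre, post, cv=LeaveOneOut())

rmse = np.sqrt(np.mean((pred - post) ** 2, axis=1))        # per-patient RMSE
cc = np.array([pearsonr(p, t)[0] for p, t in zip(pred, post)])
print(f"cc = {cc.mean():.2f} ± {cc.std():.2f}, RMSE = {rmse.mean():.2f} ± {rmse.std():.2f}")
```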

Volume: 25, Issue: 1

An Efficient Optimized Handover in Cognitive Radio Networks using Cooperative Spectrum Sensing

by H. Anandakumar, K. Umamaheswari
Abstract

Cognitive radio systems necessitate cooperative spectrum sensing among cognitive users to increase the reliability of detection. We have found that cooperative spectrum sensing is not only advantageous but essential to avoid interference with any primary users. Interference with licensed users becomes a chief concern, affecting primary as well as secondary users and restricting spectrum sensing in cognitive radios. When the number of cognitive users increases, the overhead of reporting the sensing results to the common receiver becomes massive. When the spectrum in use becomes unavailable, or the licensed user takes the allocated band, these networks have the capability of changing their operating frequencies. In addition, cognitive radio networks have the unique capability of sensing the spectrum range and detecting any spectrum that has been left underutilized; this ability to recognize the spectrum range from the detected measurements allows determination of the band that may be utilized. The main objective of this paper is to analyze the cognitive radio's spectrum sensing ability and to develop a self-configured system with dynamic intelligence that causes no interference to the primary user. The paper also focuses on a quantitative analysis of two spectrum sensing techniques, namely Energy Detection and Band-Limited White Noise Detection. The estimation technique for detecting spectrum noise is based on the probability of detection and the probability of false alarm at different Signal-to-Noise Ratio (SNR) levels using an Additive White Gaussian Noise (AWGN) signal. The proposed cooperative CUSUM spectrum sensing algorithm performs better than existing optimal rules based on single-observation spectrum sensing techniques under cooperative networks.
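A Monte-Carlo sketch of the energy detection technique mentioned above is given below: the threshold is fixed from a target false-alarm probability on noise-only energies, and the detection probability is then estimated at a chosen SNR over an AWGN channel. Sample counts, SNR and the false-alarm target are illustrative assumptions; cooperative fusion across users is not included.

```python
# Monte-Carlo energy detection over AWGN (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
N, TRIALS, SNR_DB, PFA_TARGET = 200, 5000, -5, 0.1
snr = 10 ** (SNR_DB / 10)

noise = rng.normal(size=(TRIALS, N))                     # H0: noise only
signal = np.sqrt(snr) * rng.normal(size=(TRIALS, N))     # primary-user signal
energy_h0 = np.sum(noise ** 2, axis=1)
energy_h1 = np.sum((signal + rng.normal(size=(TRIALS, N))) ** 2, axis=1)

threshold = np.quantile(energy_h0, 1 - PFA_TARGET)       # fix Pfa, derive threshold
pd = np.mean(energy_h1 > threshold)
pfa = np.mean(energy_h0 > threshold)
print(f"Pfa ~ {pfa:.3f}, Pd ~ {pd:.3f} at SNR = {SNR_DB} dB")
```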

Volume: 24, Issue: 4

Performance Analyses of Nature-inspired Algorithms on the Traveling Salesman’s Problems for Strategic Management

by Julius Beneoluchi Odili, Mohd Nizam Mohmad Kahar, A. Noraziah, M. Zarina, Riaz Ul Haq
Abstract

This paper carries out a performance analysis of major nature-inspired algorithms in solving the benchmark symmetric and asymmetric Travelling Salesman's Problems (TSP). Knowledge of the workings of the TSP is very useful in strategic management, as it provides useful guidance to planners. Critical assessments were made of the performance of eleven algorithms consisting of two heuristics (Randomized Insertion Algorithm and the Honey Bee Mating Optimization for the Travelling Salesman's Problem), two trajectory algorithms (Simulated Annealing and Evolutionary Simulated Annealing) and seven population-based optimization algorithms (Genetic Algorithm, Artificial Bee Colony, African Buffalo Optimization, Bat Algorithm, Particle Swarm Optimization, Ant Colony Optimization and Firefly Algorithm) in solving 60 of the 118 popular and complex benchmark symmetric Travelling Salesman's optimization problems as well as all 18 asymmetric Travelling Salesman's Problem test cases available in TSPLIB91. The study reveals that the African Buffalo Optimization and the Ant Colony Optimization are the best in solving the symmetric TSP, which is similar to the intelligence-gathering channel in the strategic management of big organizations, while the Randomized Insertion Algorithm holds the best promise for asymmetric TSP instances, akin to strategic information exchange channels in strategic management.
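As a small illustration of the simplest of the compared heuristics, the sketch below builds a tour by randomized insertion: cities are taken in random order and each is inserted at the cheapest position. The random coordinates are a stand-in for TSPLIB instances.

```python
# Randomized-insertion construction heuristic for the TSP (illustrative data).
import math
import random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(30)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

order = list(range(len(cities)))
random.shuffle(order)
tour = order[:3]                          # start from a random triangle
for c in order[3:]:
    # insert city c where it increases the tour length the least
    best_pos = min(range(len(tour)),
                   key=lambda i: dist(cities[tour[i]], cities[c])
                              + dist(cities[c], cities[tour[(i + 1) % len(tour)]])
                              - dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]]))
    tour.insert(best_pos + 1, c)

length = sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
             for i in range(len(tour)))
print("tour length:", round(length, 3))
```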

Volume: 24, Issue: 4

Analysis of Collaborative Brain Computer Interface (BCI) based personalized GUI for differently abled

by M. Uma, T. Sheela
Abstract

Brain-Computer Interfaces (BCI) use Electroencephalography (EEG) signals recorded from the scalp, enabling communication between a human and the outside world. The present study helps locked-in patients manage their needs, such as accessing web URLs, sending/receiving SMS to/from a mobile device, a personalized music player, a personalized movie player, wheelchair control and home appliance control. In the proposed system, the user's needs are designed as buttons arranged in a matrix, in which the rows and columns of buttons on the main panel flash at 3-second intervals. Subjects were asked to choose the desired task/need from the main panel of the GUI by blinking their eyes twice. The double eye-blink signals, extracted using the bio-sensor of the NeuroSky MindWave device with portable EEG sensors, are used as the command signal. Each task is designed and implemented in MATLAB. The developed personalized GUI application, working with the EEG device, accesses the user's need: once the system identifies the desired option through the input control signal, the appropriate algorithm is called and executed. Users can also locate the next required option within the matrix and therefore easily navigate through the GUI model. A list of personalized music, movies, books and web URLs is preloaded in the database. Hence, the system could be suitable for assisting disabled people and improving their quality of life. Analysis of variance (ANOVA) is also carried out to find the significant signals influencing a user's need, in order to improve the motion characteristics of the brain-computer interface based system.

Volume: 24, Issue: 4

Association Link Network based Concept Learning in Patent Corpus

by Wei Qin, Xiangfeng Luo
Abstract

Concept learning has attracted considerable attention as a means to tackle the problem of representing and learning corpus knowledge. In this paper, we investigate the challenging problem of automatically constructing a patent concept learning model. Our model consists of two main processes: the acquisition of an initial concept graph and the refinement of that graph. The learning algorithm for the patent concept graph is designed based on the Association Link Network (ALN). Because a concept is usually described by multiple documents, to enable ALN to be used in concept learning we propose mixture-ALN, which, compared with ALN, adds links between the document and lexical levels. Then, a heuristic algorithm is proposed to refine the concept graph, learning a more concise and simpler representation of the concept. The heuristic algorithm consists of four phases. Firstly, to simplify the bag of words for a concept in the patent corpus, we select core nodes from the initial concept graph. Secondly, to learn the association rules for the concept, we search for important association rules around the core nodes in our rule collection. Thirdly, to ensure coherent semantics of the concept, we select the corresponding documents based on the selected association rules and words. Finally, to enrich the semantics of the refined concept, we iteratively select core nodes based on the corresponding documents and restart the heuristic algorithm. In the experiments, our model shows effectiveness and improved prediction accuracy in patent retrieval tasks.

Volume: 24, Issue: 3

The Data Analyses of Vertical Storage Tank using Finite Element Soft Computing

by Lin Gao, Mingzhen Wang
Abstract

With the rapid development of the petrochemical industry, the number of large-scale oil storage tanks has increased significantly, and many storage tanks are located in potentially seismic regions. It is necessary to analyze the seismic response of oil storage tanks, since their damage in an earthquake can lead to serious disasters and losses. In this paper, three models of vertical cylindrical oil storage tanks of different sizes, commonly used in practical engineering, are established. The dynamic characteristics, sloshing wave height and hydrodynamic pressure of the tanks, considering the liquid-structure coupling effect, are analyzed using the ADINA finite element software and compared with the results of the standard method. The close agreement between the two sets of results verifies the correctness and reliability of the finite element model. The analytical results show that the liquid sloshing wave height is basically in direct proportion to the peak ground acceleration, and that the standard method's sloshing wave height calculation is not conservative. The hydrodynamic pressure generated by liquid sloshing caused by ground motion is not negligible compared with the hydrostatic pressure, and the tank radius and oil height have a significant effect on its value. The ratio of hydrodynamic to hydrostatic pressure, named the hydraulic pressure increase coefficient, is related to the height; the values given by the GB 50341-2014 code in China have high reliability. The seismic performance of the tank wall near the bottom needs to be enhanced and improved in the seismic design of oil tanks.

Volume: 24, Issue: 3

The SLAM Algorithm for Multiple Robots based on Parameter Estimation

by MengYuan Chen
Abstract

As the number of map feature points increases, the dimension of the system observation grows gradually, which leads to deviation of the cubature points from the desired trajectory and significant errors in the state estimation. An Iterative Squared-Root Cubature Kalman Filter (ISR-CKF) algorithm is proposed to improve the SR-CKF algorithm for simultaneous localization and mapping (SLAM). By introducing an iterative updating method, the sample points are re-determined from the estimated value and the square root factor, which keeps the distortion small in highly nonlinear environments and further improves the precision. A robust tracking Square-Root Cubature Kalman Filter algorithm (STF-SRCKF-SLAM) is also proposed to solve the problem of reduced accuracy under state changes in SLAM. The algorithm first makes predictions according to the kinematic model and observation model of the mobile robot, and then updates itself by propagating the square root of the error covariance matrix directly, which greatly reduces the computational complexity. At the same time, a time-varying fading factor is introduced in the prediction and update processes, and the corresponding weight of the data is adjusted in real time to improve the accuracy of multi-robot localization. Simulation results show that the algorithm can effectively improve the accuracy of multi-robot pose estimation.

Volume: 24, Issue: 3

The Virtual Prototype Model Simulation on the Steady-state Machine Performance

by Huanyu Zhao, Guoqiang Wang, Shuai Wang, Ruipeng Yang, He Tian, Qiushi Bi
Abstract

Articulated tracked vehicles have high mobility and steering performance. Their unique structure can avoid the subsidence of tracks caused by the high traction of instantaneous braking and steering. To improve the accuracy of steady-state steering of an articulated tracked vehicle, the velocities of the tracks on both sides and the deflection angle of the articulation point need to be better matched, so as to steer accurately and reduce energy consumption and component wear. In this study, a virtual prototype model of the articulated tracked vehicle is established in the multi-body dynamics software RecurDyn. The trends of the driving torque and power of each track as the velocity difference between the two tracks changes, together with the trajectory of the mass center of the front vehicle under a specific condition, are obtained by experiment. The experimental results are compared with and verified against the results of the virtual prototype simulation. The variation of driving power during steady-state steering on horizontal firm ground, as the velocity difference between the tracks, the theoretical steering radius and the ground friction change, is obtained by the virtual prototype simulation analysis. Steering inaccuracy and track slip rate are used as indexes for evaluating the steady-state steering performance of the articulated tracked vehicle. The research provides references for the study of steady-state steering performance of articulated tracked vehicles.

Volume: 24, Issue: 3

Big Data based Self-Optimization Networking: A Novel Approach Beyond Cognition

by Amin Mohajer, Morteza Barari, Houman Zarrabi
Abstract

It is essential to satisfy class-specific QoS constraints to provide broadband services in new-generation wireless networks. A self-optimization technique is introduced as the only viable solution for controlling and managing this type of huge data network. This technique allows control of resources and key performance indicators without human intervention, based solely on network intelligence. The present study proposes a big-data-based self-optimization networking (BD-SON) model for wireless networks in which the KPI parameters affecting QoS are controlled through a multi-dimensional decision-making process. A Resource Management Center (RMC) is used to allocate the required resources to each part of the network based on the decisions made in the SON engine, which can satisfy the QoS constraints of a multicast session in which satisfying interference constraints is the main challenge. A load-balanced gradient power allocation (L-GPA) scheme is also applied to the QoS-aware multicast model to accommodate the effect of the transmission power level based on link capacity requirements. Experimental results confirm that the proposed power allocation technique considerably increases the chances of finding an optimal solution and that the proposed model achieves significant gains in quality of service and capacity along with low complexity and load-balancing optimality in the network.

Volume: 24, Issue: 2

Comparative study of prey predator algorithm and firefly algorithm

by Hong Choon Ong, Surafel Luleseged Tilahun, Wai Soon Lee, Jean Meadard T Ngnotchouye
Abstract

Metaheuristic algorithms are found to be promising for difficult and high-dimensional problems. Most of these algorithms are inspired by natural phenomena, and hundreds of them have been introduced and used. The introduction of new algorithms has been one of the issues researchers have focused on over the past fifteen years. However, there is a criticism that some of the new algorithms are not in fact new in terms of their search behavior. Hence, comparative studies between existing algorithms are needed to highlight their differences and similarities. Apart from revealing the similarities and differences in the search mechanisms of these algorithms, such studies also help to set criteria for when to use them. In this paper a comparative study of the prey predator algorithm and the firefly algorithm is presented, supported by simulation results on twenty selected benchmark problems with different properties. The Mann-Whitney U test is used to compare the algorithms statistically. The theoretical as well as simulation results support that the prey predator algorithm is a more generalized search algorithm, whereas the firefly algorithm is a special case of the prey predator algorithm obtained by fixing some of its parameters to certain values.
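The statistical comparison step can be illustrated in a few lines of Python with scipy's Mann-Whitney U test; the best-fitness samples below are made-up stand-ins for the recorded runs of the two algorithms on one benchmark function.

```python
# Mann-Whitney U comparison of two sets of best-fitness results (made-up data).
from scipy.stats import mannwhitneyu

prey_predator_best = [0.012, 0.009, 0.015, 0.011, 0.010, 0.013, 0.008, 0.014]
firefly_best       = [0.021, 0.017, 0.019, 0.025, 0.018, 0.022, 0.020, 0.016]

stat, p_value = mannwhitneyu(prey_predator_best, firefly_best,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")   # small p -> significant difference
```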

Volume: 24, Issue: 2

The challenge of the Paris Agreement to contain climate change

by E. Grigoroudis, F. Kanellos, V. S. Kouikoglou, Y. A. Phillis
Abstract

Climate change due to anthropogenic CO2 and other greenhouse gas emissions has had, and will continue to have, widespread negative impacts on human society and natural ecosystems. Drastic and concerted actions should be undertaken immediately if such impacts are to be prevented. The Paris Agreement on climate change aims to limit the rise in global mean temperature to below 2 °C compared with the pre-industrial level. Using simulation and optimization tools and the most recent data, this paper investigates optimal emissions policies satisfying certain temperature constraints. The results show that only if we consider negative emissions coupled with drastic emissions reductions could temperature be stabilized at about 2.5 °C; otherwise higher temperatures could occur. To this end, two scenarios are developed based on the national emissions reduction plans of China and the USA. According to the simulation results, the objective of keeping the temperature rise under 2 °C cannot be met. Clearly, negative emissions are needed if the Paris targets are to be given a chance of success. However, the feasibility of negative emissions mainly depends on technologies not yet developed. Reliance on future technological breakthroughs could very well prove unfounded and provide excuses for continued carbon releases, with possibly severe and irreversible climate repercussions. Thus, the Paris Agreement needs immediate amendments that will lead to stronger mitigation and adaptation commitments if it is to stay close to its goals.

Volume: 24, Issue: 2

Middleware for Internet of Things: Survey and Challenges

by Samia Allaoua Chelloug, Mohamed A. El-Zawawy
Abstract

Internet of Things (IoT) applications span many potential fields, and smart homes, smart cities, smart vehicular networks and healthcare are very attractive and intelligent applications. In most of these applications, the system consists of smart objects equipped with sensors and Radio Frequency Identification (RFID) and may rely on other computing technologies and paradigms such as M2M (machine-to-machine) computing, Wi-Fi, WiMAX, LTE and cloud computing. Thus, the IoT vision foresees a shift from traditional sensor networks to pervasive systems that deliver intelligent automation by running services on objects. Significant attention has been given to designing middleware that supports many features: heterogeneity, mobility, scalability, multiplicity and security. This paper reviews the state-of-the-art techniques for IoT middleware systems and presents a classification of these systems into service-oriented and agent-oriented systems. Two visions have thus emerged for providing IoT middleware: designing the middleware for an IoT system as an ecosystem of services or as an ecosystem of agents. The most common feature of the two approaches is the ability to overcome heterogeneity issues, while the agent approach additionally provides context awareness and intelligent elements. The review presented in this paper includes a detailed comparison between the IoT middleware approaches. The paper also explores challenges that form directions for future research on IoT middleware systems. Some of the challenges arise because some crucial features are not provided (or at most partially provided) by existing middleware systems, while others have not yet been tackled by current research in IoT.

Volume: 24, Issue: 2

A Hybrid Modular Context-Aware Services Adaptation For A Smart Living Room

by Moeiz Miraoui, Sherif El-etriby, Chakib Tadj, Abdulbasit Zaid Abid
Abstract

Smart spaces have attracted a considerable amount of interest over the past few years. The introduction of sensor networks, powerful electronics and communication infrastructures has helped greatly in the realization of smart homes. The main objective of smart homes is the automation of tasks that might be complex or tedious for inhabitants, relieving them of the need to set and configure home appliances. Such automation can improve comfort, energy savings and security, and bring tremendous benefits to elderly persons living alone or persons with disabilities. Context awareness is a key enabling feature for the development of smart homes: it allows the automation task to be done proactively, according to the inhabitant's current context, in an unobtrusive and seamless manner. Although several works have been conducted on the development of smart homes with various technologies, in most cases robust, the context-awareness aspect of service adaptation has not been based on clear steps for extracting context elements (or on a clear definition of context). In this paper, we use the divide-and-conquer approach to master the complexity of the automation task by proposing a hybrid modular system for context-aware service adaptation in a smart living room. For the context-aware adaptation we propose to use three machine learning techniques, namely Naïve Bayes, fuzzy logic and case-based reasoning, according to their suitability.

Volume: 24, Issue: 2

Research on Application of Location Technology in 3D Virtual Environment Modelling System for Substation Switch Indicator

by Lijuan Qin, Ting Wang, Chen Yao
Abstract

Substation inspection plays a very important role in ensuring the normal production and safe operation of a transformer substation. Using a substation inspection robot can effectively solve the omission problems of manual inspection, further improve the unmanned operation and automation of the substation, and improve station security. A substation inspection system requires the establishment of a spatial position relation between the robot and the inspection object in the monitored station, and the key technology is the spatial location of the object being detected, whose position is unknown. This paper gives the technical route of a monocular vision positioning system that uses the switch indicator sign as a cooperative target. The knife switch indicator is one of the most important objects in substation inspection; it usually lies in the middle or on top of the substation equipment, where conventional methods cannot measure it. This paper presents a substation label measurement method based on monocular vision, in which the label frame of the knife switch indicator serves as a cooperative target. The 3D coordinates and attitude of the label frame can be calculated in the camera coordinate system. The paper also introduces an extraction process for the knife switch indicator label using multiple features under a complex background. Finally, we perform a simulation experiment on the positioning technology of the 3D virtual environment modeling system for the substation switch indicator. Simulation results show that the method introduced in this paper can realize positioning of the substation switch indicator label.
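The pose-recovery step described above (computing the 3D coordinates and attitude of the label frame from a single calibrated image) is essentially a perspective-n-point problem, sketched below with OpenCV's solvePnP. The label dimensions, camera intrinsics and detected corner pixels are illustrative assumptions, not values from the paper.

```python
# Recover the pose of a rectangular label frame from one calibrated image (PnP).
# Label size, intrinsics and corner pixels are illustrative assumptions.
import cv2
import numpy as np

# Known label-frame geometry: a 60 x 40 mm rectangle in its own coordinates.
object_points = np.array([[0, 0, 0], [60, 0, 0], [60, 40, 0], [0, 40, 0]],
                         dtype=np.float32)
# Corner pixels as they would come from the multi-feature label extraction step.
image_points = np.array([[320, 240], [400, 245], [398, 300], [318, 296]],
                        dtype=np.float32)
camera_matrix = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]],
                         dtype=np.float32)
dist_coeffs = np.zeros(5)                       # assume an undistorted image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)                      # attitude of the label frame
print("translation (mm):", tvec.ravel())        # 3D coordinates in camera frame
print("rotation matrix:\n", R)
```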

Volume: 24, Issue: 1

Design of a System to Generate a Four Quadrant Signal at High-Frequency

by Yichi Zhang, Hongbin Wang
Abstract

To study biological cells, a well-established physical method can be used, namely electrorotation. An electrorotation system always needs a signal oscillator or signal generator, which generates phase-shifted signals and transfers them to the electrorotation chamber. The output frequency range of a typical signal generator is 20 Hz to 100 MHz, but in the higher range of 100 MHz to 1 GHz the signal generator is hard to control and the linearity of the output signal is not good enough for electrorotation. Designing a signal generator that covers this high frequency range is therefore indispensable and benefits the study of biological cells in a high-frequency environment. The project began with research into available signal generator architectures, such as the Phase-Locked Loop (PLL) and Direct Digital Synthesis (DDS), after which the final system with a high-frequency output signal was designed. The frequency range of the output signal is 100 MHz to 1.35 GHz, and the phase shift between the four output signals is 90 degrees. The final system is based on an analogue circuit; all of the system blocks are designed in the Cadence Virtuoso software using 0.35 µm CMOS technology. This will affect the collection, processing and storage of the large amounts of data produced over the entire process.

Volume: 24, Issue: 1

Meteorological correction model of IBIS-L System in the Slope Deformation Monitoring

by Xiaoqing Zuo, Hongchu Yu, Chenbo Zi, Xiaokun Xu, Liqi Wang, Haibo Liu
Abstract

The micro-deformation monitoring system IBIS-L, which uses a high-frequency microwave signal for transmission, is easily affected by meteorological conditions. How to effectively eliminate the meteorological influence and extract useful information from the large volume of data becomes the key to monitoring slope deformation with high precision using the IBIS-L system. This paper considers the evaluation of the optimum meteorological correction model for slope deformation monitoring to ensure measurement accuracy. This objective was realized through model construction, which uses the calculation formula for the microwave refraction rate and the radial distance from the target point to the IBIS-L system to estimate the spurious displacement caused by meteorological influence. We examine the feasibility and accuracy of the meteorological correction model through experimental analysis, taking slope monitoring at the Nuozhadu hydropower station as an example. Firstly, temperature, humidity, air pressure and other meteorological parameters were measured simultaneously with the IBIS-L monitoring. Secondly, the measured meteorological parameters were substituted into the calculation formula for the microwave refraction rate. Thirdly, combined with the radial distance from the target point to the IBIS-L system, the meteorological correction model for slope deformation monitoring with the IBIS-L system was established.
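A rough sketch of the kind of correction described above is given below: the microwave refractivity is computed from temperature, pressure and humidity, and the apparent (spurious) displacement of a target at a given radial distance is estimated from the refractivity change between two acquisitions. The constants come from the textbook Smith-Weintraub and Magnus formulas, and the linear correction form is an assumption; they are not necessarily the authors' exact model.

```python
# Assumption-laden sketch of a meteorological correction for radar monitoring:
# textbook refractivity and saturation-vapor-pressure formulas, not the paper's.
import math

def refractivity(temp_c, pressure_hpa, rel_humidity):
    """Radio refractivity N (in N-units), Smith-Weintraub form."""
    t_k = temp_c + 273.15
    e_sat = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))  # hPa (Magnus)
    e = rel_humidity / 100.0 * e_sat                               # vapor pressure
    return 77.6 / t_k * (pressure_hpa + 4810.0 * e / t_k)

def apparent_displacement(n_before, n_after, radial_distance_m):
    """Spurious range change caused by the refractivity difference."""
    return (n_after - n_before) * 1e-6 * radial_distance_m

n1 = refractivity(temp_c=18.0, pressure_hpa=880.0, rel_humidity=60.0)
n2 = refractivity(temp_c=25.0, pressure_hpa=878.0, rel_humidity=45.0)
d_false = apparent_displacement(n1, n2, radial_distance_m=1500.0)
print(f"apparent displacement to subtract: {d_false * 1000:.2f} mm")
```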

Volume: 24, Issue: 1

A Cooperative Dynamic Cluster in Multitasking Mobile Networks

by SHUFANG XU, Dazhuan Xu, Yingchi Mao, Huibin Wang
Abstract

The cluster, as a classical paradigm, has often been used in multitasking mobile networks and has evolved into the dynamic cluster, motivated by tasks with dynamic scales and nodal locations. An important question is how to combine the power of multiple members in a dynamic cluster and enhance the overall performance of the network. The almost universal setting is that most clusters are based on a fully connected network and tasks of the same level. To address these questions, the cooperation between nodes in a dynamic cluster should be evaluated and developed, and a corresponding cooperation model proposed. In this paper, the theoretical analysis and the basic models of the cluster and the dynamic cluster are introduced first. Then an improved cooperation model for a dynamic cluster in multitasking mobile networks, named the cooperative dynamic cluster, is proposed. As the main elements of the proposed model, network connectivity and task priorities are also considered and discussed. Finally, a cooperative message forwarding mechanism is established to settle the cluster, whose scale should be bounded at a reasonable level in view of the time cost. Simulation results demonstrate that the efficiency of our proposed cooperative model is about 14% and 66% higher than that of the general dynamic cluster and the normal method without clustering, respectively.

Volume: 23, Issue: 4

Oral health promotion program for fostering self-management of the elderly living in communities

by Reiko Sakashita, Misao Hamada, Takuichi Sato, Yuki Abiko, Miho Takami
Abstract

Objectives: A program fostering self-management for the elderly was implemented, and the effects of the program and their continuity were assessed. Methods: Subjects consisted of 19 males and 131 females (average age, 73.1 ± 7.4; range, 60-94 years). The intervention program consisted of collective experience learning and private consultation. The collective experience learning included: (1) monitoring the oral condition and practicing oral self-care, (2) monitoring oral function and practicing oral exercises, and (3) group discussion on continuing oral self-care. Outcomes were evaluated at the beginning and the end of the intervention program, and three months afterwards, by scores in: (1) oral self-care; (2) oral condition, i.e., decayed teeth, community periodontal index (CPI), deposits of plaque and dental calculus; (3) oral function, such as RSST and oral diadochokinesis; (4) QOL (SF-8 v2 and GOHAI); and (5) cognitive function (MMSE-J). Informed consent was obtained from all subjects, and this study was approved by the Research Ethics Committee of the University of Hyogo. Results and Discussion: At three months after the intervention, 124 subjects continued participating and 88 subjects (71%) completed all data. Regarding oral self-care, subjects cleaned their teeth more often and for longer than before (P < 0.001). The use of dental floss and interdental brushing significantly increased (P < 0.001), and 67 participants (54%) visited the dentist during the program. CPI and deposits of plaque were significantly reduced after the intervention (P < 0.001). The scores of oral function also significantly improved (P < 0.001-0.05), as did the scores of QOL (physical health), oral QOL and cognitive function (P < 0.001-0.05). These results suggest that this program not only promotes oral self-care, resulting in good oral health conditions, but also improves the cognitive functions of the elderly.

Volume: 23, Issue: 3

Availability modeling for multi-tier cloud environment

by G Nalinipriya, K G Maheswari, Balamurugan Balusamy, K Kotteswari, Arun Kumar Sangaiah
Abstract

Performance modeling is an essential process for evaluating cloud quality. Cloud performance appraisal processes and methods differ widely from proven performance-related methodologies used in domains such as computer networks, distributed computing and operating systems. A multi-tier cloud is a scalable system in which many services or tiers can be constructed for all types of applications. The quality of service of a multi-tier cloud environment is closely associated with several factors, such as dependability, availability, reliability, security and performability, and each of these performance entities directly or indirectly influences the overall functioning of the cloud. There are many models for evaluating cloud performance and quality, but these traditional models are not efficient enough and consider only certain primary parameters in their evaluation. In this paper, a high-level performance analysis model is proposed that can predict the availability of a multi-tier cloud environment. As a multi-tier cloud deals with multiple services according to their application, the prediction of availability is an essential part of performance modeling. The prominent parameters that influence the performance of the cloud system are also considered in the proposed model. The proposed algorithm is experimentally verified by means of the SHARPE tool.

Volume: 23, Issue: 3

Broker based trust architecture for federated healthcare cloud system

by K. Mohan, M Aramudhan
Abstract

Cloud healthcare is a challenging, exponentially growing and heterogeneous technical environment involving various roles, such as medical practitioners, pharmacists, patients and IT professionals, who access information through different technologies and types of devices. In this paper, a federated healthcare cloud broker architecture is proposed in which independent health service providers can be integrated into a large healthcare federation that fulfills users' requirements by sharing the available resources at different locations at affordable prices and provides Quality of Service (QoS) to all end users. Different trust mechanisms, such as policy-based trust, SLA verification trust and reputation-based trust, are computed to ensure the security and privacy of the users accessing services in the proposed architecture. Service Measurement Index (SMI) attributes suggested by the Cloud Service Index Measurement Consortium (CSIMC) are used to calculate the trust value of each health provider; based on the calculated trust value, a provider is selected for the specific request, which helps to improve reliability, security and privacy. To resolve the strict treatment of the differentiated module, a new mechanism called Patient Turned Queuing Scheduling (PTQS) is suggested to eliminate the possibility of starvation occurring in the existing differentiated modules. Simulation results show that the performance of the proposed mechanism is better than the existing random provider selection and feedback-based selection mechanisms in the related works.

Volume: 23, Issue: 3

Detection of heart disorders using an advanced intelligent swarm algorithm

by Sara Moein, Rajasvaran Logeswaran, Mohammad Faizal bin Ahmad Fauzi
Abstract

The electrocardiogram (ECG) is a well-known diagnostic tool applied by cardiologists to diagnose cardiac disorders. Despite the simple shape of the ECG, each recording contains various informative measures, which makes it complex for cardiac specialists to recognize the heart problem. Recent studies have concentrated on designing automatic decision-making systems to assist physicians in ECG interpretation and in detecting disorders from ECG signals. This paper applies an optimization algorithm known as Kinetic Gas Molecule Optimization (KGMO), which is based on the swarm behavior of gas molecules, to train a feedforward neural network for the classification of ECG signals. Five types of ECG signals are used in this work: normal, supraventricular, bundle branch block, anterior myocardial infarction (anterior MI), and inferior myocardial infarction (inferior MI). The classification performance of the proposed KGMO neural network (KGMONN) was evaluated on the Physiobank database and compared against conventional algorithms. The obtained results show that the proposed neural network outperformed the Particle Swarm Optimization (PSO) and back-propagation (BP) neural networks, with an accuracy of 0.85 and a Mean Square Error (MSE) of less than 20% for the training and test sets. The swarm-based KGMONN provides a successful approach for the detection of heart disorders with efficient performance.

Volume: 23, Issue: 3

A smart home system based on embedded technology and face recognition technology

by Hong Ai, Tongliang Li
Abstract

A smart home module program is designed based on Linux, in the ARM11 embedded development environment, using the UP-Magic 6410 development board. A touch-screen graphical interface for human-computer interaction is designed in QT. The sensor module is integrated into the underlying QT program, and the actuator module program is invoked externally via QT, so that a complete smart home terminal is constituted. Remote video monitoring and the display of module statuses on a webpage are realized by connecting video cameras to the development board and by constructing a GoAhead web server and an Spcaserv video server. A face recognition program is designed in MATLAB, a database storing the face information of different users is built, and both are connected to the smart home terminal. Based on face recognition, the identity information of the current user and the desired temperature are displayed, and the speed of the DC motor is changed. The system can also raise sound and light alarms, and an alarm for harmful gas can be raised with the help of a smoke sensor. A photosensitive sound switch module is used to simulate the voice control device, so as to facilitate the user's control of the DC motor switch, and an infrared sensor is used to simulate identity-card identification equipment, so as to restrict the system's users. Through the coordination of the above parts, a complete smart home system composed of a computer, the embedded development environment, sensors, actuators and ancillary devices is finally established.

Volume: 23, Issue: 3

Performance Improvement of Hardware/Software Architecture for Real-Time Bio Application Using MPSoC

by Raveendran Arun Prasath, Parasuraman Ganesh Kumar, Erulappan Sakthivel
Abstract

In biomedical applications, high-performance chip architectures have attracted considerable attention in both the market and research. A particularly demanding task for biomedical engineers is the monitoring and analysis of heart disease, in which Electrocardiography (ECG) plays a vital role. ECG examination is computationally intensive, particularly in real time, where the 12 lead signals must be analyzed in parallel at increasing sampling frequencies. A further challenge is the examination of the large amounts of data produced as recording times grow. Currently, doctors inspect 12-lead ECG paper records by eye, which can severely harm analysis accuracy. In earlier work, researchers introduced multi-processor system-on-chip architectures to parallelize the ECG evaluation kernel; however, the conventional architecture is complex, which degrades performance. Building on that line of work, a Hardware-Software (HW/SW) Multi-Processor System-on-Chip (MPSoC) is introduced in this proposed work. The proposed method starts its design methodology from the specifications of the 12-lead ECG application and proceeds to the final HW/SW architecture. The computational complexity is greatly reduced, and a corresponding performance improvement is achieved.

Volume: 23, Issue: 2

The Effect of Neighborhood Selection on Collaborative Filtering and a Novel Hybrid Algorithm

by Musa Milli, Hasan Bulut
Abstract

Recommender systems are widely used in industry and are still active research areas in academia. For many businesses, they have become indispensable business tools. Producing accurate results for such systems is important for the operations of the businesses. For this reason, various algorithms and approaches have been developed for recommender systems to increase the prediction accuracy. Collaborative filtering is one of the most successful approaches. In collaborative filtering, in order to predict more accurately, it is recommended to determine the user's active neighbors. The k-nearest neighbor (k-NN) algorithm is one of the most widely used neighbor selection algorithms. However, the k-NN algorithm uses a fixed k value, which reduces the accuracy of the prediction. In this paper, we present two novel approaches to increase the prediction accuracy of recommender systems: the k%-nearest neighbor (k%-NN) algorithm to determine the appropriate k value for a user, and a hybrid algorithm that combines a collaborative filtering technique and a content-based approach. Our test results demonstrate that the k%-NN algorithm increases the average prediction accuracy compared to the traditional k-NN algorithm. Additionally, when the proposed hybrid algorithm is used with k%-NN, it produces more accurate results than the conventional collaborative filtering technique and content-based approach.
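
As a rough sketch of the k%-NN idea described above (choosing k per user as a fixed percentage of that user's candidate neighbors rather than a global constant), the following hypothetical example predicts a rating as a similarity-weighted average over the selected neighbors. The percentage value, similarity measure and data are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity over items both users have rated (0 = unrated)."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    va, vb = a[mask], b[mask]
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(va @ vb / denom) if denom else 0.0

def predict_kpct(R, user, item, k_pct=0.2):
    """Predict R[user, item] using the top k% most similar users who rated the item."""
    candidates = [u for u in range(R.shape[0]) if u != user and R[u, item] > 0]
    if not candidates:
        return 0.0
    sims = np.array([cosine_sim(R[user], R[u]) for u in candidates])
    # k%-NN: neighborhood size is a percentage of this user's candidate pool.
    k = max(1, int(round(k_pct * len(candidates))))
    top = np.argsort(sims)[::-1][:k]
    w = sims[top]
    ratings = np.array([R[candidates[i], item] for i in top])
    return float(ratings.mean()) if w.sum() == 0 else float(w @ ratings / w.sum())

# Hypothetical user-item rating matrix (0 = unrated).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 4, 4],
              [0, 1, 5, 4]], dtype=float)
print(predict_kpct(R, user=0, item=2))
```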

Volume: 23, Issue: 2

Class Attendance Management System Using NFC Mobile Devices

by Mohamed Mohandes
Abstract

Monitoring students' class attendance in any educational institution is an important process as it is directly linked to academic performance. Collecting student attendance manually results in a loss of precious time and delays the subsequent processing of the collected data. In order to help faculty members concentrate on teaching, a solution is proposed for automating, monitoring, and further processing attendance collection. This paper describes a prototype of a Class Attendance Management System (CAMS) that has been developed and evaluated using an NFC-enabled mobile device and an NFC (or RFID) tag/card. The system helps school/university faculty take attendance in class using their mobile phones in a quick and simple way, thus saving precious classroom time. Faculty can monitor students' attendance throughout an academic term, issue warnings, and request withdrawal of a student due to poor attendance as per the policy of the institution. The application on the NFC-enabled phone reads a student ID by simply tapping the phone against an NFC student ID card. The application depends on NFC-enabled Android devices to read the student campus card and extract the ID number to be used as the student identifier in CAMS. The developed system has been evaluated at King Fahd University of Petroleum and Minerals, Saudi Arabia, during two academic terms. Positive feedback has been obtained from faculty and administration.

Volume: 23, Issue: 2

A Data Mining Approach to the Analysis of a Catering Lean Service Project

by Wen-Tsao Pan, Yungho Leu, Wenzhong Zhu, Wei-Yuan Lin
Abstract

A lean service project aims to reduce the cost of material, labor and time required in providing a service to a customer so as to promote the customer's service satisfaction. This paper presents a data mining approach to analyze the effectiveness of a lean service project on a catering service provided by a university restaurant. We designed three consecutive stages of service scenarios, each representing an improvement over its previous stage. We first applied grey relational analysis to confirm the effectiveness of the lean service project; that is, stages two and three actually obtained higher service satisfaction from customers than their corresponding previous stages did. We then performed a quantile regression analysis to explore the effect of different factors on the low and high quantiles of service satisfaction, which suggests different ways for the restaurant to improve its customers' service satisfaction. Finally, we built several prediction models to forecast the service satisfaction (Poor or Good) of a service sample. The experimental results showed that, among the eight prediction models, FOAGRNN is the best in terms of the sensitivity, specificity, AUC and Gini performance measures.
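
Quantile regression of the kind mentioned above can be run with statsmodels; the sketch below fits the 0.1 and 0.9 quantiles of a satisfaction score against two hypothetical service factors to see whether they act differently on dissatisfied versus satisfied customers. The variable names and data are placeholders, not the study's survey items.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
# Hypothetical lean-service factors and a satisfaction score on a 1-5 scale.
df = pd.DataFrame({
    "wait_time": rng.uniform(5, 30, n),        # minutes from order to serving
    "staff_courtesy": rng.uniform(1, 5, n),    # survey rating
})
df["satisfaction"] = (4.5 - 0.06 * df["wait_time"]
                      + 0.3 * df["staff_courtesy"]
                      + rng.normal(0, 0.4, n)).clip(1, 5)

# Fit a low and a high quantile of satisfaction.
for q in (0.1, 0.9):
    res = smf.quantreg("satisfaction ~ wait_time + staff_courtesy", df).fit(q=q)
    print(f"q={q}:", res.params.round(3).to_dict())
```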

Volume: 23, Issue: 2

Detection of architectural distortion in mammograms using geometrical properties of thinned edge structures

by Rekha Lakshmanan, Shiji T.P., Suma Mariam Jacob, Thara Pratab, Chinchu Thomas, Vinu Thomas
Abstract

The proposed method detects Architectural Distortion, the most commonly missed breast cancer symptom. The method rests on the analysis of geometrical properties of abnormal patterns that correspond to Architectural Distortion in mammograms. Pre-processing methods are employed to eliminate the Pectoral Muscle (PM) region from the mammogram and to localize possible centers of Architectural Distortion. Regions that are candidates to contain centroids of Architectural Distortion are identified using a modification of the isotropic SUSAN filter. Edge features are computed in these regions using Phase Congruency and are thinned using Gradient Magnitude Maximization. From these thinned edges, relevant edge structures are retained based on three geometric properties: eccentricity, to retain near-linear structures; the perpendicular distance from each such structure to the centroid of the edges; and the quadrant support membership of these edge structures. Features for classification are generated from these three properties, and a feed-forward neural network, trained using a combination of backpropagation and a metaheuristic algorithm based on Cuckoo search, is employed to classify the suspicious regions identified by the modified filter as normal or malignant. Experimental analyses were carried out on mammograms obtained from the standard MIAS and DDSM databases as well as on images obtained from Lakeshore Hospital in Kochi, India. The classification step yielded sensitivities of 89%, 89.8.7% and 97.6% and specificities of 90.9%, 85% and 96.7% on 60 images from MIAS, 100 images from the DDSM database and 100 images from Lakeshore Hospital, respectively.

Volume: 23, Issue: 1

Study on cluster analysis characteristics and classification capabilities – a case study of satisfaction regarding hotels and bed & breakfasts of Chinese tourists in Taiwan

by Seng-Su Tsang, Wen-Cheng Wang, Hao-Hsiang Ku
Abstract

Cluster analysis is a multivariate statistical analysis method for the classification of samples based on the principle of "like attracts like". It classifies samples according to their characteristics in a reasonable manner and without any reference mode; in other words, classification is implemented without any prior knowledge. It has been applied in many fields. In this paper, four cluster analysis methods are used to study questionnaire data on Chinese tourists' satisfaction regarding Taiwan's hotels and Bed & Breakfasts (B&Bs). First, this study applied principal component analysis to reduce the questionnaire variables, and then grey relational analysis to assess the overall satisfaction performance. By sorting the overall satisfaction performance values, the performance values combined with the principal components were used as the testing sample data. Afterwards, the samples were categorized into three categories and four categories according to performance value. The four cluster analysis methods were used to cluster the principal components in order to observe their cluster performance and classification capabilities. The testing results suggested that GK Cluster can achieve good cluster performance and good classification capabilities.

Volume: 23, Issue: 1

Improving efficiency of heterogeneous multi relational classification by choosing efficient classifiers using ratio of success rate and time

by Amit Thakkar, Y P Kosta
Abstract

Traditional data mining algorithms do not work efficiently for most real-world applications, where the data is stored in relational format. Useful patterns can certainly be extracted from multiple relations using an existing traditional data mining learning algorithm, but this involves a lot of complexity. There is therefore a need for multi relational classification, which analyzes relational data and predicts unknown patterns automatically. Moreover, the performance of existing relational classifiers is limited, because the existing algorithms are not able to use different classifiers based on the characteristics of different relations. The goal of the proposed approach is to select appropriate classifiers based on the characteristics of the different relations in the relational database to improve the overall performance without affecting the running time. A multi-criteria classifier selection function based on the ratio of accuracy to running time is used to select the most efficient classifier using Meta Learning. In the proposed classifier selection function, accuracy is used as a measure of benefit and running time is used as a measure of cost, and their ratio is taken to ensure that the efficient classifier is selected. The experimental results show that the performance of the proposed relational classification is better in terms of efficiency when compared to the other existing algorithms in the literature. We are able to achieve the best results by selecting an efficient algorithm for every relation contributing to the relational classification.
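
The selection idea described above (accuracy as benefit, running time as cost, and their ratio as the score) can be sketched as follows; the candidate classifiers, timing method and per-relation loop are illustrative assumptions rather than the authors' exact meta-learning procedure.

```python
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

def efficiency_score(clf, X, y):
    """Ratio of cross-validated accuracy (benefit) to elapsed time (cost)."""
    start = time.perf_counter()
    acc = cross_val_score(clf, X, y, cv=5).mean()
    elapsed = time.perf_counter() - start
    return acc / elapsed, acc, elapsed

def select_classifier(X, y, candidates):
    """Pick the classifier with the best accuracy/time ratio for one relation."""
    scored = {name: efficiency_score(clf, X, y) for name, clf in candidates.items()}
    best = max(scored, key=lambda name: scored[name][0])
    return best, scored

# One hypothetical relation's propositionalized feature table.
X, y = make_classification(n_samples=500, n_features=15, random_state=0)
candidates = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(),
}
best, scored = select_classifier(X, y, candidates)
print("selected:", best)
```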

Volume: 23, Issue: 1

Fuzzy matching of edge and curvature based features from range images for 3D face recognition

by Suranjan Ganguly, Debotosh Bhattacharjee, Mita Nasipuri
Abstract

Automatic human face recognition has been an active research topic for decades due to its applications in different fields, but no single technique is robust enough to suit all possible situations. In this paper, a new holistic technique is proposed, based on a 'one to all' comparison method. Along with the edge, four different types of curvature are computed from the face image profile to capture both local features and surface features from the 3D face image. Then a new feature space, the EC (Edge_Curvature) image, is generated for feature estimation during the final recognition stage. The similarities among intra-class members are computed using a fuzzy rule derived from Hausdorff distance vectors, which is used to match the probe images automatically for classification. For validation, the algorithm is tested on the Frav3D and GavabDB databases with two sets of investigations. One is a synthesized data-set consisting of frontal range images (i.e. expression, illumination and neutral) and registered range face images. The other set is the original range face images, which does not include the registered faces. These investigations highlight the robustness of the proposed methodology. The success rates of acceptance of the probe images from the two synthesized datasets are 98.87% for Frav3D and 87.20% for GavabDB. On the other hand, the classification rate on the original data-set is 79.78% for GavabDB and 91.69% for Frav3D.
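
The Hausdorff-distance matching step can be illustrated with a small sketch that computes the symmetric Hausdorff distance between two 2D point sets (for example, thinned EC-image edge pixels); the point sets here are synthetic and the fuzzy-rule layer described in the abstract is omitted.

```python
import numpy as np

def directed_hausdorff(A, B):
    """Max over points a in A of the distance to the nearest point in B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return d.min(axis=1).max()

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Synthetic stand-ins for edge-pixel coordinates of a probe and a gallery face.
rng = np.random.default_rng(0)
probe = rng.uniform(0, 100, (300, 2))
gallery = probe + rng.normal(0, 1.5, probe.shape)   # slightly perturbed copy
other = rng.uniform(0, 100, (300, 2))               # unrelated point set

print("same subject :", round(hausdorff(probe, gallery), 2))
print("different    :", round(hausdorff(probe, other), 2))
```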

Volume: 23, Issue: 1

Multi-AUV task assignment and path planning with ocean current based on biological inspired self-organizing map and velocity synthesis algorithm

by Xiang Cao, Daqi Zhu
Abstract

An integrated multiple autonomous underwater vehicles (multi-AUV) dynamic task assignment and path planning algorithm is proposed for a three-dimensional underwater workspace with ocean currents. The proposed algorithm combines a biologically inspired self-organizing map (BISOM) and a velocity synthesis algorithm (VS). The goal is to control a team of AUVs to visit all targets, while guaranteeing that each AUV's motion can offset the impact of ocean currents. First, the SOM neural network is developed to assign a team of AUVs to multiple target locations in underwater environments. Then, to allow each AUV to avoid obstacles autonomously while visiting its corresponding target, the biologically inspired neurodynamics model (BINM) is used to update the weights of the SOM winner and realize autonomous path planning. Lastly, the velocity synthesis algorithm is applied to optimize a path for each AUV to visit its corresponding target in a dynamic environment with ocean currents. Simulation results are given to demonstrate the effectiveness of the proposed algorithm, which is capable of dealing with task assignment and path planning in different environments; the path of each AUV is not significantly affected by the ocean currents.

Volume: 23, Issue: 1

Multi agent system-based dynamic trust calculation model and credit management mechanism of online trading

by W. J. Jiang, Y. S. Xu, H. Guo, C. C. Wang, L. M. Zhang
Abstract

All kinds of malicious acts now appear in C2C online auctions; in particular, the lack of trust and credit fraud are prominent problems. How to build an effective trust model has therefore become a pressing issue. Based on an analysis of the limitations of existing online trust transaction mechanisms, and according to the characteristics of the online transaction trust problem (such as being dynamic, anonymous and virtual), the article proposes a dynamic trust calculation model and reputation management mechanism for online trading based on a multi-agent system. The model consists of three parts. The first part is the trust of the user domain, which emphasizes the influence of recent credibility status on current trust in order to motivate users to adopt an agreed cooperative strategy. The second part is the weighted average of reputation feedback scores, where the weighting mainly considers the credibility of the person giving the feedback score, the value of the transaction (to prevent a "credit squeeze"), temporal discounting (to guard against fluctuations in credibility) and other factors. The third part weights the community contribution: according to the actions taken by a user toward the other members of the community in a time window, the user's trust is increased or decreased, so as to isolate credibility from feedback submission and punish fraud. The paper builds a fraud limitation mechanism that combines prevention beforehand, coordination during the event and punishment afterwards, making online transactions safe. Theoretical proof and experimental verification indicate that the following three problems can be solved effectively: 1) preventing speculative users from accumulating small amounts of trust and then defrauding on large transactions, which is otherwise difficult to prevent; 2) preventing members from cheating through false trading or impersonation; 3) reducing the arbitration workload of the online business platform.

Volume: 22, Issue: 4

An Integration Model of Semantic Annotation Based on Synergetic Neural Network

by Zhehuang Huang, Yidong Chen
Abstract

Correct and automatic semantic analysis has always been one of the major goals in natural language understanding. However, due to the difficulties of deep semantic analysis, mainstream studies of semantic analysis currently focus on semantic role labeling (SRL) and word sense disambiguation (WSD). These two issues are mostly treated as separate tasks, which ignores possible dependencies between them. To address this issue, an integrative semantic analysis model based on a synergetic neural network (SNN) is proposed in this paper, which can easily express useful logic constraints between SRL and WSD. The semantic analysis process can be viewed as a competition among semantic order parameters: the strongest order parameter wins the competition and the desired semantic pattern is recognized. There are three main innovations in this paper. First, an integrative semantic analysis model is proposed that jointly models word sense disambiguation and semantic role labeling. Second, an integrative order parameter is constructed to reflect the relations among semantic patterns. Finally, integrative network parameters and an integrative evolution equation are constructed, which reflect how word senses and semantic roles guide and drive each other. The experimental results on the OntoNotes 2.0 corpus show that the integrative method achieves higher performance for semantic role labeling and word sense disambiguation, and it offers good practicability and a promising future for other natural language processing tasks.

Volume: 22, Issue: 3

Haptic Technology for Micro-robotic Cell Injection Training Systems – A Review

by Syafizwan Faroque, Ben Horan, Husaini Adam, Mulyoto Pangestu, Matthew Joordens
Abstract

Currently, the micro-robotic cell injection procedure is performed manually by expert human bio-operators. In order to be proficient at the task, lengthy and expensive dedicated training is required. As such, effective specialized training systems for this procedure can prove highly beneficial. This paper presents a comprehensive review of haptic technology relevant to cell injection training and discusses the feasibility of developing such training systems, providing researchers with an inclusive resource enabling the application of the presented approaches, or extension and advancement of the work. A brief explanation of cell injection and the challenges associated with the procedure is first presented. Important skills, such as accuracy, trajectory, speed and applied force, which need to be mastered by the bio-operator in order to achieve successful injection, are then discussed. An overview of various types of haptic feedback, devices and approaches is then presented, followed by a discussion of approaches to cell modeling. Discussion of the application of haptics to skills training across various fields and of the evaluation of haptically-enabled virtual training systems is then presented. Finally, given the findings of the review, this paper concludes that a haptically-enabled virtual cell injection training system is feasible, and recommendations are made to developers of such systems.

Volume: 22, Issue: 3

A Review on Artificial Intelligence Methodologies for the Forecasting of Crude Oil Price

by Haruna Chiroma, Sameem Abdul-kareem, Ahmad Shukri Mohd Noor, Adamu Abubakar, Nader Sohrabi Safa, Liyana Shuib, Mukhtar Fatihu Hamza, Abdulsalam Yau Gital, Tutut Herawan
Abstract

When crude oil prices began to escalate in the 1970s, conventional methods were the predominant methods used in forecasting oil pricing. These methods can no longer be used to tackle the nonlinear, chaotic, non-stationary, volatile, and complex nature of crude oil prices, because of the methods' linearity. To address the methodological limitations, computational intelligence techniques and more recently, hybrid intelligent systems have been deployed. In this paper, we present an extensive review of the existing research that has been conducted on applications of computational intelligence algorithms to crude oil price forecasting. Analysis and synthesis of published research in this domain, limitations and strengths of existing studies are provided. This paper finds that conventional methods are still relevant in the domain of crude oil price forecasting and the integration of wavelet analysis and computational intelligence techniques is attracting unprecedented interest from scholars in the domain of crude oil price forecasting. We intend for researchers to use this review as a starting point for further advancement, as well as an exploration of other techniques that have received little or no attention from researchers. Energy demand and supply projection can effectively be tackled with accurate forecasting of crude oil price, which can create stability in the oil market.

Volume: 22, Issue: 3

An Improved Square-always Exponentiation Resistant to Side-channel Attacks on RSA Implementation

by Yongje Choi, Dooho Choi, Hoonjae Lee, Jaecheol Ha
Abstract

Many cryptographic algorithms embedded in security devices have been used to strengthen homeland defense capability and protect critical information from cyber attacks. The RSA cryptosystem with the naive implementation of an exponentiation may reveal a secret key by two types of side-channel attacks, namely passive leakage information analysis and active fault injection attacks. Recently, a square-always exponentiation algorithm in which the multiplication is traded for squarings has been proposed. This novel algorithm for RSA implementation is faster than other regularity-based countermeasures and is resistant to SPA (simple power analysis) and fault injection attacks. This paper shows that the right-to-left version of square-always exponentiation algorithm is vulnerable to several side-channel attacks, namely collision distance-based doubling, chosen-message CPA (collision power analysis), and horizontal CPA-based combined attacks. Furthermore, an improved right-to-left square-always algorithm adopting the additive message blinding method and the intermediate message update technique is proposed to defeat previous and proposed side-channel attacks. The proposed exponentiation algorithm can be employed for secure CRT-RSA (RSA based on the Chinese remainder theorem) implementation resistant to the Bellcore attack. The paper presents some experimental results for the proposed power analysis attacks using an evaluation board.
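
The core trick behind square-always exponentiation is replacing every modular multiplication with squarings via an algebraic identity such as x·y = ((x+y)² − (x−y)²)/4, so that the operation sequence seen on a power trace consists of squarings only. The sketch below demonstrates that identity inside a plain right-to-left binary exponentiation; it illustrates the principle only and does not include the blinding and intermediate-update countermeasures proposed in the paper.

```python
def mul_by_squares(x, y, n, inv4):
    """Modular multiplication expressed with two squarings: ((x+y)^2 - (x-y)^2)/4 mod n."""
    return (pow(x + y, 2, n) - pow(x - y, 2, n)) * inv4 % n

def square_always_pow(m, d, n):
    """Right-to-left binary exponentiation where every multiply is done via squarings."""
    inv4 = pow(4, -1, n)          # RSA moduli are odd, so 4 is invertible mod n
    acc, base = 1, m % n
    while d:
        if d & 1:
            acc = mul_by_squares(acc, base, n, inv4)
        base = mul_by_squares(base, base, n, inv4)   # squaring, kept uniform
        d >>= 1
    return acc

# Toy (insecure) check against Python's built-in modular exponentiation.
n, d, m = 3233, 2753, 65
assert square_always_pow(m, d, n) == pow(m, d, n)
print(square_always_pow(m, d, n))
```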

Volume: 22, Issue: 3

An Improved Evolutionary Algorithm for Reducing the Number of Function Evaluations

by Erik Cuevas, Eduardo Santuario, Daniel Zaldivar, Marco Perez-Cisneros
Abstract

Many engineering applications can be approached as optimization problems whose solution commonly involves the execution of computationally expensive objective functions. Evolutionary Algorithms (EAs) have recently gained popularity for solving complex problems encountered in many disciplines, delivering a more robust and effective way to locate global optima in comparison to classical optimization methods. However, applying EAs to real-world problems demands a large number of function evaluations before delivering a satisfying result. Under such circumstances, several EAs have been adapted to reduce the number of function evaluations by using alternative models to substitute the original objective function. Although such approaches employ a reduced number of function evaluations, the use of alternative models seriously affects the original EA search capacities and solution accuracy. Recently, a new evolutionary method called the Adaptive Population with Reduced Evaluations (APRE) has been proposed to solve several image processing problems. APRE reduces the number of function evaluations through two mechanisms: (1) the dynamic adaptation of the population and (2) the incorporation of a fitness calculation strategy, which decides when it is feasible to calculate or only estimate newly generated individuals. As a result, the approach can substantially reduce the number of function evaluations while preserving the good search capabilities of an evolutionary approach. In this paper, the performance of APRE as a global optimization algorithm is presented. In order to illustrate the proficiency and robustness of APRE, it has been compared to other approaches previously conceived to reduce the number of function evaluations. The comparison examines several standard benchmark functions commonly considered within the EA field. The conducted simulations confirm that the proposed method achieves the best balance over its counterparts in terms of the number of function evaluations and solution accuracy.

Volume: 22, Issue: 2

Dynamic Multiobjective Evolutionary Algorithm With Two Stages Evolution Operation

by Chun-an Liu, Huamin Jia
Abstract

Multiobjective optimization problems occur in many situations and aspects of the engineering optimization field. In reality, many multiobjective optimization problems are dynamic in nature, i.e. their Pareto fronts change with the time or environment parameter; such problems are called dynamic multiobjective optimization problems (DMOPs). The major difficulties in solving a DMOP are how to track and predict the Pareto optimal solutions and how to obtain uniformly distributed Pareto fronts that change with the time parameter. In this paper, a new dynamic multiobjective optimization evolutionary algorithm with a two-stage evolution operation is proposed for solving the kind of DMOP in which the Pareto optimal solutions change continuously and slowly with the time parameter. In the first stage, when the time parameter has changed, a new core distribution estimation algorithm is used to generate the new evolution population for the next environment; in the second stage, when the environment of the optimization problem remains unchanged, a new crossover operator and a mutation operator are used to search for the Pareto optimal solutions in the current environment. Moreover, three performance metrics for DMOPs based on the generational distance, the spacing and the error ratio are also given. Computer simulations on three dynamic multiobjective optimization problems indicate that the proposed algorithm is effective for solving DMOPs.
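
The three performance metrics mentioned above have standard textbook forms; the sketch below implements generational distance, spacing and error ratio for a set of obtained solutions against a reference Pareto front, under those common definitions rather than the paper's exact variants. The reference front and solution set are synthetic.

```python
import numpy as np

def generational_distance(F, ref):
    """sqrt of the sum of squared nearest-reference distances, divided by |F|."""
    d = np.linalg.norm(F[:, None, :] - ref[None, :, :], axis=-1).min(axis=1)
    return np.sqrt((d ** 2).sum()) / len(F)

def spacing(F):
    """Standard deviation of nearest-neighbor distances within the obtained set."""
    d = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nearest = d.min(axis=1)
    return np.sqrt(((nearest - nearest.mean()) ** 2).sum() / (len(F) - 1))

def error_ratio(F, ref, tol=1e-3):
    """Fraction of obtained points farther than tol from every reference point."""
    d = np.linalg.norm(F[:, None, :] - ref[None, :, :], axis=-1).min(axis=1)
    return float((d > tol).mean())

# Synthetic bi-objective example: reference front is f2 = 1 - sqrt(f1).
f1 = np.linspace(0, 1, 200)
ref = np.column_stack([f1, 1 - np.sqrt(f1)])
obtained = ref[::10] + np.random.default_rng(0).normal(0, 0.01, ref[::10].shape)

print(generational_distance(obtained, ref), spacing(obtained), error_ratio(obtained, ref))
```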

Volume: 21, Issue: 4

Leveraging The Data Gathering and Analysis Phases to Gain Situational Awareness

by Yaser Khamayseh, Wail Mardini, Hadeel Tbashate
Abstract

Situational Awareness (SA) is the process of collecting observations, understanding their meaning and projecting possible future updates. It is applied in several dynamic and complex environments such as aviation, air traffic control, power plant operations, and military command and control. Object classification in infrared images is a crucial component of smart surveillance systems. In this paper, a robust framework for person-vehicle classification is proposed, which classifies objects with 97.2% accuracy and successfully achieves its intended use in challenging real-world situations, including random camera viewpoints and low-resolution infrared images. The main concern of this framework is enhancing the preprocessing phase to achieve higher classification accuracy. The paper provides an improved classification process accompanied by an interpolation process for extracting the features of interest (objects) in infrared images. The framework increases the classification accuracy and overcomes some potential problems in the data analysis phase of Situational Awareness (SA). The proposed system is simulated in the MATLAB environment and achieves accurate results by relying on powerful, distinguishable feature extraction, a highly productive preprocessing workflow, high-quality interpolation and a classification framework. Experimental results prove the effectiveness of the proposed framework, and the obtained results are compared against those obtained from other techniques.

Volume: 21, Issue: 4

PSO based Automated Test Coverage Analysis of Event Driven Systems

by Abdul Rauf, Eisa al Eisa
Abstract

The Graphical User Interface (GUI, sometimes pronounced "gooey") was first developed in 1981 and has become essential to today's computing. A GUI contains graphical objects with certain distinct values that can be used to determine the state of the GUI at any time. Development organizations always want to test software thoroughly to gain maximum confidence in its quality, but testing a GUI application requires enormous effort because of the complexity involved in such applications. This problem has led to automated GUI testing, and different techniques have been proposed for it. The event-flow graph is a recent development in the field of automated GUI testing: just as a control-flow graph, another model, represents all possible execution paths in a program, an event-flow model represents all possible sequences of events that can be executed on the GUI. Another challenging question in software testing is: how much testing is enough? There are few measures that can provide guidance on the quality of an automatic test suite as development proceeds. The particle swarm optimization (PSO) algorithm searches for the best possible test parameter combinations according to some predefined test criterion. Usually this test criterion corresponds to a "coverage function" that measures how much of the automatically generated optimization parameters satisfy the given criterion. In this paper, we exploit the event-driven nature of GUIs and, based on it, present a GUI testing and coverage analysis technique based on PSO.
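
One plausible reading of such a coverage function is the fraction of event-flow graph edges exercised by a candidate test suite; the tiny sketch below computes that value, which could then serve as a particle's fitness in a PSO loop. The event-flow graph, event names and suite here are hypothetical.

```python
def event_flow_coverage(test_suite, event_flow_edges):
    """Fraction of event-flow graph edges exercised by a suite of event sequences."""
    covered = set()
    for sequence in test_suite:
        covered.update(zip(sequence, sequence[1:]))   # consecutive event pairs
    return len(covered & event_flow_edges) / len(event_flow_edges)

# Hypothetical event-flow graph of a small GUI (edges = allowed event successions).
edges = {("open", "edit"), ("edit", "save"), ("edit", "undo"),
         ("save", "close"), ("undo", "edit"), ("open", "close")}

# Candidate test suite, e.g. decoded from a particle's position.
suite = [["open", "edit", "save", "close"],
         ["open", "edit", "undo", "edit", "save"]]

# This value is what a PSO particle's fitness would be in such a setup.
print(event_flow_coverage(suite, edges))
```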

Volume: 21, Issue: 4

Different Algorithms for Detection of Malondialdehyde Content in Eggplant Leaves Stressed by Grey Mold Based on Hyperspectral Imaging Technique

by Chuanqi Xie, Hailong Wang, Yongni Shao, Yong He
Abstract

The feasibility of using hyperspectral imaging (HSI) technique to measure malondialdehyde (MDA) content in eggplant leaves stressed by grey mold was evaluated in this paper. Hyperspectral images of infected and healthy eggplant leaves were obtained in the spectral region of 380 to 1030 nm, and their spectral reflectance of region of interest (ROI) was extracted by Environment for Visualizing Images (ENVI 4.7) software. Several pre-processing methods were adopted and partial least squares (PLS) models were established to estimate MDA content in eggplant leaves. In order to reduce high dimensionality of spectral data, competitive adaptive re-weighted sampling (CARS) and latent variables (LV) were carried out to identify the most effective wavebands. The result showed that PLS model based on baseline pre-processing had a good performance for prediction set. On the basis of the effective wavelengths suggested by CARS and LV, PLS and multiple linear regression (MLR) models were established, respectively. Among these models, LV-MLR performed best with the highest value of correlation coefficient (r) and lowest value of root mean square error of prediction (RMSEP) for prediction set. The overall results demonstrated the potentiality of HSI technique as an objective and non-destructive method to detect MDA content in eggplant leaves stressed by grey mold.

Volume: 21, Issue: 3

Analysis of Phylogenetic Relationships of Main Citrus Germplasms Based on FTIR Spectra of Petals

by Xunlan Li, Shilai Yi, Yongqiang Zheng, Shaolan He
Abstract

To develop a quick, accurate and reliable technique for studying the phylogenetic relationships of Citrus, FTIR (Fourier transform infrared spectroscopy) was used. The petal spectra of eighteen varieties of citrus germplasms were investigated by FTIR. Pretreatment of the raw spectra (2000–500 cm⁻¹) consisted of baseline correction, normalization and first derivative (Savitzky-Golay). One-way ANOVA (analysis of variance) and Tukey's HSD (honestly significant difference) were used to extract the effective wave bands in which the spectral absorbance values of different citrus germplasms were significantly different. The results showed that 2000~1831 cm⁻¹, 1763~1595 cm⁻¹, 1517~1090 cm⁻¹, 1035~1024 cm⁻¹, 950~935 cm⁻¹, 861~784 cm⁻¹, 744~721 cm⁻¹ and 653~608 cm⁻¹ were the effective wave bands. HCA (hierarchical cluster analysis) was adopted to classify the citrus germplasms based on the above eight effective wave bands, and the eighteen citrus varieties were classified into six subgroups. The classification results and the phylogenetic relationships between the six subgroups were consistent with results from morphology, biochemistry, cytology and molecular biology. The overall results demonstrated that Fourier transform infrared spectroscopy combined with one-way ANOVA, Tukey's HSD and a hierarchical cluster analysis model is promising for rapid, accurate and reliable classification of citrus as well as for studying citrus phylogenetic relationships.

Volume: 21, Issue: 3

Diagnosis of CTV-Infected Leaves Using Hyperspectral Imaging

by Dongmei Guo, Rangjin Xie, Chun Qian, Fangyun Yang, Yan Zhou, Lie Deng
Abstract

Hyperspectral reflectance images of healthy and diseased leaves infected with different isolates of Citrus tristeza virus (CTV), including TRL514, CT30, CT32 and CT11A, were collected in the visible and near-infrared region of 400–1000 nm. An average reflectance spectrum was generated from each hyperspectral image individually obtained from 60 healthy and 240 CTV-infected leaves. The spectra were transformed with a 15-point Savitzky-Golay second derivative. Principal component analysis was then performed on the transformed data in order to reduce its dimensionality. Comparative analysis was performed among supervised classification models, including back-propagation neural network (BPNN), linear discriminant analysis (LDA) and Mahalanobis distance (MD). When the second derivative spectra were analyzed, the BPNN, LDA and MD classifier models could discriminate the healthy and CTV-infected leaves with the highest classification accuracies of 100% in the spectral ranges of 400–1000 nm and 760–1000 nm. Nine optimal wavelengths (405, 424, 920, 947, 957, 972, 978, 980, and 998 nm) selected by stepwise regression resulted in 97.33% total classification accuracy for the differentiation of healthy and CTV-infected leaves and showed great potential for CTV diagnosis. However, the overall classification accuracy for leaves infected with different CTV isolates was 70% based on the MD model using the selected optimal wavelengths. Further study is required to find out whether the method is suitable for CTV detection under field conditions.

Volume: 21, Issue: 3

Perception-Based Software Release Planning

by Mubarak Alrashoud, Abdolreza Abhari
Abstract

Release planning is a cornerstone of incremental software development. This paper proposes a novel framework that performs the prioritization aspect of the software release-planning process. The aim of this framework is to help software product managers select the most promising requirements to be implemented in the next release. Many variables affect release planning, including: the importance of requirements as perceived by the different stakeholders; the decision weights of the stakeholders; the risk associated with each requirement as estimated by the development team; the effort needed to implement each requirement; the release size (the effort allocated to implement and deliver a software release); and the dependencies among requirements. We assume that there are no ambiguities in defining the dependencies among requirements, and that the estimation of the available effort is accurate. Because they rest on human perception, variables such as importance, risk and required effort carry a high degree of imprecision and uncertainty. Therefore, the strength and practicality of a Fuzzy Inference System (FIS) is employed to handle the uncertainty in these three factors. In order to reflect disagreements among the stakeholders in the FIS engine, the polling method is used to define the parameters of the membership functions of the importance variable. The effectiveness of the proposed framework is compared to a genetic algorithm approach, which has been applied in many works in the literature. The results of this comparison show that the proposed FIS-based approach achieves a higher degree of stakeholder satisfaction than the genetic algorithm-based approach.

Volume: 21, Issue: 2

Establishment of the Optimized Production Performance Detection Model with the Combination of GA and BPN

by Wen-Tsao Pan, Ching-pei Lin, Ming-Sheng Hu, Li-Li Lei
Abstract

As an activity to improve process rationalization, optimized production helps enterprises improve their production and management processes, product quality, productivity and customer service efficiency. This study analyzed data collected from experiments conducted by a lean production simulation laboratory at a university in Taiwan, in order to investigate whether the production optimization results of enterprises can promote the overall performance of production and service. The study first compared data envelopment analysis (DEA) results with the experimental data to evaluate whether optimized production can improve performance. It then analyzed the main factors influencing income with a decision tree, and established the optimized production performance detection model using three data mining technologies, namely GABPN, BPN and decision tree. The analytic results showed that the output of optimized production does improve the overall performance of production and service. The main factors affecting technical efficiency are the time from serving all dishes on the table to leaving the table and the time from leaving the table to paying the bill. Among the three data mining technologies, GABPN has the best detection ability.

Volume: 21, Issue: 1

Efficient Photo Image Retrieval System Based on Combination of Smart Sensing and Visual Descriptor

by Yong-Hwan Lee, Sang-Burm Rhee
Abstract

In this paper, we propose a novel, efficient photo image retrieval method that automatically builds an index for searching relevant images using a combination of geo-coded information and content-based visual features. A photo image is labeled with its GPS (Global Positioning System) coordinates at the moment of capture, and the label is used to generate a geo-spatial index with three elements: latitude, longitude and image view direction. Content-based visual features are then extracted and combined with the geo-spatial information for indexing and retrieving the photo images. For the user's querying process, the proposed method adopts a progressive two-step approach, filtering the relevant subset prior to applying a content-based ranking function. To evaluate the performance of the proposed algorithm, we assess the simulation performance in terms of average precision and F-score using a natural photo collection. Compared to search using visual features alone, an improvement of 20.8% (61.6 vs. 40.8) was observed. The experimental results also show that the proposed method exhibits an enhancement of around 7.2% (61.6 vs. 54.4) in retrieval effectiveness compared to previous work. These results reveal that a combination of context and content analysis is markedly more efficient and meaningful than using visual features alone for image retrieval.

Volume: 21, Issue: 1

Estimating Leaf Nitrogen Concentration In Barley By Coupling Hyperspectral Measurements With Optimal Combination Principle

by Xingang Xu, Chunjiang Zhao, Xiaoyu Song, Xiaodong Yang, Guijun Yang
Abstract

Leaf nitrogen concentration (LNC), as a key indicator of nitrogen (N) status, can be used to evaluate N nutrient levels and improve fertilizer regulation in fields. Because it is non-destructive and fast, hyperspectral remote sensing with hundreds of very narrow bands plays a unique role in monitoring LNC in crops, but most current hyperspectral methods are still based on univariate spectral analyses, which often make LNC estimation models unstable. By introducing the optimal combination principle to conduct multivariate analyses and form a combination model, this paper proposes a new method for estimating LNC in barley from hyperspectral measurements. First, this study analyzed the relationships between LNC in barley and three types of spectral parameters (spectral position features, area features and vegetation indices), and established quantitative models of LNC with the key spectral variables; then, using the optimal combination method with a linear programming algorithm, multivariate analyses were conducted to improve accuracy by calculating the optimal weights to construct the combination model for evaluating LNC. The results showed that most of the three types of spectral variables had significant correlations with LNC at the 1% confidence level, and the univariate models with the key spectral variables (such as Dr and (λr+λb)/λy) could describe the dynamic pattern of LNC changes in barley well, with determination coefficients (R2) of 0.620 and 0.622 and root mean square errors (RMSE) of 0.619 and 0.620, respectively; by comparison, the combination model with Dr and λb/λy exhibited a better fit, with an R2 of 0.702 and an RMSE of 0.574. This analysis indicates that hyperspectral measurements display good potential for effectively estimating LNC in barley, and that the optimal combination (OC) method has better adaptability and accuracy due to the optimal selection of the spectral parameters responsive to LNC, so it can be applied for reliable estimation of LNC. These preliminary results of coupling hyperspectral measurements with the optimal combination principle to estimate LNC can also provide new ideas for hyperspectral monitoring of other biochemical constituents.

Volume: 20, Issue: 4

Winter Wheat Cropland Grain Protein Content Evaluation through Remote Sensing

by Xiaoyu Song, Jihua Wang, Guijun Yang, Haikuan Feng
Abstract

Grain protein content (GPC) is generally not uniform across cropland due to changes in landscape position, nutrient availability, soil chemical and physical properties, cropping history and soil type. It is necessary to determine the winter wheat GPC quality of the different croplands in a collecting area in order to optimize the grading process. GPC quality evaluation refers not only to the GPC value but also to the GPC uniformity across a cropland. The objective of this study was to develop a method for evaluating the GPC quality of different croplands with remote sensing techniques. Three Landsat5 TM images were acquired on March 27, April 28 and May 30, 2008, corresponding to the erecting, booting and grain filling stages of wheat. The wheat GPC was determined after harvest. A multiple linear regression (MLR) analysis with the enter method was then performed using the TM spectral parameters and the measured GPC data, and a GPC MLR model was established based on the multi-temporal spectral parameters. The accuracy of the model was R2 > 0.521 with RMSE < 0.66%. The GPC mean value and standard deviation for each cropland were calculated based on the ancillary cropland boundary data and the grain protein monitoring map. Winter wheat field GPC quality was evaluated using the GPC mean value and a GPC uniformity parameter, the coefficient of variation (CV). The evaluation results indicated that the super or good level winter wheat croplands mainly lie in Tongzhou, Daxing and Shunyi County, while the middle or low GPC level croplands are mainly distributed in Fangshang County. This study indicates that remote sensing techniques provide valuable opportunities to monitor and evaluate grain protein quality.
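
The cropland-level evaluation combines the mean GPC with its coefficient of variation (CV = standard deviation / mean); a minimal sketch of that grading step is shown below, with the grade thresholds and per-pixel values chosen purely for illustration rather than taken from the study.

```python
import numpy as np

def gpc_quality(pixel_gpc, mean_cut=14.0, cv_cut=0.05):
    """Grade one cropland from its per-pixel GPC predictions.

    mean_cut and cv_cut are illustrative thresholds, not the study's values.
    """
    mean = float(np.mean(pixel_gpc))
    cv = float(np.std(pixel_gpc, ddof=1) / mean)    # coefficient of variation
    if mean >= mean_cut and cv <= cv_cut:
        grade = "super/good"
    elif mean >= mean_cut:
        grade = "high protein but non-uniform"
    else:
        grade = "middle/low"
    return mean, cv, grade

# Hypothetical per-pixel GPC (%) values for one field extracted from the map.
field = np.random.default_rng(0).normal(14.5, 0.4, 250)
print(gpc_quality(field))
```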

Volume: 20, Issue: 4

The Effect of Slab Track on Wheel/Rail Rolling Noise in High Speed Railway

by Xinwen Yang, Guangtian Shi
Abstract

The slab track of high speed railway has higher environmental noise emissions from a running train than the ballasted track. In order to predict and control the wheel/rail rolling noise radiated due to the roughness of the wheel and rail running surfaces on slab track, a model is developed to calculate the wheel/rail rolling noise of the slab track and to analyze the influence of the slab structure parameters on the wheel/rail rolling noise. The results show that the rail noise occurs mainly in the middle and high frequency range of 500–2000 Hz, the wheel noise mainly in the high frequency range of 1600–4000 Hz, and the track slab noise in the frequency range of 125–500 Hz. In terms of instantaneous noise pressure, the rail is first and the slab second, with the wheel in between. The weight of the slab has more influence on the noise emission of the slab itself, but little impact on the noise emission of the wheel and rail. Below 500 Hz, the greater the weight of the slab, the greater the noise emission of the slab; above 500 Hz, the opposite holds. The elastic modulus of the CA mortar under the slab has little impact on the noise emission of the wheel and rail, whereas it has more influence on the noise emission of the slab: the greater the elastic modulus of the CA mortar under the slab, the lower the noise emission of the slab. The rubber pads installed under the slab have more influence on the noise emission of the slab than on that of the wheel and rail. All in all, the rubber pad under the slab is beneficial for wheel-rail noise reduction.

Volume: 20, Issue: 4

An Empirical Study of Skew-Insensitive Splitting Criteria and its Application in Traditional Chinese Medicine

by Chong Su, Shenggen Ju, Yiguang Liu, Zhonghua Yu
Abstract

Learning from imbalanced datasets is a challenging topic and plays an important role in the data mining community. Traditional splitting criteria such as information gain are sensitive to class distribution. To overcome this weakness, Hellinger Distance Decision Trees (HDDT) were proposed by Cieslak and Chawla. Although HDDT outperforms traditional decision trees, other skew-insensitive splitting criteria may exist. In this paper, we propose some new skew-insensitive splitting criteria that can be used in the construction of decision trees, and we apply a comprehensive empirical evaluation framework that tests them against commonly used sampling and ensemble methods across 58 datasets. Based on the experimental results, we demonstrate the superiority of these skew-insensitive decision trees on datasets with a high imbalance level and their competitive performance on datasets with a low imbalance level; the K-L divergence-based decision tree (KLDDT) is the most robust of these skew-insensitive decision trees in the presence of class imbalance, especially when combined with SMOTE. We therefore recommend the use of KLDDT with SMOTE when learning from highly imbalanced datasets. Finally, we used these skew-insensitive decision trees to build a diagnosis model for chronic obstructive pulmonary disease in traditional Chinese Medicine. The results show that KLDDT is the most effective method.
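
For reference, the Hellinger distance splitting criterion used by HDDT for a binary split can be written as d_H = sqrt((sqrt(L+/T+) − sqrt(L−/T−))² + (sqrt(R+/T+) − sqrt(R−/T−))²), where L/R are the left/right branch counts per class and T the class totals. The sketch below evaluates candidate splits under that reading on a synthetic imbalanced dataset; it is an illustration of the criterion, not the paper's new criteria.

```python
import numpy as np

def hellinger_split_value(y, left_mask):
    """Hellinger distance between class-conditional branch distributions (binary split).

    y is a 0/1 label array; left_mask marks the rows going to the left branch.
    """
    pos, neg = (y == 1), (y == 0)
    t_pos, t_neg = pos.sum(), neg.sum()
    value = 0.0
    for branch in (left_mask, ~left_mask):
        p = (branch & pos).sum() / t_pos      # fraction of positives in this branch
        q = (branch & neg).sum() / t_neg      # fraction of negatives in this branch
        value += (np.sqrt(p) - np.sqrt(q)) ** 2
    return float(np.sqrt(value))

# Imbalanced toy data: a feature that separates the rare class fairly well.
rng = np.random.default_rng(0)
y = (rng.random(1000) < 0.05).astype(int)            # ~5% positive class
x = rng.normal(0, 1, 1000) + 2.0 * y                 # positives shifted right
print("split at x>1:", round(hellinger_split_value(y, x <= 1.0), 3))
print("split at x>3:", round(hellinger_split_value(y, x <= 3.0), 3))
```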

Volume: 20, Issue: 4

Analysis of Dynamic Behavior for Ballastless Track-Bridge with a Hybrid Method

by Wenjun Luo, Xiaoyan Lei
Abstract

A hybrid method combining the finite element method (FEM) and statistical energy analysis (SEA) was recently presented for predicting the steady-state response of vibro-acoustic systems. Based on the structural characteristics of the vehicle-CRTS II ballastless track-bridge system, a hybrid method of vehicle-track-bridge elements is presented. Using the finite element method and Hamilton theory, the coupled equations of the vehicle-track-bridge system can be established; the computational software is coded in MATLAB. As an application, the vibration characteristics of the track-bridge system with a vertical profile assumed to be randomly irregular were calculated and the effect of the random irregularity wavelength was analyzed. The computational results show that (1) changes in the rail random irregularity wavelength have little influence on the vertical displacement of the track structure; (2) they have a very significant influence on the vertical wheel/rail contact forces and on the vertical acceleration amplitudes of the rail and slab, the influence of short-wave random irregularity being greater, and they also influence the vertical acceleration amplitude of the bridge; (3) the energy of rail vibration excited by short-wave random irregularity is mainly distributed in the medium-high frequency range, with the maximum around the first natural frequency of the rail (1236 Hz), while the energy of rail vibration excited by medium-long wave random irregularity is mainly distributed in the medium-low frequency range.

Volume: 20, Issue: 4

Multiple Models Fuzzy Control: A Redistributed Fixed Models Based Approach

by Nikolaos Sofianos, Yiannis Boutalis
Abstract

A new fuzzy control architecture for unknown nonlinear systems in the framework of multiple models control is proposed in this paper. The architecture incorporates different kinds of identification models and controllers, offering enhanced overall performance. More specifically, the fixed models that are widely used in multiple models control are made more flexible, ending up as semi-fixed models. When semi-fixed models are combined with a free adaptive model and a reinitialized adaptive model, the result is very promising and offers many advantages in comparison with former control methods. All these models are represented using the Takagi-Sugeno (T-S) method, which is very useful for representing unknown or partially unknown nonlinear systems. The identification T-S models define the control signal at every time instant by updating their own state feedback fuzzy controllers and using the certainty equivalence approach. A performance index and an appropriate switching rule are used to determine the T-S model that approximates the plant best and consequently to pick the best available controller at every time instant. The semi-fixed models are updated according to a rule that drives the models in a direction that minimizes the performance index. The asymptotic stability of the system and the adaptive laws for the adaptive models are derived using Lyapunov stability theory. The effectiveness and advantages of the proposed method over other methods are illustrated by computer simulations.

Volume: 20, Issue: 2

A Heuristic Field Navigation Approach for Autonomous Underwater Vehicles

by Hui Miao, Xiaodi Huang
Abstract

As an effective path planning approach, the potential field method has been widely used for Autonomous Underwater Vehicles (AUVs) in underwater probing projects. However, the complexity of realistic environments (e.g. three-dimensional rather than two-dimensional settings, and the limitations of AUV sensors) restricts most current potential field approaches, which can only be applied to idealized environments such as 2D or static ones. A novel heuristic potential field approach (HPF) incorporating a heuristic obstacle avoidance method is proposed in this paper for AUV path planning in three-dimensional environments with dynamic targets. The contributions of this paper are: (1) the approach provides solutions for more realistic and difficult conditions (three-dimensional unknown environments and dynamic targets) rather than hypothetical ones (flat 2D known static environments); (2) the approach requires less computation time while giving better trade-offs among simplicity, far-field accuracy and computational cost. The performance of the HPF is compared with previously published Simulated Annealing (SA) and Genetic Algorithm (GA) based methods, analyzed in several environments. Case studies demonstrate that the heuristic potential field approach is not only effective in obtaining the optimal solution but also more efficient in processing time for dynamic path planning.
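
A bare-bones 3D potential-field step of the kind this approach builds on is sketched below: an attractive pull toward a (possibly moving) target plus repulsive pushes from obstacles inside an influence radius. The gains, radius and scenario are assumptions, and the heuristic obstacle-avoidance layer described in the paper is not reproduced.

```python
import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=5.0,
                         influence=3.0, step=0.1):
    """One gradient-descent step on a classic attractive + repulsive potential (3D)."""
    force = k_att * (goal - pos)                       # attractive: pulls toward goal
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:                       # repulsive only inside radius
            force += k_rep * (1.0 / d - 1.0 / influence) / d**2 * (diff / d)
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

# Hypothetical scenario: AUV at the origin, slowly drifting target, one obstacle.
pos = np.array([0.0, 0.0, -5.0])
goal = np.array([10.0, 8.0, -6.0])
obstacles = [np.array([5.0, 4.0, -5.5])]
for t in range(300):
    goal = goal + np.array([0.01, 0.0, 0.0])           # dynamic target drift
    pos = potential_field_step(pos, goal, obstacles)
    if np.linalg.norm(goal - pos) < 0.2:
        break
print("steps:", t + 1, "final distance to goal:", round(float(np.linalg.norm(goal - pos)), 2))
```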

Volume: 20, Issue: 1

Cost-Efficient Environmentally-Friendly Control of Micro-Grids Using Intelligent Decision-Making for Storage Energy Management

by Y. S. MANJILI, Rolando Vega, Prof Mo Jamshidi
Abstract

A smart decision-making framework based on genetic algorithms (GA) and fuzzy logic is proposed for the control and energy management of micro-grids. The objectives are to meet the demand profile, minimize electricity consumption cost, and mitigate air pollution under a dynamic electricity pricing policy. The energy demand in the micro-grid network is met by distributed renewable energy generation (coupling solar and wind), battery storage and balancing power from the electric utility. The fuzzy intelligent approach computes the energy exchange rate of the micro-grid storage unit as a function of time. This exchange rate (or decision-making capability) is based on (1) the electrical energy price per kilowatt-hour (kWh), (2) local demand (load), (3) the electricity generation rate of the renewable resources (supply), and (4) an air pollution measure, all of which are sampled at predefined rates. A cost function is then defined as the net dollar amount corresponding to the electricity flow between the micro-grid and the utility grid. Defining the cost function requires considering the cost incurred by the owner of the micro-grid from its distribution losses, in addition to its demand and supply costs, such that a positive cost translates to owner losses and a negative cost is a gain. Six likely scenarios were defined to consider different micro-grid configurations, accounting for the conditions seen in micro-grids today as well as conditions expected in the future. GA is implemented as a heuristic (DNA-based) search algorithm to determine sub-optimal settings of the fuzzy controller. The aforementioned net cost (which includes pricing, demand and supply measures) and air pollution measures are then compared in every scenario with the objective of identifying best practices for the energy control and management of micro-grids. The performance of the proposed GA-fuzzy intelligent approach is illustrated by numerical examples, and the capabilities and flexibility of the proposed framework as a tool for solving intermittent multi-objective problems are presented in detail. Micro-grid owners looking to adopt a smart decision-making tool for energy storage management may see an ROI between 5 and 10.

Volume: 19, Issue: 4

FDA Performance Analysis of Job Shop Schedule in Uniform Parallel Machines

by Jin Chen, Yufeng Deng, Xueming He
Abstract

The challenging problem of non-homogeneous machine load planning is to distribute jobs to different machines with the same ability at an operation for the minimum cost. Each operation can be performed by a set of machines with the same or different characteristics, and each machine can handle a job many times with different operations. A new heuristic method, FDA (Four Dimension Algorithm), for uniform parallel machine scheduling is proposed in this paper to tackle this load distribution problem. The FDA method selects a sequence, a machine set and an operation such that the minimum evaluation of the defined indexes is achieved; these indexes consist of four independent parameters. Essentially, the implementation consists of three main iterative loops: over job numbers, over operation turns and over the non-homogeneous machines. Each operation is associated with a set of evaluation indexes. The proposed algorithm shows a significant improvement over the Genetic Algorithm and the Branch and Bound Algorithm. The key to this new method is that only partial indexes are calculated in every cycle. As a result, the complexity of this method is greatly reduced from exponential to polynomial (at most O(mn + 3m²n²)). The most noticeable innovation in the proposed method is a practical and efficient algorithm capable of analyzing non-homogeneous machine and job relations while reducing the complexity of computation. Of equal importance, various examples and experiments are shown in detail.

Volume: 19, Issue: 4

Directional Weight Based Contourlet Transform Denoising Algorithm for OCT Images

by Fangmin Dong, Qing Guo, Shuifa Sun, Xuhong Ren, Liwen Wang, Shiyu Feng, Bruce Gao
Abstract

Optical Coherence Tomography (OCT) imaging systems have been widely used in the biomedical field. However, speckle noise in OCT images hinders the application of this technology. The validity of existing contourlet-based denoising methods has been demonstrated. In the contourlet transform, the directional information contained in the spatial domain is reflected in the corresponding sub-bands, while the noise is evenly distributed to each sub-band, resulting in a large difference among the coefficient distributions of the sub-bands. Traditional algorithms do not take these features into account and only apply a uniform threshold shrinkage function to each sub-band, which limits the denoising effect. In this paper, a novel direction-statistics approach is proposed to build a directional weight model in the spatial domain based on image gradient information to represent the effective edge information of different sub-bands, and this weight is introduced into the threshold function for denoising. Experiments prove the effectiveness of this method. The proposed denoising framework is applied in contourlet soft-threshold and bivariate-threshold denoising algorithms for a large number of OCT images, and the results of these experiments show that the proposed algorithm effectively reduces noise while preserving edge information well.
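The core idea, shrinking each sub-band with a threshold scaled by a directional weight, can be sketched as follows; the weighting rule here (dividing a noise-level threshold by the weight) is a simplified stand-in for the paper's gradient-based weight model.

```python
# Illustrative sketch only: a soft-threshold shrinkage whose threshold is
# modulated by a per-sub-band directional weight supplied by the caller.
import numpy as np

def weighted_soft_threshold(coeffs, sigma_noise, directional_weight):
    """Shrink sub-band coefficients; a larger directional weight (more edge
    energy in that direction) lowers the effective threshold."""
    thr = sigma_noise / max(directional_weight, 1e-6)
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

band = np.array([-3.0, -0.5, 0.2, 2.5])
print(weighted_soft_threshold(band, sigma_noise=1.0, directional_weight=2.0))
# threshold = 0.5; small coefficients are zeroed, large ones are shrunk
```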

Volume: 19, Issue: 4

Crop Discrimination in Shandong Province Based on Phenology Analysis of Multi-year Time Series

by Qingyun Xu, Guijun Yang, Huiling Long, Chongchang Wang
Abstract

Crop type identification plays an important role in extracting crop acreage and assessing crop growth and arable land productivity. In this study, the main crops of Shandong Province (winter wheat, summer maize and cotton) were taken as research objects, and SPOT_VGT normalized difference vegetation index (NDVI) remote sensing datasets covering Shandong Province from 1999 to 2011 were acquired. The NDVI characteristic curves of typical features were extracted by combining the SPOT_VGT NDVI time series datasets, the HJ-1B image and the phenological information. Moreover, reasonable dynamic thresholds were set, non-cultivated land areas were removed, and the crop patterns and crop types were identified based on the annual NDVI variation and the phenological information of the typical features. The accuracy assessment was performed through spatial contrast and quantitative description. The overall accuracy is 77.10% in the spatial accuracy assessment compared with a standard land cover classification map, and the overall relative errors of winter wheat, summer maize and cotton are 25.52%, 25.97% and 7.11% in the quantitative accuracy assessment compared with the statistical datasets. The results show that it is feasible to identify the crop planting patterns and crop types using the proposed classification method by combining the SPOT_VGT NDVI time series datasets with the phenological information.
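For reference, the NDVI used throughout the time-series analysis is the standard band ratio below; the 0.3 vegetation threshold in the toy rule is purely illustrative, not one of the paper's dynamic thresholds.

```python
# Standard NDVI from near-infrared and red reflectance, plus a toy
# threshold rule; the paper's actual dynamic thresholds are not reproduced.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def is_vegetated(nir, red, threshold=0.3):   # threshold value is illustrative
    return ndvi(nir, red) > threshold

print(round(ndvi(0.45, 0.10), 3))   # 0.636 -> dense green vegetation
print(is_vegetated(0.20, 0.18))     # False -> likely bare/non-cultivated land
```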

Volume: 19, Issue: 4

Selection of Spectral Channels for Satellite Sensors in Monitoring Yellow Rust Disease of Winter Wheat

by Lin Yuan, Jingcheng Zhang, Chenwei Nie, Liguang Wei, Guijun Yang, Jihua Wang
Abstract

Remote sensing has great potential to serve as a useful means of crop disease detection at the regional scale. With the emergence of remote sensing data with various spectral settings, it is important to choose appropriate data for disease mapping and detection based on the characteristics of the disease. The present study takes yellow rust in winter wheat as an example. Based on canopy hyperspectral measurements, simulated multi-spectral data were calculated using the spectral response functions of ten purposely selected satellite sensors. An independent t-test analysis was conducted to assess the disease sensitivity of different bands and sensors. The results showed that the sensitivity to yellow rust varied among sensors, with the green, red and near-infrared bands identified as disease-sensitive bands. Moreover, to further assess the potential of onboard data for disease detection, we compared the performance of the most suitable multi-spectral vegetation indices (MVIs), GNDVI and NDVI, based on QuickBird band settings, with a classic hyperspectral vegetation index (HVI), the photochemical reflectance index (PRI). The validation results of the linear regression models suggested that although the MVI-based models produced lower accuracy (R² = 0.68 for GNDVI and R² = 0.66 for NDVI) than the HVI-based model (R² = 0.79 for PRI), they could still achieve acceptable accuracy in disease detection. Therefore, the potential of using multi-spectral satellite data for yellow rust monitoring is illustrated in this study.
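The indices compared above follow their standard band-ratio definitions; the sketch below restates GNDVI and PRI, with scalar reflectance inputs chosen only for illustration.

```python
# Standard definitions of a multi-spectral index (GNDVI) and a narrow-band
# hyperspectral index (PRI); the reflectance arguments are illustrative.
def gndvi(nir, green):
    return (nir - green) / (nir + green)

def pri(r531, r570):
    # Photochemical Reflectance Index from 531 nm and 570 nm reflectance
    return (r531 - r570) / (r531 + r570)

print(round(gndvi(0.42, 0.12), 3), round(pri(0.048, 0.052), 3))
```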

Volume: 19, Issue: 4

Recycling Plants Layout Design by Means of an Interactive Genetic Algorithm

by Laura Garcia-Hernandez, Antonio Arauzo-Azofra, Lorenzo Salas-Morera, Henri Pierreval, Emilio Corchado
Abstract

Facility Layout Design is known to be very important for attaining production efficiency because it directly influences manufacturing costs, lead times, work in process and productivity. Facility layout problems have been addressed using several approaches. Unfortunately, these approaches only take into account quantitative criteria. However, there are qualitative preferences, related to the knowledge and experience of the designer, which should also be considered in facility layout design. These preferences can be subjective, not known in advance and may change during the design process, so it is difficult to include them using a classic optimization approach. For that reason, we propose the use of an Interactive Genetic Algorithm (IGA) for designing the layout of two real recycling plants, taking into consideration subjective features from the designer. The designer's knowledge guides the evolution of the algorithm by evaluating facility layouts in each generation, adjusting the search to his/her preferences. To avoid designer fatigue, he/she evaluates only the most representative individuals of the population, selected through a soft computing clustering method. The algorithm is applied to two real-world waste recycling plant layout problems: a carton pack recycling plant and a chopped plastic one. The results are compared with another method, proving that the new approach is able to capture the designer's preferences in a reasonable number of iterations.

Volume: 19, Issue: 3

A collaborative scheme for boundary detection and tracking of continuous objects in WSNs

by Chauhdary Hussain, Myong-Soon Park, Ali Bashir, Sayed Shah, Jeongjoon Lee
Abstract

With rapid advancements in MEMS technologies, sensor networks have made possible a broad range of real-time applications. Object tracking and detection is one of the most prominent applications of wireless sensor networks. Individual object tracking and detection has been intensively discussed, such as tracking enemy vehicles and detecting illegal border crossings. The tracking and detection of continuous objects such as fire smoke, nuclear explosions and hazardous bio-chemical material diffusions pose new challenges because of the characteristics of such objects, i.e., expanding and shrinking in size, changing in shape, splitting into multiple objects or merging multiple objects into one over time. A continuous object covers a large area of the network and requires extensive communication to detect and track. Tracking continuous objects accurately and efficiently is challenging, as extensive communication consumes massive energy in the network, and handling energy in wireless sensor networks (WSNs) is a key issue in prolonging the lifetime of a network. In this paper, we propose collaborative boundary detection and tracking of continuous objects in WSNs. The proposed scheme adopts local communication between sensor nodes to find boundary nodes; boundary nodes enhance energy efficiency. The scheme uses an interpolation algorithm to find the boundary points and to acquire a precise boundary shape for the continuous object. The proposed scheme not only improves tracking accuracy but also decreases energy consumption by reducing the number of nodes that participate in tracking and minimizing the communication cost. Simulation results show a significant improvement over existing solutions.

Volume: 19, Issue: 3

Cluster analysis of citrus genotypes using near-infrared spectroscopy

by Qiuhong Liao, Yanbo Huang, Shaolan He, Rangjin Xie, Qiang Lv, Shilai Yi, Yongqiang Zheng, Xi Tian, Lie Deng, Chun Qian
Abstract

There are many genotypes and varieties in the citrus family. Currently, citrus classification systems show significant divergences in the varieties of species and in subgenus classification as well. In this study, near-infrared spectroscopy was used to acquire spectral information from the surface of citrus fruits, and cluster analysis was subsequently conducted to identify citrus genotypes. The results indicated that the combination of 9-point moving-average smoothing and multiplicative scatter correction was optimal for preprocessing the spectral data. In the spectral range of 1,180–1,220 nm, the cumulative reliability of the first two principal components was greater than 99.4%, and sweet oranges were clustered into an independent class. In 1,280–1,320 nm, systematic clustering performed better than principal component clustering, and all other sour oranges, except Goutoucheng, were clustered into a single clade. With dimension reduction, the cumulative reliability of the first five principal components in the full band of 1,000–2,350 nm reached 99.1%. Using principal component cluster analysis, pomelo and loose-skin mandarin were clustered together, and sweet and sour oranges were clearly separated. The clustering of pomelo with loose-skin mandarin implies that they may have a hybrid origin; Jiaogan mandarin, Daoxian yeju mandarin, Goutoucheng sour orange and Zhuhongju sour tangerine were clustered with sweet orange, which implies that these old varieties may contain characteristic compounds similar to sweet orange; and given that Jinlong lemon and Rangpur lime were clustered with sour orange, this supports their origin from sour orange. The study indicates the great potential of spectral analysis for citrus genotype identification and classification.
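The two preprocessing steps named above, 9-point moving-average smoothing and multiplicative scatter correction, can be sketched as follows; this is a generic textbook formulation, not the authors' code.

```python
# Generic sketch of the two spectral preprocessing steps mentioned above.
import numpy as np

def moving_average(spectrum, window=9):
    """9-point moving-average smoothing along the wavelength axis."""
    kernel = np.ones(window) / window
    return np.convolve(spectrum, kernel, mode="same")

def msc(spectra):
    """Multiplicative scatter correction: spectra is (n_samples, n_wavelengths).
    Each spectrum is regressed against the mean spectrum and corrected with
    the fitted slope and offset."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, offset = np.polyfit(ref, s, 1)
        corrected[i] = (s - offset) / slope
    return corrected
```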

Volume: 19, Issue: 3

A New Decision Support Model Based on Impact Factor Analysis for Optimized Scheduling of Tomato Harvesting

by Dan Wang, Hong Zhang, Guocai Yang
Abstract

During fructescence, abnormal weather has a great effect on the economic benefit of tomato production; for example, heavy rain can bring great losses. Generally speaking, picking fruits in advance can reduce these losses. However, the market value of the fruit is closely related to its maturity. This indicates that a dedicated system supporting decisions that affect tomato quality is required. In this work, a novel model designed for scheduling the tomato harvest is proposed. In addition, a feasibility study is carried out to evaluate the functionality and viability of a potential integrated decision support system. We establish hierarchical relationships among factors to analyse the inner relationships between the factors and their effect on economic benefit. Meanwhile, we study a tomato ripeness algorithm and the corresponding performance evaluation methods to provide harvest scheduling for farmers, especially those making harvest plans in advance to avoid abnormal weather. Preliminary results show that the use of a dedicated decision support model for the tomato harvest has the potential of significantly reducing costs and improving the quality of the harvest. In addition, a simulation test platform demonstrates the effectiveness of the algorithm, and historical data are used to verify the proposed model's accuracy and feasibility. To a certain extent, our model can improve the economic benefit of tomato production.

Volume: 19, Issue: 3

A comparative analysis of spectral vegetation indices to estimate crop leaf area index

by Yuanyuan Fu, Guijun Yang, Jihua Wang, Haikuan Feng
Abstract

Leaf area index (LAI) is a key variable for reflecting crop growth status and forecasting crop yield. Many spectral vegetation indices (SVIs) suffer from a saturation effect that limits the usefulness of optical remote sensing for crop LAI retrieval. In addition, leaf chlorophyll concentration and soil background reflectance are two other main factors that influence crop LAI retrieval using SVIs. In order to make better use of SVIs for crop LAI retrieval, it is important to evaluate the performance of SVIs under varying conditions. In this context, the PROSPECT and SAILH models were used to simulate a wide range of crop canopy reflectance for a comparative analysis. A sensitivity function was introduced to investigate the sensitivity of SVIs over the range of LAI. This sensitivity function can quantify the detailed relationship between SVIs and LAI, unlike regression-based statistical parameters such as the coefficient of determination and root mean square error, which can only evaluate the overall performance of SVIs. The experimental results indicated that (1) LAI = 3 was an appropriate demarcation point for the comparative analysis of SVIs; (2) when LAI was no more than three, variations of the soil background had significant negative effects on SVIs, and the LAI Determining Index (LAIDI), Optimized Soil-adjusted Vegetation Index (OSVI) and Renormalized Difference Vegetation Index (RDVI) were relatively optimal choices for LAI retrieval; (3) when LAI was larger than three, leaf chlorophyll concentration played an important role in the performance of SVIs, and Enhanced Vegetation Index 2 (EVI2), LAIDI, RDVI, Soil Adjusted Vegetation Index (SAVI), Modified Triangular Vegetation Index 2 (MTVI2) and Modified Chlorophyll Absorption Ratio Index 2 (MCARI2) were less affected by leaf chlorophyll concentration and performed better due to their higher sensitivity to LAI, even when LAI reached seven. These analytical results can be used to guide the selection of optimal SVIs for crop LAI retrieval in different phenological periods.
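For context, two of the soil-robust indices highlighted above, RDVI and SAVI, follow the standard formulas sketched below (L = 0.5 is the usual soil-adjustment factor); the reflectance values in the example are illustrative.

```python
# Standard published forms of two indices named in the abstract.
import math

def rdvi(nir, red):
    """Renormalized Difference Vegetation Index."""
    return (nir - red) / math.sqrt(nir + red)

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index; L = 0.5 is the usual adjustment factor."""
    return (1 + L) * (nir - red) / (nir + red + L)

print(round(rdvi(0.45, 0.10), 3), round(savi(0.45, 0.10), 3))   # 0.472 0.5
```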

Volume: 19, Issue: 3

Remote Sensing of the Seasonal Naked Croplands Using Series of NDVI Images and Phenological Features

by Zhengying Shan, Qingyun Xu
Abstract

Naked cropland elimination is an important part of the Beijing Olympic ecological project. In this paper, multi-temporal satellite data were used to monitor and locate the naked croplands. Three Landsat TM images and two “Beijing-1” small-satellite images were selected to calculate NDVI series according to crop phenological calendars and investigated information on agricultural cropping structures in the Beijing suburbs. Based on the phenological spectral characteristics of the main agricultural land use types, a classification scheme was proposed to extract the naked croplands. Considering the hierarchical structure of the classification and the different feature-selection demands in different periods, a decision tree algorithm and a stepwise masking technique were employed to extract the typical crops in each season, leaving the naked croplands. Accuracy assessment of the naked croplands in winter and spring was performed by comparing the monitored areas with statistical data. The results show that the area of naked croplands in winter and spring was 170368.1 ha in Beijing. The areas of the top five districts (Yanqing, Shunyi, Daxing, Miyun and Tongxian) were 17933.3 ha, accounting for 69.2% of the Beijing total. The areas of naked cropland were 25719.6 ha, 4485.4 ha and 3325 ha in summer, autumn and all year round, respectively. The experimental results demonstrate that our method can quickly and simply monitor agricultural land use.

Volume: 19, Issue: 2

In-Field Recognition and Navigation Path Extraction For Pineapple Harvesting Robots

by Bin Li, Maohua Wang
Abstract

Fruit recognition and navigation path extraction are important issues in developing fruit harvesting robots. This manuscript presents a recent study on developing an algorithm for recognizing pineapple fruits "on the go" and the cultivation rows for a harvesting robotic system. In-field pineapple recognition can be difficult due to the many overlapping leaves from neighbouring plants. As pineapple fruits (Ananas comosus) are normally located at the top of the plant and crowned by a compact tuft of young leaves, image processing algorithms were developed in this study to recognize the crown and thereby locate the corresponding pineapple fruit. RGB (Red, Green and Blue) images were first collected from a top view of pineapple plants in the field and transformed into the HSI (Hue, Saturation and Intensity) colour model. Then, features of the pineapple crowns were extracted and used to develop a classification algorithm. After the pineapple crowns were recognized, the locations of the crowns grown in one row were determined and fitted to a straight line, which could be used to navigate the harvesting robot during harvest. To validate the above algorithms, 100 images were taken in a pineapple field under different environments in Guangdong Province as a validation set. The results showed that the pineapple recognition rate can reach 94% on clear-sky days, which was much better than on overcast days, and that the navigation path was well fitted.
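The RGB-to-HSI conversion mentioned above is the standard one sketched below; the sample pixel values are illustrative and the function is not the authors' implementation.

```python
# Standard RGB -> HSI conversion; inputs are normalized values in [0, 1].
import math

def rgb_to_hsi(r, g, b):
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    h = math.acos(max(-1.0, min(1.0, num / den)))
    if b > g:
        h = 2 * math.pi - h
    return h, s, i    # hue in radians, saturation and intensity in [0, 1]

print(rgb_to_hsi(0.2, 0.6, 0.1))   # greenish pixel -> hue near 2 rad
```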

Volume: 19, Issue: 1

Chaotic Differential Evolution Algorithm Based On Competitive Coevolution And Its Application To Dynamic Optimization Of Chemical Processes

by Xin Li, Chunping Hu, Xuefeng Yan
Abstract

A chaotic differential evolution algorithm based on competitive coevolution is proposed to improve the performance of the differential evolution (DE) algorithm. In the proposed algorithm (named CO-CDE), the population is first divided into several sub-populations, and each sub-population evolves individually using a different differential scheme. At the end of this evolution, each sub-population holds one individual with the best fitness. All of these best individuals then compete with each other; in this competition, the fitness of an individual is defined as the number of times it is superior to the others. The winning individual is picked out and its information is shared with the whole population. To avoid premature convergence and raise the probability of escaping from local optima, a chaotic evolutionary operation based on chaotic variables is introduced into the algorithm and applied to the whole population. Simulation experiments show that the CO-CDE algorithm generally outperforms the original differential evolution algorithm on a suite of benchmark functions. Furthermore, the CO-CDE algorithm is applied to the dynamic optimization of a chemical process. The experimental results prove the proposed approach to be effective, statistically consistent and promising.
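As background, the sketch below shows the two generic building blocks the abstract refers to, a DE/rand/1/bin step and a logistic chaotic map, rather than the full CO-CDE algorithm with its sub-population competition.

```python
# Generic DE/rand/1/bin step and a logistic chaotic map; not the CO-CDE code.
import numpy as np

rng = np.random.default_rng(0)

def de_rand_1_bin(pop, i, F=0.5, CR=0.9):
    """Build a trial vector for individual i from three distinct random donors."""
    a, b, c = pop[rng.choice([j for j in range(len(pop)) if j != i], 3, replace=False)]
    mutant = a + F * (b - c)
    cross = rng.random(pop.shape[1]) < CR
    cross[rng.integers(pop.shape[1])] = True      # guarantee at least one crossed gene
    return np.where(cross, mutant, pop[i])

def logistic_map(x, steps=1):
    """Logistic map x <- 4x(1 - x), chaotic for x in (0, 1)."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

pop = rng.random((10, 4))            # toy population of 10 four-dimensional candidates
trial = de_rand_1_bin(pop, 0)
x_chaotic = logistic_map(0.37, steps=5)
```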

Volume: 19, Issue: 1

Image Segmentation Method for Crop Nutrient Deficiency Based on Fuzzy C-Means Clustering Algorithm

by Jing Hu, Daoliang Li, Guifen Chen, Qingling Duan, Yeiqi Han
Abstract

As crop nutrient deficiency has become more common, this research aims to find a method to segment and determine nutrient deficiency regions in crop images based on image processing technology. The experiment starts by obtaining 256 images of various crops with nutrient deficiency, such as oat, wheat, beet, maize, rye, potato, kidney bean and sunflower. Second, all experimental images are pre-processed by colour transformation and enhancement to improve their quality. Finally, the nutrient-deficient diseased regions of the crop images are segmented by the fuzzy c-means (FCM) clustering algorithm. In the experiments, the image colour space was transformed from RGB to HSV and the images were enhanced using a median filter, which not only removed noise but also kept edges clear and efficiently highlighted the diseased regions. To test the segmentation accuracy, other common algorithms such as thresholding, edge detection and domain division were compared with FCM. The results showed that the FCM algorithm was the appropriate algorithm for segmenting complex and uncertain crop disease images. Applying fuzzy set theory to delineating the nutrient deficiency regions is the novel point of this research, which has great practical significance for variable-rate fertilization based on image processing technology.
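A bare-bones fuzzy c-means iteration of the kind used here is sketched below (membership and centre updates only); parameter choices such as c = 3 clusters are illustrative, not the paper's settings.

```python
# Minimal fuzzy c-means sketch: alternate centre and membership updates.
import numpy as np

def fcm(X, c=3, m=2.0, iters=50, seed=0):
    """X: (n_samples, n_features), e.g. HSV pixel values flattened to rows."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                      # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]        # weighted centres
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
    return U.argmax(axis=1), centers

# Example (hypothetical input): labels, centres = fcm(hsv_pixels.reshape(-1, 3))
```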

Volume: 18, Issue: 8

The Application of Unmanned Aerial Vehicle Remote Sensing in Quickly Monitoring Crop Pests

by Jianwei Yue, Tianjie Lei, Changchun Li, Jiangqun Zhu
Abstract

Strengthening agricultural pest monitoring, prevention and control is particularly important for the development of agriculture. Traditional remote sensing (RS) methods for agricultural pest monitoring cannot meet the needs of agricultural development because they are time-consuming, costly and of low accuracy. Unmanned Aerial Vehicle (UAV) remote sensing, a new means of remote sensing, is therefore introduced in this paper for agricultural monitoring. UAV remote sensing offers real-time capability, speed, convenience, low cost, high accuracy and abundant data. With its great flexibility, it is not only easy to perform regional and long-term agricultural pest monitoring, but also feasible to provide a scientific basis for crop pest control, so that the timing needs of pest control are better met. In this article, the Baiyangdian agricultural area is chosen as the study area. We discuss how to process UAV images rapidly and extract diseased-crop information; in particular, an improved scale-invariant feature transform (SIFT) algorithm and object-oriented information extraction are used for image processing. Good results have been obtained for local crop pest control in actual treatment. With the advantages of UAV remote sensing, there is a broad application prospect in precision agriculture.

Volume: 18, Issue: 8

Analysis of Rice Growth Using Multi-Temporal RADARSAT-2 Quad-Pol SAR Images

by Fan Wu, Bo Zhang, Hong Zhang, Chao Wang, Yixian Tang
Abstract

Three time series of quad-polarization RADARSAT-2 images were acquired from the transplanting to the harvesting of a rice crop. Ground truth data such as rice height and biomass were measured during the RADARSAT-2 acquisitions in Hainan Province, southern China. Among the different observations, the dry biomass and fresh biomass of the rice crop show a temporal signature with a clear correlation with the backscattering coefficient in HV/VH polarization, with correlation coefficients mostly larger than 0.8, while the HH and VV polarizations show an unfavourable correlation with crop dry/fresh biomass, with correlation coefficients lower than 0.6. Variations in the scattering mechanism of the rice crop from transplantation to maturity were investigated based on the Pauli decomposition and H/α-Wishart classification. In the Pauli decomposition, the Pauli A component is the main backscatter of the rice crop, with values between 0.2 and 0.6 over the whole rice growth stage; the Pauli B component ranks second, with values from 0.1 to 0.3; and the Pauli C component is low, from 0.02 to 0.1. However, the Pauli C component of the rice crop shows the best correlation with days after transplantation. The experimental and analysis results show that quad-polarization RADARSAT-2 SAR data have great potential for monitoring rice growth. Furthermore, when rice crops are in the reproductive or ripening stage, the SAR data can give good results for rice mapping. With the information on biomass and mapped rice area, rice yield estimation can be made.
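The Pauli decomposition referred to above is the standard one sketched below, which turns a quad-pol scattering matrix into surface, double-bounce and volume components; the example scattering values are illustrative.

```python
# Standard Pauli decomposition of a quad-pol scattering matrix (reciprocal case),
# returned as power fractions of the three components.
import numpy as np

def pauli_components(s_hh, s_hv, s_vv):
    a = (s_hh + s_vv) / np.sqrt(2)   # surface (odd-bounce) scattering
    b = (s_hh - s_vv) / np.sqrt(2)   # double-bounce scattering
    c = np.sqrt(2) * s_hv            # volume (cross-pol) scattering
    power = abs(a) ** 2 + abs(b) ** 2 + abs(c) ** 2
    return abs(a) ** 2 / power, abs(b) ** 2 / power, abs(c) ** 2 / power

# Toy example dominated by surface scattering
print(pauli_components(1.0 + 0j, 0.2 + 0j, 0.6 + 0j))
```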

Volume: 18, Issue: 8

Multi-Scale Model Of Dam Safety Condition Monitoring Based On Dynamic Bayesian Networks

by Fang Weihua, Xu Lanyu
Abstract

In order to better monitor dam safety conditions, a Dynamic Bayesian Network (DBN) model is developed in this paper to overcome the shortcomings of ordinary monitoring methods. Ordinary methods include comprehensive assessment methods and numerical simulation methods. Comprehensive assessment methods have shortcomings such as weight determination, scale differences and variable correlations; in addition, they cannot describe the multi-scale characteristics of monitoring data or the dynamic properties of a large dam. Numerical simulation methods require complex mathematical theory, mechanics methodologies and high-performance computers. The DBN model considers the correlations, delays and multi-scale characteristics of variables such as deformation, seepage, stress, water load and temperature load, as well as the duration at every state. The new model is also simpler, requiring less expert experience, lower computational complexity and fewer experiments than numerical simulation. A case study shows that a better effect is achieved in dam safety condition monitoring because the model can take the specific properties of the dam into account. The main novel technical contributions of this paper are as follows: applying DBN for the first time to establish a multi-scale dynamic model for monitoring the condition of large engineering structures; providing the new concept of latent durational-state time and analyzing its physical meaning for dams; and using mechanics simulation analysis to verify the new model.

Volume: 18, Issue: 7

Prototype of an Aquacultural Information System Based on Internet of Things E-Nose

by Daokun Ma, Qisheng Ding, Zhenbo Li, Daoliang Li, Yaoguang Wei
Abstract

Aquaculture is the fastest growing food-producing sector in the world, especially in P. R. China. In the ongoing process of Internet growth, a new development is on its way, namely the evolution from a network of interconnected computers to a network of interconnected objects, called the Internet of Things (IoT). Constructing an affordable, easy-to-use aquaculture information system based on the IoT is the future trend for modern aquaculture. A prototype IoT aquaculture information system was developed with the following functions: (1) sense the water quality in aquaculture ponds in real time and send the water quality data promptly to a remote IoT platform through a wireless sensor network and the mobile internet; (2) forecast water quality trends based on real-time data, with operating suggestions created automatically for specific aquaculture applications; (3) exchange information with special and common users in real time and efficiently via Web, WAP and SMS; (4) operate some control devices automatically with authorization. Five control nodes, twelve water quality sensor nodes and a wireless weather sensor node were deployed in seven aquaculture ponds belonging to five farmers at the Yixing Peng-yao eco-agricultural demonstration farm located in Wuxi, Jiangsu Province, P. R. China. Each farmer may access the web pages of the aquaculture service platform to monitor the variation of water quality in their ponds and control their own aerators via the internet or a cell phone. At the same time, real-time water quality and weather information is sent to the related farmers as a public service. The real-time water quality and weather monitoring data are reliable, and the aquaculture information service has been accepted by farmers and the local government, which shows that the prototype IoT aquaculture information system is meaningful.

Volume: 18, Issue: 5

Monitoring Winter Wheat Maturity By Hyperspectral Vegetation Indices

by Qian Wang, Cunjun Li, Jihua Wang, Yuanfang Huang, Xiaoyu Song, Wenjiang Huang
Abstract

It is very important to harvest wheat at the optimum time, which greatly affects grain quality, referred to in this research mainly as protein content: harvesting too early shortens the grain-filling process, while harvesting too late leads to yield losses and poor quality caused by high grain respiration in dry, hot, windy weather and by sprouting in rainy weather. Research was conducted during 2007–2008 to determine whether vegetation indices could be used as indicators of winter wheat maturation. The cultivar Jingdong 12 was planted under four nitrogen treatments, and reflectance and agronomic parameters were measured on five different harvest dates. During maturation, grain protein content increased from 12.2 to 16.5, ear water content declined within the range of 36 to 60, the chlorophyll and carotenoid contents of both leaf and ear decreased, and the ratio of carotenoids to chlorophyll increased on the whole. Seven maturation monitoring models were established from the corresponding vegetation indices, which were chosen by comparing correlation coefficients between vegetation indices and agronomic parameters. Compared with the other models, the ear water content model was chosen as the best one due to the lowest average absolute relative error and high prediction accuracy in validation, with 0.03 and 0.04 in the cross test and 0.98 and 0.98 in the training sampling test. The results suggest that hyperspectral vegetation indices could potentially aid in predicting winter wheat maturation.

Volume: 18, Issue: 5

Nondestructive Evaluation of Welding Crack Defects in Structural Component Of Track Crane Using Acoustic Emission Technique

by Wei Wang, Hongxing Wei, Yu Zhen, Yantao Dou, Huiming Heng
Abstract

On-line defect detection in structural components of track cranes, such as the crane boom, outriggers, turntable and vehicle frame, is a difficult problem in nondestructive testing (NDT). In the present study, the usefulness of acoustic emission (AE) measurements for the detection of welding crack defects was investigated in specimens made of HG70 steel, which is widely used for manufacturing structural components of track cranes. First, a three-point bending test on a standard specimen made of HG70 steel was conducted. Then the analyses of AE source location and AE source characteristics are introduced respectively. As a result, rudimentary location analysis can be attained using the linear location method. For AE source characteristics, the procedural maps of amplitude, RMS or energy rate, together with the energy accumulation map and the corresponding conditional filters, are available analysis methods to reflect welding crack defects. We find that the maxima of the amplitude, energy rate and RMS appear at approximately the same time, and that this is the point at which the energy accumulation shows a remarkable jump in the energy accumulation map. Finally, we deduce that such a sudden energy leap is caused by the activity of the welding crack, and that the multistage energy release results from the step-by-step propagation of the welding crack.

Volume: 18, Issue: 5

Multi-Connect Architecture (MCA) Associative Memory: A Modified Hopfield Neural Network

by Emad I Kareem, Wafaa A.H Alsalihy, Aman Jantan
Abstract

Although the Hopfield neural network is one of the most commonly used neural network models for auto-association and optimization tasks, it has several limitations. For example, it is well known that Hopfield neural networks have limited pattern storage capacity, local minimum problems, a limited tolerable noise ratio, retrieval of the reverse (inverted) value of a pattern, and shifting and scaling problems. This research proposes a multi-connect architecture (MCA) associative memory to improve the Hopfield neural network by modifying the network architecture and the learning and convergence processes. The purpose of this modification is to increase the performance of the associative memory neural network by avoiding most of the Hopfield network's limitations. In general, MCA is a single-layer neural network for auto-association tasks that works in two phases, namely the learning and convergence phases. MCA was developed based on two principles. First, the smallest network size is used rather than one depending on the pattern size. Second, the learning process is performed only on limited parts of the pattern, to avoid learning similar parts several times. The experiments performed show promising results: MCA provides a highly efficient associative memory by avoiding most of the Hopfield network's limitations. The results prove that the MCA network can learn and recognize an unlimited number of patterns of varying size with an acceptable noise rate, in comparison to the traditional Hopfield neural network.
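For comparison, the classical Hopfield baseline that MCA modifies uses Hebbian storage and asynchronous sign updates, as in the generic sketch below; MCA's multi-connect architecture itself is not reproduced here.

```python
# Classical Hopfield associative memory (baseline, not MCA): Hebbian weights
# from bipolar patterns and asynchronous recall sweeps.
import numpy as np

def train_hopfield(patterns):
    """patterns: (p, n) array-like with values +1/-1."""
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)                 # no self-connections
    return W

def recall(W, probe, sweeps=5):
    s = np.asarray(probe, dtype=float).copy()
    for _ in range(sweeps):
        for i in range(len(s)):              # asynchronous unit-by-unit update
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

W = train_hopfield([[1, -1, 1, -1], [1, 1, -1, -1]])
print(recall(W, [1, -1, 1, 1]))              # converges to the nearer stored pattern
```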

Volume: 18, Issue: 3

Negative Dielectrophoretic Particle Positioning in A Fluidic Flow

by Tomoyuki Yasukawa, Junko Yamada, Hitoshi Shiku, Fumio Mizutani, Tomokazu Matsue
Abstract

In this work, we report the control of microparticle positions within a fluid flow, based on particle size, by using the repulsive force generated by negative dielectrophoresis (n-DEP). The n-DEP-based fluidic channel, which consisted of navigator and separator electrodes, was used to steer the particle flow to the center of the channel and to control the particle position in the fluidic flow. A mixture of 10 μm- and 20 μm-diameter particles was introduced into the channel, which had a height of 30 μm, at 700 μm/s. On applying an AC voltage (23 V peak-to-peak, 7 MHz) to the navigator electrodes on the upper and lower substrates in an n-DEP frequency region, the suspended microparticles were guided to the center of the fluidic channel and then channelled through the passage gate positioned at the center of the channel. The AC electric field was also applied to the separator electrodes, resulting in the formation of flow paths with low electric fields. The separator consisted of five band electrodes with different gap spacings between adjacent bands, which allows flow paths with different electric fields to form. The microparticles flowed separately, in line, along the paths formed between the band electrodes: the 10 μm-diameter particles flowed mainly through the narrow path and the 20 μm-diameter particles through the wide path arranged outside the center. These results indicate that the positions of the two types of microparticles in the fluidic channel were easily separated and controlled using n-DEP. The present procedure therefore provides a route to simple, miniaturized DEP-based separators.
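The size dependence exploited above follows from the textbook dielectrophoretic force on a spherical particle, which scales with the cube of the radius; the sketch below uses illustrative values for the medium permittivity, Clausius-Mossotti factor and field gradient, none of which are taken from the paper.

```python
# Textbook time-averaged DEP force magnitude for a spherical particle:
# F = 2*pi*eps_m*r^3*Re[K(w)]*grad(|E_rms|^2); a negative Re[K] gives n-DEP
# (the particle is pushed toward low-field regions).
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def dep_force(radius_m, eps_medium_rel, re_cm_factor, grad_E2):
    return 2 * math.pi * EPS0 * eps_medium_rel * radius_m ** 3 * re_cm_factor * grad_E2

# Illustrative values: water-like medium, Re[K] = -0.5, arbitrary field gradient
f10 = dep_force(5e-6, 78, -0.5, 1e13)
f20 = dep_force(10e-6, 78, -0.5, 1e13)
print(f20 / f10)   # ~8: a 20 um bead feels about 8x the force on a 10 um bead
```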

Volume: 18, Issue: 2

Soft Computation Using Artificial Neural Estimation And Linear Matrix Inequality Transmutation For Controlling Singularly-Perturbed Closed Time-Independent Quantum Computation Systems, Part B: Hierarchical Regulation Implementation

by Anas Al-Rabadi
Abstract

A new method of intelligent control for time-independent closed quantum computation systems is introduced in this second part of the paper. The goal is to apply a new implementation of an intelligent hierarchical control method to quantum computing systems, where the obtained results are satisfactory for the robust control of time-independent quantum computations. The new method utilizes supervised recurrent artificial neural networks (ANN) to estimate the parameters of the transformed system matrix [A]. After the system matrix estimation is performed, a linear matrix inequality (LMI) is used to determine the permutation matrix [P] so that a complete system transmutation {[B], [C], [D]} is accomplished. The transformed system model is then reduced using singular perturbation, and state feedback control is implemented to enhance system performance. In quantum computing and mechanics, a closed system is an isolated system that cannot exchange energy or matter with its surroundings and does not interact with other quantum systems. In contrast to open quantum systems, closed quantum systems obey unitary evolution and are thus information lossless. The experimental simulations were implemented on the time-independent closed quantum computing system using the important quantum case of a particle in a finite-walled box for m-valued quantum computing, in which the resulting distinct energy states are used as the orthonormal basis states. Although several other conventional control methodologies and schemes exist for controlling computational circuits and systems, the introduced intelligent hierarchical control method simplifies the order of the ANN-estimated, LMI-transformed, eigenvalue-preserving quantum model and thereafter synthesizes, as demonstrated, simpler controllers for the utilized closed time-independent quantum computation devices, circuits and systems, achieving the desired enhanced quantum system performance.

Volume: 18, Issue: 1