Bimonthly    Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
  • Table of Contents
      15 January 2008, Volume 23 Issue 1   
    Preface
    Preface
    Xiao-Dong Li, Wen-Jian Luo, and Xin Yao
    Journal of Data Acquisition and Processing, 2008, 23 (1): 1-. 
    Abstract   PDF(34KB) ( 845 )  
    Articles
    Orthogonal Methods Based Ant Colony Search for Solving Continuous Optimization Problems
    Xiao-Min Hu, Jun Zhang, and Yun Li
    Journal of Data Acquisition and Processing, 2008, 23 (1): 2-18. 
    Abstract   PDF(828KB) ( 9493 )  
    Research into ant colony algorithms for solving continuous optimization problems forms one of the most significant and promising areas in swarm computation. Although traditional ant algorithms are designed for combinatorial optimization, they have shown great potential in solving a wide range of optimization problems, including continuous optimization. Aiming to solve continuous problems effectively, this paper develops a novel ant algorithm termed "continuous orthogonal ant colony" (COAC), whose pheromone deposit mechanism enables ants to search for solutions collaboratively and effectively. By using the orthogonal design method, ants in the feasible domain can explore their chosen regions rapidly and efficiently. By implementing an "adaptive regional radius" method, the proposed algorithm reduces the probability of being trapped in local optima and thereby enhances global search capability and accuracy. An elitist strategy is also employed to retain the most valuable points. The performance of COAC is compared with that of two other ant algorithms for continuous optimization, API and CACO, on seventeen functions in the continuous domain. The results demonstrate that the proposed COAC algorithm outperforms the others.
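The orthogonal design idea behind COAC can be illustrated with a small sketch (the array and region bounds below are illustrative, not taken from the paper): instead of probing all 2^3 = 8 corner combinations of a 3-D region, a two-level orthogonal array tests a balanced subset of 4 points in which every pair of dimensions still sees all four level combinations.

```python
# L4(2^3) orthogonal array: each row picks a level (0 or 1) per dimension.
L4 = [
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]

def orthogonal_points(low, high):
    """Map orthogonal-array levels onto the region's per-dimension bounds."""
    return [
        tuple(low[d] if lvl == 0 else high[d] for d, lvl in enumerate(row))
        for row in L4
    ]

# 4 balanced probe points of the unit cube instead of its 8 corners.
points = orthogonal_points(low=(0.0, 0.0, 0.0), high=(1.0, 1.0, 1.0))
```

The balance property is what makes the probing efficient: restricted to any two dimensions, the 4 rows cover all 4 level combinations exactly once.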
    Analyzing the Simple Ranking and Selection Process for Constrained Evolutionary Optimization
    Ehab Z. Elfeky, Ruhul A. Sarker, and Daryl L. Essam
    Journal of Data Acquisition and Processing, 2008, 23 (1): 19-34 . 
    Abstract   PDF(3650KB) ( 6318 )  
    Many optimization problems arising in practical applications have functional constraints, and some of these constraints are active, meaning that they prevent any solution from improving the objective function value beyond the constraint limits. Therefore, the optimal solution usually lies on the boundary of the feasible region. In order to converge faster when solving such problems, a new ranking and selection scheme is introduced which exploits this feature of constrained problems. In conjunction with selection, a new crossover method based on three parents is also presented. When the results of this new algorithm are compared with those of six other evolutionary-based methods on 12 benchmark problems from the literature, it shows very encouraging performance. T-tests are applied to determine whether there are any statistically significant differences between the algorithms. A study is also carried out to show the effect of each component of the proposed algorithm.
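To give a sense of what a three-parent operator looks like (the paper's actual crossover may differ; this is only the general idea), one simple variant inherits each gene from one of three parents chosen at random:

```python
import random

def three_parent_crossover(p1, p2, p3, rng=random):
    """Illustrative three-parent crossover: each gene of the child is
    copied from the corresponding position of a randomly chosen parent."""
    return [rng.choice(genes) for genes in zip(p1, p2, p3)]

child = three_parent_crossover([1, 1, 1], [2, 2, 2], [3, 3, 3])
# every gene of `child` comes from one of the three parents
```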
    Histogram-Based Estimation of Distribution Algorithm: A Competent Method for Continuous Optimization
    Nan Ding, Shu-De Zhou, and Zeng-Qi Sun
    Journal of Data Acquisition and Processing, 2008, 23 (1): 35-43 . 
    Abstract   PDF(384KB) ( 8557 )  
    Designing efficient estimation of distribution algorithms (EDAs) for optimizing complex continuous problems is still a challenging task. This paper utilizes a histogram probabilistic model to describe the distribution of the population and to generate promising solutions. The advantage of the histogram model, its intrinsic multimodality, makes it well suited to describing the solution distribution of complex and multimodal continuous problems. To make the histogram model explore and exploit the search space more efficiently, several strategies are incorporated into the algorithm: the surrounding effect reduces the population size needed to estimate the model with a given number of bins, and the shrinking strategy guarantees the accuracy of the optimal solutions. Furthermore, this paper shows that the histogram-based EDA can give comparable or even much better performance than the predominant EDAs based on Gaussian models.
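The core loop of a histogram-based EDA can be sketched for a single dimension (function and parameter names are illustrative, not the paper's): estimate a histogram from the selected population, then draw new values with probability proportional to bin frequency, uniformly within each bin.

```python
import random

def sample_from_histogram(values, n_bins, n_samples, rng=random):
    """One illustrative EDA step in one dimension: fit a histogram to the
    selected population `values`, then sample new candidates from it."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for v in values:
        idx = min(int((v - lo) / width), n_bins - 1)  # clamp hi into last bin
        counts[idx] += 1
    samples = []
    for _ in range(n_samples):
        b = rng.choices(range(n_bins), weights=counts)[0]  # pick a bin
        samples.append(lo + (b + rng.random()) * width)    # uniform in bin
    return samples

new_population = sample_from_histogram([0.0, 0.1, 0.2, 0.9, 1.0], 5, 50)
```

The intrinsic multimodality is visible here: bins around both 0.1 and 0.95 have mass, so the sampler keeps generating candidates near both peaks.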
    Interleaving Guidance in Evolutionary Multi-Objective Optimization
    Lam Thu Bui, Kalyanmoy Deb, Hussein A. Abbass, and Daryl Essam
    Journal of Data Acquisition and Processing, 2008, 23 (1): 44-63 . 
    Abstract   PDF(2481KB) ( 5933 )  
    In this paper, we propose a framework that uses localization for multi-objective optimization to simultaneously guide an evolutionary algorithm in both the decision and objective spaces. The localization is built using a limited number of adaptive spheres (local models) in the decision space. These spheres are guided, using direction information, through the decision space towards areas with non-dominated solutions. A second mechanism adjusts the spheres to specialize on different parts of the Pareto front by using a guided dominance technique in the objective space. Through this interleaved guidance in both spaces, the spheres are guided towards different parts of the Pareto front while also exploring the decision space efficiently. The experimental results show good performance for the local models using this dual guidance, in comparison with their original version.
    Spatially-Structured Sharing Technique for Multimodal Problems
    Grant Dick and Peter Whigham
    Journal of Data Acquisition and Processing, 2008, 23 (1): 64-76 . 
    Abstract   PDF(1139KB) ( 4880 )  
    Spatially-structured populations are one approach to increasing genetic diversity in an evolutionary algorithm (EA). However, they are susceptible to convergence to a single peak in a multimodal fitness landscape. Niching methods, such as fitness sharing, allow an EA to maintain multiple solutions in a single population; however, they have rarely been used in conjunction with spatially-structured populations. This paper introduces local sharing, a method that applies sharing to the overlapping demes of a spatially-structured population. The combination of these two methods succeeds in maintaining multiple solutions in problems that have previously proved difficult for sharing alone (and vice versa).
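Classic fitness sharing, which local sharing applies within each deme, divides an individual's raw fitness by its niche count: the sum of a sharing kernel over all neighbours within a radius sigma_share. A minimal one-dimensional sketch (function and parameter names are illustrative):

```python
def shared_fitness(raw_fitness, positions, sigma_share):
    """Fitness sharing with a triangular kernel: individuals in crowded
    niches have their fitness divided by a larger niche count."""
    shared = []
    for f_i, x_i in zip(raw_fitness, positions):
        niche = sum(
            max(0.0, 1.0 - abs(x_i - x_j) / sigma_share)  # kernel sh(d)
            for x_j in positions
        )
        shared.append(f_i / niche)  # niche >= 1 (each individual counts itself)
    return shared

# Three individuals crowd one peak; one sits alone on another.
shared = shared_fitness([1.0, 1.0, 1.0, 1.0], [0.0, 0.0, 0.0, 5.0], 1.0)
# crowded individuals get 1/3 each; the lone individual keeps 1.0
```

This is the pressure that keeps multiple peaks populated: the lone individual's shared fitness exceeds that of the crowded ones, so selection stops abandoning its peak.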
    Mining Frequent Generalized Itemsets and Generalized Association Rules Without Redundancy
    Daniel Kunkle, Donghui Zhang, and Gene Cooperman
    Journal of Data Acquisition and Processing, 2008, 23 (1): 77-102 . 
    Abstract   PDF(1057KB) ( 5876 )  
    This paper presents new algorithms to efficiently mine max frequent generalized itemsets (g-itemsets) and essential generalized association rules (g-rules). These are compact and general representations for all frequent patterns and all strong association rules in the generalized environment. Our results fill an important gap among algorithms for frequent patterns and association rules by combining two concepts. First, generalized itemsets employ a taxonomy of items rather than a flat list of items. This produces more natural frequent itemsets and associations, such as (meat, milk) instead of (beef, milk), (chicken, milk), etc. Second, compact representations of frequent itemsets and strong rules, whose result size is exponentially smaller, can solve a standard dilemma in pattern mining: with small threshold values for support and confidence, the user is overwhelmed by the extraordinary number of identified patterns and associations; but with large threshold values, some interesting patterns and associations fail to be identified. Our algorithms can also expand those max frequent g-itemsets and essential g-rules into the much larger set of ordinary frequent g-itemsets and strong g-rules. While that expansion is not recommended in most practical cases, we do so in order to compare with existing algorithms that handle only ordinary frequent g-itemsets. In this case, the new algorithm is shown to be thousands, and in some cases millions, of times faster than previous algorithms. Further, the new algorithm succeeds in analyzing deeper taxonomies, with depths of seven or more, whereas experimental results for previous algorithms were limited to taxonomies of depth at most three or four. For each of the two problems, a straightforward lattice-based approach is briefly discussed and then a classification-based algorithm is developed. In particular, the two classification-based algorithms are MFGI_class for mining max frequent g-itemsets and EGR_class for mining essential g-rules. The classification-based algorithms feature conceptual classification trees and dynamic generation and pruning algorithms.
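The effect of the taxonomy can be shown with a toy example (the taxonomy and transactions below are illustrative, not from the paper): raising "beef" and "chicken" to their common ancestor "meat" lets (meat, milk) become frequent even when neither specific pair is.

```python
# child -> parent taxonomy; items without an entry are their own category
taxonomy = {"beef": "meat", "chicken": "meat"}

def generalize(transaction, taxonomy):
    """Replace each item by its taxonomy parent before support counting."""
    return {taxonomy.get(item, item) for item in transaction}

transactions = [{"beef", "milk"}, {"chicken", "milk"}, {"beef", "bread"}]
generalized = [generalize(t, taxonomy) for t in transactions]

# (meat, milk) reaches support 2, while (beef, milk) and (chicken, milk)
# each have support 1 in the original transactions.
support_meat_milk = sum(1 for t in generalized if {"meat", "milk"} <= t)
```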
    Efficient Optimization of Multiple Subspace Skyline Queries
    Zhen-Hua Huang, Jian-Kui Guo, Sheng-Li Sun, and Wei Wang
    Journal of Data Acquisition and Processing, 2008, 23 (1): 103-111 . 
    Abstract   PDF(546KB) ( 7187 )  
    In this paper we present the first efficient, sound, and complete algorithm (AOMSSQ) for optimizing multiple subspace skyline queries simultaneously. We first identify three performance problems of the naïve approach (SUBSKY), which can process an arbitrary single-subspace skyline query. We then propose a cell-dominance computation algorithm (CDCA) to efficiently overcome the drawbacks of SUBSKY. In particular, a novel pruning technique is used in CDCA to dramatically decrease the query time. Finally, based on the CDCA algorithm and the sharing mechanism between subspaces, we present and discuss the AOMSSQ algorithm and prove it sound and complete. We also present detailed theoretical analyses and extensive experiments demonstrating that our algorithms are both efficient and effective.
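The dominance test underlying every skyline computation is simple; the naive quadratic sketch below (illustrative only, and exactly what optimized algorithms such as those in the paper avoid recomputing per subspace) shows what a skyline query must produce:

```python
def dominates(p, q):
    """p dominates q (minimisation skyline) iff p is no worse in every
    dimension and strictly better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Naive skyline: keep the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

result = skyline([(1, 2), (2, 1), (3, 3)])
# (3, 3) is dominated by (1, 2), so the skyline is [(1, 2), (2, 1)]
```

Running this independently for every requested subspace repeats enormous amounts of work, which is the sharing opportunity a multi-query optimizer exploits.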
    Clustering Text Data Streams
    Yu-Bao Liu, Jia-Rong Cai, Jian Yin, and Ada Wai-Chee Fu
    Journal of Data Acquisition and Processing, 2008, 23 (1): 112-128 . 
    Abstract   PDF(1181KB) ( 12594 )  
    Clustering text data streams is an important issue in the data mining community and has a number of applications, such as news group filtering, text crawling, document organization, and topic detection and tracking. However, most methods are similarity-based approaches that use only the TF*IDF scheme to represent the semantics of text data, which often leads to poor clustering quality. Recently, researchers have argued that the semantic smoothing model is more effective than the existing TF*IDF scheme for improving text clustering quality. However, the existing semantic smoothing model is not suitable for a dynamic text data context. In this paper, we first extend the semantic smoothing model to the text data stream context. Based on the extended model, we then present two online clustering algorithms, OCTS and OCTSM, for clustering massive text data streams. In both algorithms, we also present a new cluster statistics structure, named the cluster profile, which captures the semantics of text data streams dynamically and at the same time speeds up the clustering process. Efficient implementations of our algorithms are also given. Finally, we present a series of experimental results illustrating the effectiveness of our technique.
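The TF*IDF baseline that the abstract contrasts with semantic smoothing can be sketched in a few lines (a minimal batch version; stream clustering would update these weights incrementally):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Weight each term as tf * log(N / df): frequent-in-document terms
    score high unless they appear in many documents."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document freq
    return [
        {t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
        for doc in docs
    ]

vectors = tf_idf([["apple", "banana"], ["apple", "cherry"]])
# "apple" occurs in both documents, so its idf is log(2/2) = 0
```

The weakness this exposes is semantic blindness: "car" and "automobile" get unrelated weights, which is the gap semantic smoothing addresses.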
    Characterizing Economic and Social Properties of Trust and Reputation Systems in P2P Environment
    Yu-Feng Wang, Yoshiaki Hori, and Kouichi Sakurai
    Journal of Data Acquisition and Processing, 2008, 23 (1): 129-140 . 
    Abstract   PDF(1233KB) ( 5959 )  
    Considering that P2P (Peer-to-Peer) systems are self-organized and autonomous, social-control mechanisms such as trust and reputation are essential to evaluate the trustworthiness of participating peers and to combat selfish, dishonest and malicious peer behaviors. We therefore advocate that P2P systems, which increasingly act as an important information infrastructure, should be a multi-disciplinary research topic and reflect certain features of our society. From an economic and social perspective, this paper designs an incentive-compatible reputation feedback scheme based on a well-known economic model, and characterizes the social features of the trust network in terms of efficiency and cost. Specifically, our framework has two distinctive purposes. First, from a high-level perspective, we argue that a trust system is a special kind of social network, and that an accurate characterization of the structural properties of the network is of fundamental importance for understanding the dynamics of the system. Inspired by the concept of the weighted small-world, this paper proposes new measurements to characterize the social properties of the trust system, namely high global and local efficiency at low cost. Then, from a relatively low-level perspective, we argue that reputation feedback is a special kind of information, and it is not free. Based on an economic model, a VCG (Vickrey-Clarke-Groves)-like reputation remuneration mechanism is proposed to stimulate rational peers not only to provide reputation feedback but to offer it truthfully. Furthermore, considering that trust and reputation are subjective, we classify trust into functional trust and referral trust, and extend referral trust to include two factors, similarity and truthfulness, which can efficiently reduce trust inference error. Preliminary simulation results show the benefits of our proposal and the emergence of certain social properties in the trust network.
    CASA: A New IFU Architecture for Power-Efficient Instruction Cache and TLB Designs
    Han-Xin Sun, Kun-Peng Yang, Yu-Lai Zhao, Dong Tong, and Xu Cheng
    Journal of Data Acquisition and Processing, 2008, 23 (1): 141-153 . 
    Abstract   PDF(2317KB) ( 9969 )  
    The instruction fetch unit (IFU) usually dissipates a considerable portion of total chip power. In traditional IFU architectures, as soon as the fetch address is generated, it must be sent to the instruction cache and TLB arrays for instruction fetch. Since limited work can be done by the power-saving logic between fetch-address generation and the instruction fetch, previous power-saving approaches usually suffer from the unnecessary restrictions of traditional IFU architectures. In this paper, we present CASA, a new power-aware IFU architecture, which effectively reduces these unnecessary restrictions and provides sufficient time and information for the power-saving logic of both the instruction cache and the TLB. By analyzing, recording, and utilizing key information about the dynamic instruction flow early in the front-end pipeline, CASA creates the opportunity to maximize power efficiency and minimize performance overhead. Compared to the baseline configuration, the leakage and dynamic power of the instruction cache are reduced by 89.7% and 64.1% respectively, and the dynamic power of the instruction TLB is reduced by 90.2%, while the worst-case performance degradation is only 0.63%. Compared to previous state-of-the-art power-saving approaches, the CASA-based approach saves IFU power more effectively, incurs less performance overhead, and achieves better scalability. We expect CASA to stimulate further work on architectural solutions for power-efficient IFU designs.
    Double Barrier Coverage in Dense Sensor Networks
    Cheng-Dong Jiang and Guo-Liang Chen
    Journal of Data Acquisition and Processing, 2008, 23 (1): 154-. 
    Abstract   PDF(3143KB) ( 6429 )  
    When a sensor network is deployed to detect objects penetrating a protected region, it is not necessary to have every point in the deployment region covered by a sensor; it is enough if the penetrating objects are detected at some point along their trajectory. If a sensor network guarantees that every penetrating object will be detected simultaneously by two distinct sensors somewhere in this area, we say that the network provides double barrier coverage (DBC). In this paper, we propose a new planar structure, the Sparse Delaunay Triangulation (SparseDT), and prove some of its elaborate properties. We develop theoretical foundations for double barrier coverage, and propose efficient algorithms, evaluated with the NS2 simulator, with which one can activate the necessary sensors to guarantee double barrier coverage while the other sensors sleep. The upper and lower bounds on the number of active nodes are determined, and we show that high-speed targets are detected efficiently with this configuration.
E-mail: info@sjcjycl.cn
 
  Copyright ©2015 JCST, All Rights Reserved