Bimonthly    Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
  • Table of Contents
      15 January 2006, Volume 21 Issue 1   
    Articles
    Recent Advances in Evolutionary Computation
    Xin Yao, and Yong Xu
    Journal of Data Acquisition and Processing, 2006, 21 (1): 1-18.
    Abstract   PDF(476KB) ( 1663 )  
    Evolutionary computation has experienced tremendous growth in the last decade in both theoretical analyses and industrial applications. Its scope has evolved beyond its original meaning of "biological evolution" toward a wide variety of nature-inspired computational algorithms and techniques, including evolutionary, neural, ecological, social and economical computation, etc., in a unified framework. Many research topics in evolutionary computation nowadays are not necessarily "evolutionary". This paper provides an overview of some recent advances in evolutionary computation that have been made in CERCIA at the University of Birmingham, UK. It covers a wide range of topics in optimization, learning and design using evolutionary approaches and techniques, and theoretical results on the computational time complexity of evolutionary algorithms. Some issues related to the future development of evolutionary computation are also discussed.
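    As a small, hedged illustration of the kind of algorithm whose computational time complexity such theoretical work studies (it is not taken from the paper itself), the sketch below runs a (1+1) evolutionary algorithm on the OneMax problem; the function names, the evaluation budget and the problem choice are assumptions made for the example. The expected optimization time of this algorithm on OneMax is the classic Θ(n log n) result of runtime analysis.
```python
import random

def one_plus_one_ea(n, max_evals=10000, seed=0):
    """(1+1) EA on OneMax: maximize the number of 1-bits in a length-n bit string."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    fitness = sum(parent)                       # OneMax fitness = number of 1s
    for _ in range(max_evals):
        # Flip each bit independently with probability 1/n (standard bit mutation).
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        child_fitness = sum(child)
        if child_fitness >= fitness:            # keep the child if it is not worse
            parent, fitness = child, child_fitness
        if fitness == n:                        # global optimum reached
            break
    return parent, fitness

if __name__ == "__main__":
    _, best = one_plus_one_ea(50)
    print("best OneMax fitness for n = 50:", best)
```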
    P-Tree Structures and Event Horizon: Efficient Event-Set Implementations
    Katerina Asdre and Stavros D. Nikolopoulos
    Journal of Data Acquisition and Processing, 2006, 21 (1): 19-26 . 
    Abstract   PDF(1734KB) ( 1335 )  
    This paper describes efficient data structures, namely the Indexed P-tree, Block P-tree, and Indexed-Block P-tree (or P-tree, BP-tree, and IBP-tree, respectively, for short), for maintaining future events in a general-purpose discrete event simulation system, and studies the performance of their event set algorithms under the event horizon principle. For comparison purposes, some well-known event set algorithms have been selected and studied, namely the Dynamic-heap and the P-tree algorithms. To gain insight into the performance of the proposed event set algorithms and allow comparisons with the other selected algorithms, they are tested experimentally under a wide variety of conditions. The time needed to execute the Hold operation is taken as the measure for estimating the average time complexity of the algorithms. The experimental results show that the BP-tree and IBP-tree algorithms behave very well with event sets of all sizes, and their performance is almost independent of the stochastic distributions.
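    For readers unfamiliar with the benchmark, the sketch below shows a plain heap-based event set and the Hold operation (dequeue the most imminent event, advance the clock, enqueue a new event) whose execution time is used as the performance measure; it illustrates the comparison baseline rather than the P-tree, BP-tree or IBP-tree structures, and the exponential increment distribution is an assumption made for the example.
```python
import heapq
import random

def hold(event_set, draw_increment):
    """One Hold operation: dequeue the earliest event, advance the clock to its
    timestamp, and enqueue a new event scheduled at clock + random increment."""
    timestamp, event_id = heapq.heappop(event_set)      # most imminent event
    clock = timestamp
    heapq.heappush(event_set, (clock + draw_increment(), event_id))
    return clock

if __name__ == "__main__":
    rng = random.Random(1)
    draw = lambda: rng.expovariate(1.0)                 # assumed increment distribution
    events = [(draw(), i) for i in range(10000)]        # initial event set
    heapq.heapify(events)
    clock = 0.0
    for _ in range(100000):                             # repeated Hold operations
        clock = hold(events, draw)
    print("simulated clock after 100000 Hold operations:", round(clock, 2))
```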
    An Improved Algorithm for Finding the Closest Pair of Points
    Qi Ge, Hai-Tao Wang, and Hong Zhu
    Journal of Data Acquisition and Processing, 2006, 21 (1): 27-31 . 
    Abstract   PDF(1523KB) ( 1646 )  
    As early as 1975, Shamos and Hoey gave an O(n lg n)-time divide-and-conquer algorithm (the SH algorithm for short) for the problem of finding the closest pair of points. In each combination step, the Euclidean distances between 3n pairs of points need to be computed, so the overall complexity of computing distances is 3n lg n. Since computing a distance is more costly than other basic operations, this paper considers how to improve the SH algorithm with respect to the complexity of computing distances. In 1998, Zhou, Xiong and Zhu improved the SH algorithm by reducing this complexity to 2n lg n. In this paper, we make a further improvement: the overall complexity of computing distances is reduced to (3/2)n lg n, which is only half that of the SH algorithm.
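    To make the role of the distance computations concrete, here is a hedged sketch of the classic Shamos-Hoey divide-and-conquer scheme (not the improved algorithm of this paper); the nested loop over the middle strip in the combination step is exactly where the distance evaluations that the improvements reduce are spent.
```python
import math

def closest_pair(points):
    """Classic O(n log n) divide-and-conquer closest-pair distance."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def rec(pts):                                # pts sorted by x
        n = len(pts)
        if n <= 3:                               # brute force the small cases
            return min(dist(pts[i], pts[j])
                       for i in range(n) for j in range(i + 1, n))
        mid = n // 2
        mid_x = pts[mid][0]
        d = min(rec(pts[:mid]), rec(pts[mid:]))
        # Combination step: only points within d of the dividing line matter.
        strip = sorted((p for p in pts if abs(p[0] - mid_x) < d),
                       key=lambda p: p[1])
        for i in range(len(strip)):
            for j in range(i + 1, min(i + 8, len(strip))):  # constant-size window
                d = min(d, dist(strip[i], strip[j]))        # distance computations
        return d

    return rec(sorted(points))

if __name__ == "__main__":
    pts = [(0, 0), (5, 4), (1, 1), (9, 9), (2, 3)]
    print(closest_pair(pts))    # 1.414... for the pair (0,0)-(1,1)
```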
    Revisiting the Meaning of Requirements
    Zhi Jin
    Journal of Data Acquisition and Processing, 2006, 21 (1): 32-40 . 
    Abstract   PDF(2168KB) ( 1132 )  
    Understanding the meaning of requirements can help elicit the real-world requirements and refine their specifications. But what the requirements of a desired piece of software mean is not yet a well-explained question, even though many software development methods are available. This paper suggests that the meaning of requirements could be depicted by the will-to-be environments of the desired software, the optative interactions of the software with its environments, and the causal relationships among these interactions. This paper also emphasizes the necessity of distinguishing the external manifestation from the internal structure of each system component during the process of requirements decomposition and refinement. Several decomposition strategies are given to support continuous decomposition. The external manifestation and the internal structure of a system component are defined, and the roles of knowledge about the environments are explicitly described. A simple but meaningful example embedded in the paper illustrates the main ideas as well as how to conduct requirements decomposition and refinement using the proposed strategies.
    Remove Irrelevant Atomic Formulas for Timed Automaton Model Checking
    Jian-Hua Zhao, Xuan-Dong Li, Tao Zheng, and Guo-Liang Zheng
    Journal of Data Acquisition and Processing, 2006, 21 (1): 41-51 . 
    Abstract   PDF(655KB) ( 1303 )  
    Most of the timed automata reachability analysis algorithms in the literature explore the state space by enumerating symbolic states, which use time constraints to represent sets of concrete states. A time constraint is a conjunction of atomic formulas that bound the differences of clock values. In this paper, it is shown that some atomic formulas of the symbolic states generated by these algorithms can be removed to improve the time- and space-efficiency of model checking. Such atomic formulas are called irrelevant atomic formulas. A method is also presented to detect irrelevant formulas based on the test-reset information about clock variables. An optimized model-checking algorithm is designed based on these techniques. Case studies show that the techniques presented in this paper significantly improve the space- and time-efficiency of reachability analysis.
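    The sketch below is only an illustration under assumptions, not the paper's detection method: it shows what removing irrelevant atomic formulas from a symbolic state looks like once the set of irrelevant clocks, i.e. those that the test-reset information guarantees will be reset before they are next tested, has already been determined.
```python
# A symbolic state's time constraint is a conjunction of atomic formulas bounding
# differences of clock values, written here as (x, y, bound) for "x - y <= bound";
# the pseudo-clock "0" stands for the constant zero.

def remove_irrelevant(formulas, irrelevant_clocks):
    """Drop every atomic formula that mentions an irrelevant clock.

    `irrelevant_clocks` is assumed to have been computed beforehand from the
    test-reset information (clocks reset before they are tested again)."""
    return [(x, y, b) for (x, y, b) in formulas
            if x not in irrelevant_clocks and y not in irrelevant_clocks]

if __name__ == "__main__":
    constraint = [("x", "0", 5), ("0", "x", -2), ("y", "x", 3), ("z", "0", 7)]
    # Suppose the analysis found that clock z is reset before it is tested again.
    print(remove_irrelevant(constraint, {"z"}))
    # [('x', '0', 5), ('0', 'x', -2), ('y', 'x', 3)]
```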
    Efficient Incremental Maintenance for Distributive and Non-Distributive Aggregate Functions
    Cui-Ping Li and Shan Wang
    Journal of Data Acquisition and Processing, 2006, 21 (1): 52-65 . 
    Abstract   PDF(3588KB) ( 1557 )  
    Data cube pre-computation is an important concept for supporting OLAP (Online Analytical Processing) and has been studied extensively. It is often not feasible to compute a complete data cube due to the huge storage requirement. The recently proposed quotient cube addresses this issue through a partitioning method that groups cube cells into equivalence partitions. Such an approach is not only useful for distributive aggregate functions such as SUM but can also be applied to the maintenance of holistic aggregate functions like MEDIAN, which require the storage of a set of tuples for each equivalence class. Unfortunately, as changes are made to the data sources, maintaining the quotient cube is non-trivial since the partitioning of the cube cells must also be updated. In this paper, the authors design incremental algorithms to update a quotient cube efficiently for both the SUM and MEDIAN aggregate functions. For the aggregate function SUM, concepts are borrowed from the principle of Galois lattices to develop CPU-efficient algorithms for updating a quotient cube. For the aggregate function MEDIAN, the concept of a pseudo class is introduced to further reduce the size of the quotient cube. Coupled with a novel sliding-window technique, an efficient algorithm is developed for maintaining a MEDIAN quotient cube that takes up reasonably small storage space. A performance study shows that the proposed algorithms are efficient and scalable over large databases.
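    As a hedged, stand-alone illustration of why the two aggregate functions call for different machinery (it is not the paper's quotient-cube maintenance algorithm), the sketch below contrasts a distributive SUM cell, which can absorb an inserted value directly, with a holistic MEDIAN cell, which must keep its tuples; the class names are invented for the example.
```python
import bisect
import statistics

class SumCell:
    """Distributive aggregate: the new value follows from the old value and the delta."""
    def __init__(self):
        self.value = 0
    def insert(self, v):
        self.value += v          # no stored tuples needed

class MedianCell:
    """Holistic aggregate: the stored (sorted) tuples are needed to update the result."""
    def __init__(self):
        self.values = []
    def insert(self, v):
        bisect.insort(self.values, v)
    @property
    def value(self):
        return statistics.median(self.values)

if __name__ == "__main__":
    s, m = SumCell(), MedianCell()
    for v in [3, 9, 1, 7]:
        s.insert(v)
        m.insert(v)
    print(s.value, m.value)      # 20 and 5.0
```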
    A Workflow Process Mining Algorithm Based on Synchro-Net
    Xing-Qi Huang, Li-Fu Wang, Wen Zhao, Shi-Kun Zhang, and Chong-Yi Yuan
    Journal of Data Acquisition and Processing, 2006, 21 (1): 66-71 . 
    Abstract   PDF(949KB) ( 1462 )  
    Sometimes historical information about workflow execution is needed to analyze business processes. Process mining aims at extracting information from event logs to capture a business process in execution. In this paper a process mining algorithm is proposed based on Synchro-Net, a synchronization-based model of workflow logic and workflow semantics. With this mining algorithm based on the model, problems such as invisible tasks and short loops can be handled with ease. A process mining example is presented to illustrate the algorithm, and an evaluation is also given.
    TCP Issues in Mobile Ad Hoc Networks: Challenges and Solutions
    Wei-Qiang Xu and Tie-Jun Wu
    Journal of Data Acquisition and Processing, 2006, 21 (1): 72-81 . 
    Abstract   PDF(367KB) ( 6117 )  
    Mobile ad hoc networks (MANETs) are complex distributed communication systems consisting of wireless mobile nodes that can freely and dynamically self-organize into arbitrary and temporary network topologies. MANETs inherit several limitations of wireless networks while also facing new challenges arising from their specific characteristics, such as route failures, hidden terminals and exposed terminals. When TCP is applied in a MANET environment, a number of tough problems have to be dealt with. In this paper, a comprehensive survey of this dynamic field is given. Specifically, for the first time all factors impairing TCP performance are identified based on the network protocol hierarchy: the lossy wireless channel at the physical layer; excessive contention and unfair access at the MAC layer; frail routing protocols at the network layer; node mobility, which affects both the MAC and network layers; an unfit congestion window size at the transport layer; and asymmetric paths, which affect the transport layer. How these factors degrade TCP performance is clearly explained. Then, organized around how to alleviate the impact of each of these factors, the existing solutions are collected as comprehensively as possible and classified into a number of categories, and their advantages and limitations are discussed. Based on the limitations of these solutions, a set of open problems for designing more robust solutions is suggested.
    Pseudorandomness of Camellia-Like Scheme
    Wen-Ling Wu
    Journal of Data Acquisition and Processing, 2006, 21 (1): 82-88 . 
    Abstract   PDF(613KB) ( 1541 )  
    Luby and Rackoff idealized DES by replacing each round function with one large random function. In this paper, the author idealizes Camellia by replacing each S-box with one small random function; the result is named the Camellia-like scheme. It is then proved that the five-round Camellia-like scheme is pseudorandom and the eight-round Camellia-like scheme is super-pseudorandom against adaptive adversaries. The paper further considers more efficient constructions of the Camellia-like scheme and discusses how to construct a pseudorandom Camellia-like scheme from fewer random functions.
    A Near-Optimal Optimization Algorithm for Link Assignment in Wireless Ad-Hoc Networks
    Heng-Chang Liu and Bao-Hua Zhao
    Journal of Data Acquisition and Processing, 2006, 21 (1): 89-94 . 
    Abstract   PDF(327KB) ( 1122 )  
    Over the past few years, wireless networking technologies have made vast forays into our daily lives. In wireless ad-hoc networks, links are set up by a number of units without any permanent infrastructure. In this paper, resource optimization is considered to maximize the network throughput by efficiently using the network capacity, where multi-hop functionality and the spatial TDMA (STDMA) access scheme are used. The objective is to find the minimum frame length given the traffic distributions and corresponding routing information. Because of the complex structure of the underlying mathematical problem, previous work and analysis become intractable for networks of realistic sizes. The problem is addressed through a mathematical programming approach: a linear integer formulation is developed for optimizing the network throughput, and the similarity between the original problem and the graph edge coloring problem is then shown through the conflict graph concept. A column generation solution is proposed and several enhancements are made to speed up its convergence. Numerical results demonstrate that the theoretical limit of the throughput can be efficiently computed for networks of realistic sizes.
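    The similarity to graph edge coloring mentioned above can be made concrete with a small sketch: build a conflict graph over the links and assign STDMA slots greedily. This is a simple heuristic under an assumed primary-interference model, not the paper's column generation formulation, and the number of colors it uses is only an upper bound on the minimum frame length.
```python
def build_conflict_graph(links):
    """Two links conflict (cannot share a slot) if they have a node in common
    (primary interference only -- an assumption made for this illustration)."""
    conflicts = {l: set() for l in links}
    for a in links:
        for b in links:
            if a != b and set(a) & set(b):
                conflicts[a].add(b)
    return conflicts

def greedy_slot_assignment(links):
    """Greedy coloring of the conflict graph; the number of colors used is an
    upper bound on the minimum STDMA frame length."""
    conflicts = build_conflict_graph(links)
    slots = {}
    for l in links:
        used = {slots[n] for n in conflicts[l] if n in slots}
        slots[l] = next(s for s in range(len(links)) if s not in used)
    return slots

if __name__ == "__main__":
    # Links are (sender, receiver) pairs in a small ad-hoc topology.
    links = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
    assignment = greedy_slot_assignment(links)
    print(assignment, "frame length:", max(assignment.values()) + 1)
```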
    Communication Between Speech Production and Perception Within the Brain---Observation and Simulation
    Jianwu Dang, Masato Akagi, and Kiyoshi Honda
    Journal of Data Acquisition and Processing, 2006, 21 (1): 95-105.
    Abstract   PDF(2081KB) ( 1363 )  
    Realization of an intelligent human-machine interface requires us to investigate human mechanisms and learn from them. This study focuses on the communication between speech production and perception within the human brain and on realizing it in an artificial system. A physiological study based on electromyographic signals (Honda, 1996) suggested that speech communication in the human brain might be based on a topological mapping between speech production and perception, according to an analogous topology between motor and sensory representations. Following this hypothesis, this study first investigated the topologies of the vowel system across the motor, kinematic, and acoustic spaces by means of a model simulation, and then examined the linkage between vowel production and perception in a transformed auditory feedback (TAF) experiment. The model simulation indicated that there exists an invariant mapping from muscle activations (the motor space) to articulations (the kinematic space) via a coordinate system consisting of force-dependent equilibrium positions, and that the mapping from the motor space to the kinematic space is unique. The motor-kinematic-acoustic deduction in the model simulation showed that the topologies were compatible from one space to another. In the TAF experiment, vowel production exhibited a compensatory response to a perturbation in the feedback sound. This implies that vowel production is controlled with reference to perception monitoring.
    A Dialectal Chinese Speech Recognition Framework
    Jing Li, Thomas Fang Zheng, William Byrne, and Dan Jurafsky
    Journal of Data Acquisition and Processing, 2006, 21 (1): 106-115 . 
    Abstract   PDF(3404KB) ( 1893 )  
    A framework for dialectal Chinese speech recognition is proposed and studied, in which a relatively small dialectal Chinese (in other words, Chinese influenced by the speaker's native dialect) speech corpus and dialect-related knowledge are adopted to transform a standard Chinese (or Putonghua, abbreviated as PTH) speech recognizer into a dialectal Chinese speech recognizer. Two kinds of knowledge sources are explored: one is expert knowledge and the other is a small dialectal Chinese corpus. These knowledge sources provide information at four levels: the phonetic level, the lexicon level, the language level, and the acoustic decoder level. This paper takes Wu dialectal Chinese (WDC) as an example target language. The goal is to establish a WDC speech recognizer from an existing PTH speech recognizer based on the Initial-Final structure of the Chinese language and a study of how dialectal Chinese speakers speak Putonghua. The authors propose to use context-independent PTH-IF mappings (where IF means either a Chinese Initial or a Chinese Final), context-independent WDC-IF mappings, and syllable-dependent WDC-IF mappings (obtained from either experts or data), and to combine them with the supervised maximum likelihood linear regression (MLLR) acoustic model adaptation method. To reduce the size of the multi-pronunciation lexicon introduced by the IF mappings, which might also increase lexicon confusion and hence degrade performance, a Multi-Pronunciation Expansion (MPE) method based on the accumulated uni-gram probability (AUP) is proposed. In addition, some commonly used WDC words are selected and added to the lexicon. Compared with the original PTH speech recognizer, the resulting WDC speech recognizer achieves a 10%-18% absolute Character Error Rate (CER) reduction when recognizing WDC, with only a 0.62% CER increase when recognizing PTH. The proposed framework and methods are expected to work not only for Wu dialectal Chinese but also for other dialectal Chinese varieties and even other languages.
    Image Region Selection and Ensemble for Face Recognition
    Xin Geng and Zhi-Hua Zhou
    Journal of Data Acquisition and Processing, 2006, 21 (1): 116-125 . 
    Abstract   PDF(2892KB) ( 1714 )  
    In this paper, a novel framework for face recognition, namely Selective Ensemble of Image Regions (SEIR), is proposed. In this framework, all possible regions in the face image are regarded as a certain kind of feature. There are two main steps in SEIR: the first is to automatically select several regions from all possible candidates; the second is to construct a classifier ensemble from the selected regions. An implementation of SEIR based on multiple eigenspaces, namely SEME, is also proposed in this paper. SEME is analyzed and compared with eigenface, PCA + LDA, eigenfeature, and eigenface + eigenfeature through experiments. The experimental results show that SEME achieves the best performance.
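    The two-step structure of SEIR can be sketched as follows; this is an illustration under assumptions, not SEME itself: raw-pixel nearest-neighbour classifiers stand in for the per-region eigenspace models, regions are selected by validation accuracy, and all names and data in the demo are invented.
```python
import numpy as np

def region_features(images, box):
    """Crop a rectangular region (top, left, height, width) from every image."""
    top, left, h, w = box
    return images[:, top:top + h, left:left + w].reshape(len(images), -1)

def nn_predict(train_x, train_y, test_x):
    """1-nearest-neighbour prediction in the (flattened) region feature space."""
    d = ((test_x[:, None, :] - train_x[None, :, :]) ** 2).sum(-1)
    return train_y[d.argmin(axis=1)]

def selective_region_ensemble(train_imgs, train_y, val_imgs, val_y,
                              test_imgs, boxes, k=3):
    """Step 1: keep the k candidate regions with the best validation accuracy.
       Step 2: majority-vote the per-region classifiers on the test images."""
    scored = []
    for box in boxes:
        pred = nn_predict(region_features(train_imgs, box), train_y,
                          region_features(val_imgs, box))
        scored.append((np.mean(pred == val_y), box))
    selected = [box for _, box in sorted(scored, key=lambda s: -s[0])[:k]]
    votes = np.stack([nn_predict(region_features(train_imgs, b), train_y,
                                 region_features(test_imgs, b))
                      for b in selected])
    return np.array([np.bincount(col).argmax() for col in votes.T])  # majority vote

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    imgs = rng.random((30, 16, 16))                     # toy "face" images
    labels = np.repeat(np.arange(3), 10)                # three subjects
    boxes = [(0, 0, 8, 8), (0, 8, 8, 8), (8, 0, 8, 8), (8, 8, 8, 8)]
    print(selective_region_ensemble(imgs[:18], labels[:18],
                                    imgs[18:24], labels[18:24], imgs[24:], boxes))
```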
    Quaternion Diffusion for Color Image Filtering
    Zhong-Xuan Liu, Shi-Guo Lian, and Zhen Ren
    Journal of Data Acquisition and Processing, 2006, 21 (1): 126-136 . 
    Abstract   PDF(481KB) ( 1710 )  
    How to combine color and multiscale information is a fundamental question for computer vision, and quite a few color diffusion techniques have been presented. Most of these techniques do not consider the direct interactions between pairs of color channels. In this paper, a new color diffusion method that takes these effects into account is presented, based on a quaternion diffusion (QD) equation. In addition to presenting the solution to linear QD and its analysis, one form of nonlinear QD is discussed. Compared with other color diffusion techniques, by considering the interactions between channel pairs, QD has the following advantages: 1) the staircasing effect is avoided; 2) used as a diffusion tensor, the image derivative is regularized without requiring additional convolution; 3) less time is needed. Experimental results demonstrate the effectiveness of linear and nonlinear QD applied to denoising natural color images, by both visual and quantitative evaluations.
    Novel Cluster Validity Index for FCM Algorithm
    Jian Yu and Cui-Xia Li
    Journal of Data Acquisition and Processing, 2006, 21 (1): 137-140 . 
    Abstract   PDF(372KB) ( 1221 )  
    How to determine an appropriate number of clusters is very important when implementing a specific clustering algorithm, such as c-means or fuzzy c-means (FCM). In the literature, most cluster validity indices originate from the partition or geometrical properties of the data set. In this paper, the authors develop a novel cluster validity index for FCM, based on the optimality test of FCM. Unlike previous cluster validity indices, this novel index is inherent in FCM itself. Comparison experiments show that the proposed stability index can be used as a cluster validity index for fuzzy c-means.
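    To ground the discussion, here is a hedged sketch of fuzzy c-means together with a standard validity index (Xie-Beni, used purely as an example; it is not the stability index proposed in the paper) for choosing the number of clusters; the toy data and all parameter values are assumptions.
```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Basic fuzzy c-means: returns cluster centers V and memberships U."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                                   # columns sum to 1
    for _ in range(iters):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)     # update cluster centers
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1)))
        U /= U.sum(axis=0)                               # update memberships
    return V, U

def xie_beni(X, V, U, m=2.0):
    """Xie-Beni index: smaller is better (compact and well-separated clusters)."""
    d2 = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) ** 2
    compactness = ((U ** m) * d2).sum()
    separation = min(np.linalg.norm(V[i] - V[j]) ** 2
                     for i in range(len(V)) for j in range(len(V)) if i != j)
    return compactness / (len(X) * separation)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(loc, 0.3, size=(50, 2))
                   for loc in ((0, 0), (3, 3), (0, 3))])  # three toy clusters
    for c in range(2, 6):
        V, U = fcm(X, c)
        print(c, round(xie_beni(X, V, U), 4))   # the best c minimizes the index
```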
    A Test Approach for Look-Up Table Based FPGAs
    Ehsan Atoofian and Zainalabedin Navabi
    Journal of Data Acquisition and Processing, 2006, 21 (1): 141-146 . 
    Abstract   PDF(2531KB) ( 1668 )  
    This paper describes a test architecture that minimizes the number of test configurations for testing the LUTs (Look-Up Tables) of an FPGA (Field Programmable Gate Array). The test architecture includes a TPG (Test Pattern Generator) that is itself tested while it generates test data for the LEs (Logic Elements) that form the CUT (Circuit Under Test). This scheme eliminates the need to switch LEs between the roles of CUT, TPG and ORA (Output Response Analyzer) and to perform many more reconfigurations of the FPGA. An external ORA locates faults in the FPGA under test. In addition to the LUTs, a scheme is presented for testing the other parts of the LEs. Compared with other methods, the presented scheme uses the fewest reconfigurations of an FPGA for its LUT testing.
    ACO-Steiner: Ant Colony Optimization Based Rectilinear Steiner Minimal Tree Algorithm
    Yu Hu, Tong Jing, Zhe Feng, Xian-Long Hong, Xiao-Dong Hu, and Gui-Ying Yan
    Journal of Data Acquisition and Processing, 2006, 21 (1): 147-153 . 
    Abstract   PDF(1646KB) ( 1910 )  
    The rectilinear Steiner minimal tree (RSMT) problem is one of the fundamental problems in physical design, especially in routing, and is known to be NP-complete. This paper presents an algorithm, called ACO-Steiner, for RSMT construction based on ant colony optimization (ACO). An RSMT is constructed by ants' movements in the Hanan grid, and the constraint of the Hanan grid is then relaxed to accelerate the ants' movements and improve the performance of the algorithm. The algorithm has been implemented on a Sun workstation running the Unix operating system, and the results have been compared with the fastest exact RSMT algorithm, GeoSteiner 3.1, and a recent heuristic using batched greedy triple construction (BGTC). Experimental results show that ACO-Steiner achieves short running times while maintaining high solution quality. Furthermore, ACO-Steiner can easily be extended to some other problems, such as the rectilinear Steiner minimal tree avoiding obstacles and congestion reduction in global routing.
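    As a small, hedged companion to the abstract (not the ACO-Steiner implementation itself), the sketch below builds the Hanan grid on which the ants initially move; by Hanan's theorem an RSMT always exists whose Steiner points are drawn from this grid, and the bounding-box half-perimeter gives a simple lower bound on its length.
```python
def hanan_grid(terminals):
    """All intersections of horizontal and vertical lines through the terminals;
    candidate Steiner points for the RSMT lie on this grid (Hanan's theorem)."""
    xs = sorted({x for x, _ in terminals})
    ys = sorted({y for _, y in terminals})
    return [(x, y) for x in xs for y in ys]

def rsmt_lower_bound(terminals):
    """Half-perimeter of the bounding box: a classic lower bound on RSMT length."""
    xs = [x for x, _ in terminals]
    ys = [y for _, y in terminals]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

if __name__ == "__main__":
    pins = [(1, 5), (4, 1), (7, 6), (2, 2)]
    print(len(hanan_grid(pins)), "Hanan grid points,",
          "half-perimeter lower bound =", rsmt_lower_bound(pins))
```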
Journal of Data Acquisition and Processing
Institute of Computing Technology, Chinese Academy of Sciences
P.O. Box 2704, Beijing 100190 P.R. China

E-mail: info@sjcjycl.cn
 