Models, code, and papers for "Xin Yao":

Financial Series Prediction: Comparison Between Precision of Time Series Models and Machine Learning Methods

Dec 25, 2017
Xin-Yao Qian

Accurate financial time series prediction has long been a difficult problem because of the instability of, and noise within, such series. Although traditional time series models like ARIMA and GARCH have been extensively studied and shown to be effective for prediction, their performance is still far from satisfying. Machine learning, as an emerging research field in recent years, has brought about many incredible improvements in tasks such as regression and classification, and it is also promising to apply this methodology to financial time series prediction. In this paper, the prediction precision of traditional time series models and mainstream machine learning models, including some state-of-the-art deep learning models, is compared through experiments on real historical stock index data. The results show that machine learning methods far surpass traditional models in precision.
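
As a minimal sketch of the kind of comparison the paper runs, the snippet below pits an ARIMA model against a random-forest regressor on one-step-ahead prediction of a synthetic series, using statsmodels and scikit-learn. The model orders, window length, and RMSE metric are illustrative assumptions, not the paper's exact experimental setup.

```python
# Minimal sketch of the comparison setup: ARIMA vs. a machine-learning
# regressor on one-step-ahead prediction of a noisy series. The model
# orders, window size, and metric below are illustrative choices.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic stand-in for a stock-index series: random walk plus noise.
series = np.cumsum(rng.normal(0, 1, 500)) + rng.normal(0, 0.5, 500)
train, test = series[:400], series[400:]

# Traditional model: ARIMA(1,1,1) fit once, forecasting the test span.
arima_pred = ARIMA(train, order=(1, 1, 1)).fit().forecast(len(test))

# ML model: regress the next value on a sliding window of lagged values.
w = 10
X = np.array([series[i:i + w] for i in range(len(series) - w)])
y = series[w:]
split = 400 - w
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:split], y[:split])
rf_pred = rf.predict(X[split:])

for name, pred in [("ARIMA", arima_pred), ("RandomForest", rf_pred)]:
    print(name, "RMSE:", np.sqrt(np.mean((pred - test) ** 2)))
```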

* Personal term thesis 

Average Drift Analysis and Population Scalability

Jun 14, 2016
Jun He, Xin Yao

This paper aims to study, in a rigorous way, how population size affects the computation time of evolutionary algorithms. The computation time of an evolutionary algorithm can be measured by either the expected number of generations (hitting time) or the expected number of fitness evaluations (running time) to find an optimal solution. Population scalability is the ratio of the expected hitting time between a benchmark algorithm and an algorithm using a larger population size. Average drift analysis is presented for comparing the expected hitting times of two algorithms and estimating lower and upper bounds on population scalability. Several intuitive beliefs are rigorously analysed. It is proven that (1) using a population sometimes increases rather than decreases the expected hitting time; (2) using a population cannot shorten the expected running time of any elitist evolutionary algorithm on unimodal functions in terms of the time-fitness landscape, but this is not true in terms of the distance-based fitness landscape; and (3) using a population cannot always reduce the expected running time on fully-deceptive functions, depending on whether the benchmark algorithm uses elitist or random selection.
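
The notion of population scalability can be illustrated with a small Monte-Carlo experiment: estimate the expected hitting time (in generations) of a benchmark (1+1) EA and of a (mu+mu) EA on OneMax, then take the ratio. The population model, mutation rate 1/n, and test function below are illustrative assumptions rather than the paper's analytical setting.

```python
# Monte-Carlo sketch of "population scalability": the ratio of expected
# hitting times (in generations) between a benchmark (1+1) EA and a
# (mu+mu) EA with bitwise mutation on OneMax.
import numpy as np

rng = np.random.default_rng(1)
n = 30  # bit-string length

def mutate(x):
    flips = rng.random(n) < 1.0 / n
    return np.where(flips, 1 - x, x)

def hitting_time(mu, max_gens=100000):
    """Generations until some individual reaches the optimum (all ones)."""
    pop = [rng.integers(0, 2, n) for _ in range(mu)]
    for gen in range(1, max_gens + 1):
        offspring = [mutate(x) for x in pop]
        pool = pop + offspring
        pool.sort(key=lambda x: -x.sum())  # elitist truncation selection
        pop = pool[:mu]
        if pop[0].sum() == n:
            return gen
    return max_gens

runs = 30
t1 = np.mean([hitting_time(1) for _ in range(runs)])
t8 = np.mean([hitting_time(8) for _ in range(runs)])
print("scalability (mu=8 vs mu=1):", t1 / t8)
```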


A Large Population Size Can Be Unhelpful in Evolutionary Algorithms

Aug 11, 2012
Tianshi Chen, Ke Tang, Guoliang Chen, Xin Yao

The utilization of populations is one of the most important features of evolutionary algorithms (EAs). There have been many studies analyzing the impact of different population sizes on the performance of EAs. However, most such studies are based on computational experiments, with only a few exceptions. The common wisdom so far appears to be that a large population increases population diversity and thus helps an EA. Indeed, increasing the population size has been a commonly used strategy in tuning an EA when it did not perform as well as expected on a given problem. He and Yao (2002) showed theoretically that for some problem instance classes, a population can help to reduce the runtime of an EA from exponential to polynomial time. This paper analyzes the role of population in EAs further and shows rigorously that large populations may not always be useful. Conditions under which large populations can be harmful are discussed. Although the theoretical analysis was carried out on one multi-modal problem using a specific type of EA, it has much wider implications. The analysis has revealed certain problem characteristics, which may appear in the problem considered here as well as in other problems, that lead to the disadvantages of large population sizes. The analytical approach developed in this paper can also be applied to analyzing EAs on other problems.

* Theoretical Computer Science, vol. 436, 2012, pp. 54-70 
* 25 pages, 1 figure 

A Simple Yet Effective Approach to Robust Optimization Over Time

Sep 05, 2019
Lukáš Adam, Xin Yao

Robust optimization over time (ROOT) refers to an optimization problem whose performance is evaluated over a period of future time. Most existing algorithms use particle swarm optimization combined with another method that predicts future solutions to the optimization problem. We argue that this approach may perform poorly and instead suggest a method based on random sampling of the search space. We prove its theoretical guarantees and show that it significantly outperforms state-of-the-art methods for ROOT.
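
A minimal sketch of the random-sampling idea, under assumed details: draw candidates uniformly from the search space and keep the one with the best worst-case fitness over a time window. The toy moving-peak landscape and the robustness criterion are illustrative; in a real ROOT setting the future fitness would have to be predicted or sampled rather than evaluated directly.

```python
# Random-sampling sketch for robust optimization over time (ROOT):
# sample candidate solutions uniformly and keep the one whose worst
# fitness over the coming window is best.
import numpy as np

rng = np.random.default_rng(2)
dim, n_samples, horizon = 2, 1000, 5

def fitness(x, t):
    """Toy time-varying landscape: a moving-peak sphere function."""
    peak = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
    return -np.sum((x - peak) ** 2)

def robust_pick(t_now):
    candidates = rng.uniform(-2, 2, size=(n_samples, dim))
    # Score each candidate by its worst fitness over the window (in
    # practice the future must be predicted; here it is evaluated
    # directly purely for illustration).
    scores = [min(fitness(x, t) for t in range(t_now, t_now + horizon))
              for x in candidates]
    return candidates[int(np.argmax(scores))]

x = robust_pick(t_now=0)
print("chosen solution:", x, "worst-case fitness:",
      min(fitness(x, t) for t in range(0, horizon)))
```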


What Weights Work for You? Adapting Weights for Any Pareto Front Shape in Decomposition-based Evolutionary Multi-Objective Optimisation

Sep 08, 2017
Miqing Li, Xin Yao

The quality of solution sets generated by decomposition-based evolutionary multiobjective optimisation (EMO) algorithms depends heavily on the consistency between a given problem's Pareto front shape and the distribution of the specified weights. A set of weights distributed uniformly in a simplex often leads to a set of well-distributed solutions on a Pareto front with a simplex-like shape, but may fail on other Pareto front shapes. How to specify a set of appropriate weights without prior information about the problem's Pareto front remains an open problem. In this paper, we propose an approach that adapts the weights during the evolutionary process (called AdaW). AdaW progressively seeks a suitable distribution of weights for the given problem by elaborating five parts of the weight adaptation --- weight generation, weight addition, weight deletion, archive maintenance, and weight update frequency. Experimental results show the effectiveness of the proposed approach. AdaW works well for Pareto fronts with very different shapes: 1) simplex-like, 2) inverted simplex-like, 3) highly nonlinear, 4) disconnected, 5) degenerate, 6) badly-scaled, and 7) high-dimensional.
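
For context, the snippet below shows the standard uniform simplex-lattice weight generation that decomposition-based EMO algorithms start from, plus a crude stand-in for a weight-deletion step. AdaW's actual five components are considerably more involved; this is only meant to illustrate the kind of weight operations being adapted.

```python
# Uniform simplex-lattice weight generation (the usual starting
# distribution in decomposition-based EMO), plus a toy "deletion" step
# that drops weights whose Tchebycheff-best solutions coincide.
import itertools
import numpy as np

def simplex_lattice(m, h):
    """All m-objective weights with components i/h summing to 1."""
    weights = []
    for c in itertools.combinations(range(h + m - 1), m - 1):
        parts = np.diff([-1, *c, h + m - 1]) - 1  # stars and bars
        weights.append(parts / h)
    return np.array(weights)

W = simplex_lattice(m=3, h=12)          # 91 uniformly spread weights
print(W.shape, W.sum(axis=1)[:3])       # (91, 3), all rows sum to 1

def prune_duplicates(W, objs):
    """Drop weights whose Tchebycheff-best solution is shared with
    another weight: a crude stand-in for AdaW's weight deletion."""
    ideal = objs.min(axis=0)
    best = [int(np.argmin(np.max(w * (objs - ideal), axis=1))) for w in W]
    keep, seen = [], set()
    for i, b in enumerate(best):
        if b not in seen:
            seen.add(b)
            keep.append(i)
    return W[keep]

objs = np.random.rand(200, 3)           # mock population objectives
print("weights kept:", len(prune_duplicates(W, objs)))
```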

* 22 pages, 19 figures 

Dominance Move: A Measure of Comparing Solution Sets in Multiobjective Optimization

Feb 01, 2017
Miqing Li, Xin Yao

One of the most common approaches to multiobjective optimization is to generate a solution set that well approximates the whole Pareto-optimal frontier to facilitate the later decision-making process. However, how to evaluate and compare the quality of different solution sets remains challenging. Existing measures typically require additional problem knowledge and information, such as a reference point or a substitute set for the Pareto-optimal frontier. In this paper, we propose a quality measure, called dominance move (DoM), to compare solution sets generated by multiobjective optimizers. Given two solution sets, DoM measures the minimum sum of move distances for one set to weakly Pareto dominate the other. DoM can be seen as a natural reflection of the difference between two solution sets: it captures all aspects of solution set quality, complies with Pareto dominance, and needs no additional problem knowledge or parameters. We present an exact method to calculate DoM in the biobjective case. We show the necessary condition for constructing the optimal partition of a solution set's minimum move, and accordingly propose an efficient algorithm to recursively calculate DoM. Finally, DoM is evaluated on several groups of artificial and real test cases, as well as by comparison with two well-established quality measures.
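
A naive feasible construction gives the flavour of DoM (minimisation assumed): greedily assign each point of the second set to the point of the first set that can dominate it most cheaply, then move each used point just far enough to weakly dominate everything assigned to it. This yields an upper bound on DoM, not the exact value computed by the paper's recursive algorithm.

```python
# Naive upper bound on the dominance move (DoM) between two solution
# sets P and Q (minimisation): greedy assignment, then each used p moves
# to the componentwise minimum needed to dominate its assigned points.
import numpy as np

def dom_upper_bound(P, Q):
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    # Cost for p to move just far enough to weakly dominate q.
    move_cost = lambda p, q: np.linalg.norm(np.maximum(p - q, 0))
    assign = {}
    for q in Q:
        i = int(np.argmin([move_cost(p, q) for p in P]))
        assign.setdefault(i, []).append(q)
    total = 0.0
    for i, qs in assign.items():
        target = np.minimum(P[i], np.min(qs, axis=0))  # dominates all qs
        total += np.linalg.norm(P[i] - target)
    return total

P = [(1.0, 5.0), (4.0, 2.0)]
Q = [(0.5, 5.0), (3.0, 3.0), (5.0, 1.0)]
print(dom_upper_bound(P, Q))  # 0 would mean P already weakly dominates Q
```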

* 23 pages, 10 figures 

A Unified Markov Chain Approach to Analysing Randomised Search Heuristics

Dec 09, 2013
Jun He, Feidun He, Xin Yao

The convergence, convergence rate and expected hitting time play fundamental roles in the analysis of randomised search heuristics. This paper presents a unified Markov chain approach to studying them. Using the approach, sufficient and necessary conditions for convergence in distribution are established. Then the average convergence rate is introduced for randomised search heuristics and its lower and upper bounds are derived. Finally, novel average drift analysis and backward drift analysis are proposed for bounding the expected hitting time. A computational study is also conducted to investigate the convergence, convergence rate and expected hitting time. The theoretical study is a priori and general, while the computational study is a posteriori and case-specific.
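
One of the quantities analysed, the expected hitting time, has a classical closed form for finite absorbing Markov chains: h = (I - T)^{-1} 1, where T is the transition matrix restricted to the transient (non-optimal) states. The snippet below works this out for a made-up 3-state chain; the chain itself is not from the paper.

```python
# Expected hitting time of an absorbing Markov chain via the fundamental
# matrix: h = (I - T)^{-1} 1, with T the transient-to-transient block.
import numpy as np

# States 0,1 are transient; state 2 (the optimum) is absorbing.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
T = P[:2, :2]
h = np.linalg.solve(np.eye(2) - T, np.ones(2))
print("expected hitting times from states 0 and 1:", h)
```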


Scene Text Detection and Recognition: The Deep Learning Era

Dec 06, 2018
Shangbang Long, Xin He, Cong Yao

With the rise and development of deep learning, computer vision has been tremendously transformed and reshaped. As an important research area in computer vision, scene text detection and recognition has been inescapably influenced by this wave of revolution, consequently entering the era of deep learning. In recent years, the community has witnessed substantial advancements in mindset, approach and performance. This survey aims to summarize and analyze the major changes and significant progress in scene text detection and recognition in the deep learning era. Through this article, we aim to: (1) introduce new insights and ideas; (2) highlight recent techniques and benchmarks; (3) look ahead into future trends. Specifically, we emphasize the dramatic differences brought by deep learning and the grand challenges that still remain. We expect that this review paper will serve as a reference for researchers in this field. Related resources are also collected and compiled in our Github repository: https://github.com/Jyouhou/SceneTextPapers.


On the Easiest and Hardest Fitness Functions

Feb 12, 2015
Jun He, Tianshi Chen, Xin Yao

The hardness of fitness functions is an important research topic in the field of evolutionary computation. In theory, its study can help in understanding the ability of evolutionary algorithms; in practice, it may provide a guideline for the design of benchmarks. The aim of this paper is to answer the following research questions: Given a fitness function class, which functions are the easiest with respect to an evolutionary algorithm? Which are the hardest? How are these functions constructed? The paper provides theoretical answers to these questions. The easiest and hardest fitness functions are constructed for an elitist (1+1) evolutionary algorithm maximising a class of fitness functions with the same optima. It is demonstrated that unimodal functions are the easiest and deceptive functions the hardest in terms of the time-fitness landscape. The paper also reveals that the easiest fitness function for one algorithm may become the hardest for another, and vice versa.
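
The headline result can be reproduced in miniature: an elitist (1+1) EA finds the optimum of a unimodal function (OneMax) quickly but, within the same budget, typically fails on a deceptive trap function sharing the same optimum. The problem size, budget, and mutation rate 1/n below are the usual textbook choices, not taken from the paper.

```python
# Elitist (1+1) EA on a unimodal function (OneMax, easy) vs. a deceptive
# trap function with the same optimum (hard).
import numpy as np

rng = np.random.default_rng(3)
n = 15

def onemax(x):
    return x.sum()

def trap(x):
    # Deceptive: fitness increases with the number of zeros, except at
    # the all-ones optimum, which is best of all.
    return n + 1 if x.sum() == n else n - x.sum()

def one_plus_one(f, budget=200000):
    x = rng.integers(0, 2, n)
    for it in range(1, budget + 1):
        y = np.where(rng.random(n) < 1.0 / n, 1 - x, x)
        if f(y) >= f(x):       # elitist acceptance
            x = y
        if x.sum() == n:
            return it
    return None                # optimum not reached within budget

print("OneMax solved at iteration:", one_plus_one(onemax))
print("Trap solved at iteration:  ", one_plus_one(trap))  # typically None
```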

* IEEE Transactions on evolutionary computation 19.2 (2015): 295-305 

On the Impact of Mutation-Selection Balance on the Runtime of Evolutionary Algorithms

Dec 14, 2010
Per Kristian Lehre, Xin Yao

The interplay between mutation and selection plays a fundamental role in the behaviour of evolutionary algorithms (EAs). However, this interplay is still not completely understood. This paper presents a rigorous runtime analysis of a non-elitist population-based EA that uses the linear ranking selection mechanism. The analysis focuses on how the balance between parameter $\eta$, controlling the selection pressure in linear ranking, and parameter $\chi$ controlling the bit-wise mutation rate, impacts the runtime of the algorithm. The results point out situations where a correct balance between selection pressure and mutation rate is essential for finding the optimal solution in polynomial time. In particular, it is shown that there exist fitness functions which can only be solved in polynomial time if the ratio between parameters $\eta$ and $\chi$ is within a narrow critical interval, and where a small change in this ratio can increase the runtime exponentially. Furthermore, it is shown quantitatively how the appropriate parameter choice depends on the characteristics of the fitness function. In addition to the original results on the runtime of EAs, this paper also introduces a very useful analytical tool, i.e., multi-type branching processes, to the runtime analysis of non-elitist population-based EAs.
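
The two operators whose balance is analysed are easy to state in code: linear ranking selection with pressure eta (the best individual is selected with probability roughly eta/N) and bitwise mutation with rate chi/n. The generational loop below is a generic non-elitist EA assumed for illustration; the paper's fitness functions and parameter regimes are not reproduced here.

```python
# Linear ranking selection (pressure eta) plus bitwise mutation (rate
# chi/n) inside a generic non-elitist generational EA.
import numpy as np

rng = np.random.default_rng(4)
n, N, eta, chi = 20, 40, 1.8, 0.6

def linear_ranking_probs(N, eta):
    # Ranks 0..N-1 from worst to best; probabilities sum to 1 because
    # the endpoints pair up as eta_plus + eta_minus = 2.
    ranks = np.arange(N)
    return ((2 - eta) + (2 * eta - 2) * ranks / (N - 1)) / N

def step(pop, fitness):
    order = np.argsort([fitness(x) for x in pop])     # worst..best
    probs = linear_ranking_probs(N, eta)
    parents = rng.choice(N, size=N, p=probs)          # ranks of parents
    new = []
    for i in parents:
        x = pop[order[i]]
        new.append(np.where(rng.random(n) < chi / n, 1 - x, x))
    return new

pop = [rng.integers(0, 2, n) for _ in range(N)]
for _ in range(100):
    pop = step(pop, fitness=lambda x: x.sum())        # OneMax for demo
print("best fitness after 100 generations:", max(x.sum() for x in pop))
```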


Dynamic Multi-objective Optimization of the Travelling Thief Problem

Feb 07, 2020
Daniel Herring, Michael Kirley, Xin Yao

The investigation of detailed and complex optimisation problem formulations that reflect realistic scenarios is a burgeoning field of research. A growing body of work exists on the Travelling Thief Problem (TTP), including multi-objective formulations and comparisons of exact and approximate methods to solve it. However, since many realistic scenarios are non-static in time, dynamic formulations have yet to be considered for the TTP. We address the definition of dynamics within three areas of the TTP: the city locations, the availability map, and the item values. Based on an analysis of how solutions are conserved between initial sets and the obtained non-dominated sets, we define a range of initialisation mechanisms using solutions generated by solvers, greedily, and randomly. These are then deployed to seed the population after a change, and performance in terms of hypervolume and spread is presented for comparison. Across a range of problems with varying TSP-component and KP-component sizes, we observe trends in line with existing conclusions: there is little benefit to using randomisation as a strategy for initialising solution populations when the optimal TSP and KP component solutions can be exploited. Whilst these separate optima do not guarantee good TTP solutions, when combined they provide better initial performance, and therefore in some examined instances the best response to dynamic changes. A combined approach that mixes solution generation methods to provide a composite population in response to dynamic changes improves performance in some instances of the different dynamic TTP formulations. We also identify the potential for further development of a more cooperative combined method that exploits known information about the problems more cohesively.


Synergizing Domain Expertise with Self-Awareness in Software Systems: A Patternized Architecture Guideline

Jan 20, 2020
Tao Chen, Rami Bahsoon, Xin Yao

Architectural patterns provide a reusable architectural solution for commonly recurring problems that can assist in designing software systems. In this regard, self-awareness architectural patterns are specialized patterns that leverage good engineering practices and experiences to help in designing self-awareness and self-adaptation of a software system. However, domain knowledge and engineers' expertise that are built over time are not explicitly linked to these patterns and the self-aware process. This linkage is important, as it can enrich the design patterns of these systems, which consequently leads to more effective and efficient self-aware and self-adaptive behaviours. This paper is an introductory work that highlights the importance of synergizing domain expertise into self-awareness in software systems, relying on well-defined underlying approaches. In particular, we present a holistic framework that classifies widely known representations used to obtain and maintain domain expertise, documenting their nature and the specific rules that permit different levels of synergy with self-awareness. Drawing on this framework, we describe mechanisms that can enrich existing patterns with engineers' expertise and knowledge of the domain. This, together with the framework, allows us to codify an intuitive step-by-step methodology that guides engineers in making design decisions when synergizing domain expertise into self-awareness and reveals its importance, in an attempt to keep 'engineers-in-the-loop'. Through three case studies, we demonstrate how the enriched patterns and the proposed framework and methodology can be applied in different domains, within which we quantitatively compare the actual benefits of incorporating engineers' expertise into self-awareness at alternative levels of synergy.

* submitted; 31 pages, 10 tables and 29 figures 

Negatively Correlated Search as a Parallel Exploration Search Strategy

Oct 16, 2019
Peng Yang, Ke Tang, Xin Yao

Parallel exploration is key to successful search. The recently proposed Negatively Correlated Search (NCS) achieves this ability by constructing a set of negatively correlated search processes, and has been applied to many real-world problems. The key technique in NCS is to explicitly model and maximize the diversity among the parallel search processes. However, the original diversity model was mostly devised by intuition, which introduced several drawbacks. In this paper, a mathematically principled diversity model is proposed to resolve these drawbacks, resulting in a new NCS framework. A new instantiation of NCS is also derived, and its effectiveness is verified on a set of multi-modal continuous optimization problems.
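
A simplified sketch conveys the underlying idea: several Gaussian search processes run in parallel, and each prefers candidates that are both fit and far from the other processes' current positions. The acceptance rule below, a fitness term minus a diversity bonus weighted by lam, is a simplification for illustration and is neither the original NCS model nor the new one proposed here.

```python
# Simplified negatively-correlated parallel search: each Gaussian
# process accepts candidates that trade off fitness against distance
# to the other processes.
import numpy as np

rng = np.random.default_rng(5)
dim, n_proc, sigma, lam, iters = 10, 5, 0.3, 0.5, 300

def f(x):  # multi-modal test function (Rastrigin), to be minimised
    return 10 * dim + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

X = rng.uniform(-5, 5, size=(n_proc, dim))  # one mean per process

def diversity(x, i):
    """Distance from x to the nearest other process."""
    others = np.delete(X, i, axis=0)
    return np.min(np.linalg.norm(others - x, axis=1))

for _ in range(iters):
    for i in range(n_proc):
        cand = X[i] + rng.normal(0, sigma, dim)
        # Accept if fitness minus the diversity bonus improves.
        if f(cand) - lam * diversity(cand, i) < f(X[i]) - lam * diversity(X[i], i):
            X[i] = cand

print("best value found:", min(f(x) for x in X))
```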


Algorithm Portfolio for Individual-based Surrogate-Assisted Evolutionary Algorithms

Apr 22, 2019
Hao Tong, Jialin Liu, Xin Yao

Surrogate-assisted evolutionary algorithms (SAEAs) are powerful optimisation tools for computationally expensive problems (CEPs). However, a randomly selected algorithm may fail on unknown problems due to the no-free-lunch theorems, and re-running the algorithm or trying other algorithms to obtain a better solution costs additional computational resources, which is especially serious for CEPs. In this paper, we consider an algorithm portfolio for SAEAs to reduce the risk of choosing an inappropriate algorithm for CEPs. We propose two portfolio frameworks for very expensive problems in which the maximal number of fitness evaluations is only five times the problem's dimension. One framework, named Par-IBSAEA, runs all algorithm candidates in parallel, while a more sophisticated framework, named UCB-IBSAEA, employs the Upper Confidence Bound (UCB) policy from reinforcement learning to help select the most appropriate algorithm at each iteration. An effective reward definition is proposed for the UCB policy. We consider three state-of-the-art individual-based SAEAs on different problems and compare them to the portfolios built from their instances on several benchmark problems given limited computation budgets. Our experimental studies demonstrate that the proposed portfolio frameworks significantly outperform any single algorithm on the set of benchmark problems.
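
The UCB policy at the heart of UCB-IBSAEA is standard and easy to sketch: pick the portfolio member with the best upper confidence bound on its average reward. The reward signal below (a noisy per-run score from three hypothetical algorithms) is a stand-in; the paper defines its own reward tailored to surrogate-assisted EAs.

```python
# UCB1-style algorithm selection for a portfolio: balance the observed
# average reward of each member against an exploration bonus.
import math
import random

def ucb_select(counts, rewards, t, c=math.sqrt(2)):
    """counts[i]: times algorithm i ran; rewards[i]: summed rewards."""
    for i, n_i in enumerate(counts):
        if n_i == 0:
            return i                      # try every algorithm once first
    return max(range(len(counts)),
               key=lambda i: rewards[i] / counts[i]
               + c * math.sqrt(math.log(t) / counts[i]))

# Toy usage with three hypothetical algorithms of different quality.
random.seed(0)
algos = [lambda: random.gauss(0.3, 0.1),
         lambda: random.gauss(0.5, 0.1),
         lambda: random.gauss(0.4, 0.1)]
counts, rewards = [0] * 3, [0.0] * 3
for t in range(1, 101):
    i = ucb_select(counts, rewards, t)
    r = algos[i]()                        # run one iteration, get reward
    counts[i] += 1
    rewards[i] += r
print("selections per algorithm:", counts)  # the middle one should dominate
```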


Learning Topological Representation for Networks via Hierarchical Sampling

Feb 15, 2019
Guoji Fu, Chengbin Hou, Xin Yao

Topological information is essential for studying the relationships between nodes in a network. Recently, Network Representation Learning (NRL), which projects a network into a low-dimensional vector space, has shown its advantages in analyzing large-scale networks. However, most existing NRL methods are designed to preserve the local topology of a network and fail to capture its global topology. To tackle this issue, we propose a new NRL framework, named HSRL, to help existing NRL methods capture both the local and global topological information of a network. Specifically, HSRL recursively compresses an input network into a series of smaller networks using a community-awareness compressing strategy. Then, an existing NRL method is used to learn node embeddings for each compressed network. Finally, the node embeddings of the input network are obtained by concatenating the node embeddings from all compressed networks. Empirical studies of link prediction on five real-world datasets demonstrate the advantages of HSRL over state-of-the-art methods.
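
The pipeline is concrete enough to sketch end to end: recursively merge communities into supernodes, embed every level, and concatenate each original node's embeddings across levels. Below, a simple spectral embedding stands in for whatever base NRL method HSRL wraps, and the community detector and recursion depth are illustrative choices.

```python
# Structural sketch of an HSRL-style pipeline: community-based graph
# compression, per-level embedding, and concatenation across levels.
import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities

def spectral_embed(G, d=4):
    """Stand-in base NRL method: top eigenvectors of the adjacency matrix."""
    A = nx.to_numpy_array(G)
    vals, vecs = np.linalg.eigh(A)
    return {u: vecs[i, -d:] for i, u in enumerate(G.nodes())}

def compress(G):
    """Merge each community into one supernode; return graph + mapping."""
    comms = list(greedy_modularity_communities(G))
    mapping = {u: ci for ci, comm in enumerate(comms) for u in comm}
    H = nx.Graph()
    H.add_nodes_from(range(len(comms)))
    for u, v in G.edges():
        if mapping[u] != mapping[v]:
            H.add_edge(mapping[u], mapping[v])
    return H, mapping

def hsrl_embed(G, levels=2, d=4):
    embeds, chain = [spectral_embed(G, d)], {u: u for u in G.nodes()}
    for _ in range(levels):
        G, mapping = compress(G)
        chain = {u: mapping[s] for u, s in chain.items()}  # node -> supernode
        emb = spectral_embed(G, d)
        embeds.append({u: emb[chain[u]] for u in chain})
    return {u: np.concatenate([e[u] for e in embeds]) for u in chain}

Z = hsrl_embed(nx.karate_club_graph())
print(len(Z), "nodes embedded with dimension", len(next(iter(Z.values()))))
```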


A Parallel Divide-and-Conquer based Evolutionary Algorithm for Large-scale Optimization

Dec 06, 2018
Peng Yang, Ke Tang, Xin Yao

Large-scale optimization problems that involve thousands of decision variables arise extensively in various industrial areas. Although evolutionary algorithms (EAs) are powerful optimization tools for many real-world applications, they fail to solve these emerging large-scale problems both effectively and efficiently. In this paper, we propose a novel Divide-and-Conquer (DC) based EA that not only produces high-quality solutions by solving sub-problems separately, but also exploits the power of parallel computing by solving the sub-problems simultaneously. Existing DC-based EAs, which were deemed to enjoy the same advantages as the proposed algorithm, are shown to be practically incompatible with the parallel computing scheme unless trade-offs are made that compromise solution quality.
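
The divide-and-conquer scheme can be sketched generically: split the decision variables into disjoint groups and optimise each group independently against the same frozen context vector, which is what makes the sub-problems embarrassingly parallel. The random-search subroutine and separable test function below are assumed for illustration and are not the algorithm proposed in the paper.

```python
# Generic divide-and-conquer sketch for large-scale optimisation: each
# variable group is optimised with the other variables frozen at a
# shared context vector, so groups can be dispatched to parallel workers.
import numpy as np

rng = np.random.default_rng(6)
dim, group_size, evals_per_group = 1000, 50, 200

def f(x):  # separable sphere function, to be minimised
    return float(np.sum(x ** 2))

context = rng.uniform(-5, 5, dim)   # current full solution
groups = [np.arange(i, i + group_size) for i in range(0, dim, group_size)]

def solve_subproblem(idx):
    """Random search over the variables in idx, others frozen at context."""
    x = context.copy()
    best_x, best_f = x[idx].copy(), f(x)
    for _ in range(evals_per_group):
        x[idx] = best_x + rng.normal(0, 0.5, len(idx))
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x[idx].copy(), fx
    return best_x

# The sub-problems are independent, so this loop is embarrassingly
# parallel (e.g. multiprocessing.Pool.map over `groups`).
solution = context.copy()
for idx in groups:
    solution[idx] = solve_subproblem(idx)
print("before:", f(context), "after:", f(solution))
```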

* 12 pages, 0 figures 

Automatic Construction of Parallel Portfolios via Explicit Instance Grouping

Apr 17, 2018
Shengcai Liu, Ke Tang, Xin Yao

Simultaneously utilizing several complementary solvers is a simple yet effective strategy for solving computationally hard problems. However, manually building such solver portfolios typically requires considerable domain knowledge and plenty of human effort. As an alternative, automatic construction of parallel portfolios (ACPP) aims at automatically building effective parallel portfolios based on a given problem instance set and a given rich design space. One promising way to solve the ACPP problem is to explicitly group the instances into different subsets and promote a component solver to handle each of them. This paper investigates solving ACPP from this perspective, and especially studies how to obtain a good instance grouping. The experimental results show that the parallel portfolios constructed by the proposed method achieve consistently superior performance to those constructed by the state-of-the-art ACPP methods, and can even rival sophisticated hand-designed parallel solvers.
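
One plausible reading of the explicit-grouping idea, sketched under assumed details: cluster the training instances by feature vectors and configure one component solver per cluster. The feature extraction and the tune_solver routine below are hypothetical placeholders; the paper's construction is considerably more elaborate.

```python
# Instance grouping for parallel portfolio construction: cluster
# instances by features, then build one component solver per cluster.
import numpy as np
from sklearn.cluster import KMeans

def build_parallel_portfolio(instances, features, k, tune_solver):
    """features: one vector per instance; tune_solver: hypothetical
    routine returning a solver configured on a list of instances."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    portfolio = []
    for c in range(k):
        group = [inst for inst, l in zip(instances, labels) if l == c]
        portfolio.append(tune_solver(group))   # one component per group
    return portfolio

# Toy demo with random features and a trivial "tuner"; a new instance
# would then be attacked by all components simultaneously.
insts = [f"inst{i}" for i in range(12)]
feats = np.random.rand(12, 4)
portfolio = build_parallel_portfolio(insts, feats, k=3,
                                     tune_solver=lambda group: ("solver", group))
print(len(portfolio), "component solvers built")
```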


Experience-based Optimization: A Coevolutionary Approach

Apr 17, 2018
Shengcai Liu, Ke Tang, Xin Yao

This paper studies improving solvers based on their past solving experiences, focusing on improving solvers through offline training. Specifically, the key issues of offline training methods are discussed, and research belonging to this category but from different areas is reviewed in a unified framework. Existing training methods generally adopt a two-stage strategy in which selecting the training instances and training the solver are treated as two independent phases. This paper proposes a new training method, dubbed LiangYi, which addresses these two issues simultaneously. LiangYi includes a training module for a population-based solver and an instance sampling module for updating the training instances. The idea behind LiangYi is to promote the population-based solver by training it (with the training module) to improve its performance on those instances (discovered by the sampling module) on which it performs badly, while keeping the good performance obtained on previous instances. An instantiation of LiangYi for the Travelling Salesman Problem is also proposed. Empirical results on a huge testing set containing 10000 instances showed that LiangYi could train solvers that perform significantly better than solvers trained by other state-of-the-art training methods. Moreover, empirical investigation of the behaviours of LiangYi confirmed that it was able to continuously improve the solver through training.
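
The alternation described above can be captured in a short loop, with every concrete piece assumed: a toy "solver" reduced to a single strength parameter, a training step that nudges it toward instances it currently fails, and a sampling step that seeks new instances just beyond its reach. This only illustrates the coevolutionary training/sampling interplay, not LiangYi's actual modules.

```python
# Toy sketch of a LiangYi-style alternation between a training module
# and an instance-sampling module. The "solver" is a single number s
# that solves instance v iff s >= v; both modules are deliberately
# trivial stand-ins for the paper's population-based solver and sampler.
import random

random.seed(0)
solver = 0.0
training_set = [random.uniform(0, 1) for _ in range(20)]

for epoch in range(10):
    # Training module: improve the solver on instances it currently fails.
    hard = [v for v in training_set if solver < v]
    if hard:
        solver += 0.5 * (max(hard) - solver)
    # Sampling module: add a new instance just beyond the solver's reach.
    training_set.append(solver + random.uniform(0, 0.2))

print("final solver strength:", round(solver, 3))
print("unsolved instances:", sum(v > solver for v in training_set))
```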


Kernel Truncated Regression Representation for Robust Subspace Clustering

May 23, 2017
Liangli Zhen, Dezhong Peng, Xin Yao

Subspace clustering aims to group data points into multiple clusters, each of which corresponds to one subspace. Most existing subspace clustering methods assume that the data points can be linearly represented by each other in the input space. In practice, however, this assumption is hard to satisfy. To achieve nonlinear subspace clustering, we propose a novel method consisting of the following three steps: 1) projecting the data into a hidden space in which the data points can be linearly reconstructed from each other; 2) calculating the globally linear reconstruction coefficients in the kernel space; 3) truncating the trivial coefficients to achieve robustness and block-diagonality, and then performing clustering by solving a graph Laplacian problem. Our method has the advantages of a closed-form solution and the capacity to cluster data points that lie in nonlinear subspaces. The first advantage makes our method efficient in handling large-scale datasets, and the second enables it to address the nonlinear subspace clustering challenge. Extensive experiments on five real-world datasets demonstrate the effectiveness and efficiency of the proposed method in comparison with ten state-of-the-art approaches regarding four evaluation metrics.
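
The three listed steps map directly onto a short implementation under generic choices (RBF kernel, ridge regulariser, keep-top-k truncation): the closed-form coefficients C = (K + lambda*I)^{-1} K, coefficient truncation, and spectral clustering of the resulting affinity graph. The kernel and parameter values are illustrative, not the paper's.

```python
# Kernel linear reconstruction with closed-form coefficients, coefficient
# truncation, and spectral clustering of the resulting affinity graph.
import numpy as np
from scipy.linalg import solve
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def kernel_truncated_clustering(X, n_clusters, lam=0.1, keep=8):
    K = rbf_kernel(X)                                    # kernel matrix
    n = K.shape[0]
    C = solve(K + lam * np.eye(n), K)                    # closed form
    np.fill_diagonal(C, 0)                               # no self-representation
    # Truncation: keep the `keep` largest coefficients per column.
    thresh = np.sort(np.abs(C), axis=0)[-keep]
    C[np.abs(C) < thresh] = 0
    W = np.abs(C) + np.abs(C).T                          # affinity graph
    # Spectral clustering via the normalised graph Laplacian.
    d = W.sum(axis=1)
    L = np.eye(n) - W / np.sqrt(np.outer(d, d) + 1e-12)
    _, vecs = np.linalg.eigh(L)
    return KMeans(n_clusters, n_init=10).fit_predict(vecs[:, :n_clusters])

X = np.vstack([np.random.randn(30, 5) + 3, np.random.randn(30, 5) - 3])
print(kernel_truncated_clustering(X, n_clusters=2))
```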

* 12 pages 
