Models, code, and papers for "John Quan":

On the Local Correctness of L^1 Minimization for Dictionary Learning

Jan 29, 2011
Quan Geng, Huan Wang, John Wright

The idea that many important classes of signals can be well-represented by linear combinations of a small set of atoms selected from a given dictionary has had a dramatic impact on the theory and practice of signal processing. For practical problems in which an appropriate sparsifying dictionary is not known ahead of time, a very popular and successful heuristic is to search for a dictionary that minimizes an appropriate sparsity surrogate over a given set of sample data. While this idea is appealing, the behavior of these algorithms is largely a mystery; although there is a body of empirical evidence suggesting they do learn very effective representations, there is little theory to guarantee when they will behave correctly, or when the learned dictionary can be expected to generalize. In this paper, we take a step towards such a theory. We show that under mild hypotheses, the dictionary learning problem is locally well-posed: the desired solution is indeed a local minimum of the $\ell^1$ norm. Namely, if $\mathbf{A} \in \mathbb{R}^{m \times n}$ is an incoherent (and possibly overcomplete) dictionary, and the coefficients $\mathbf{X} \in \mathbb{R}^{n \times p}$ follow a random sparse model, then with high probability $(\mathbf{A}, \mathbf{X})$ is a local minimum of the $\ell^1$ norm over the manifold of factorizations $(\mathbf{A}', \mathbf{X}')$ satisfying $\mathbf{A}' \mathbf{X}' = \mathbf{Y}$, provided the number of samples $p = \Omega(n^3 k)$. For overcomplete $\mathbf{A}$, this is the first result showing that the dictionary learning problem is locally solvable. Our analysis draws on tools developed for the problem of completing a low-rank matrix from a small subset of its entries, which allow us to overcome a number of technical obstacles, in particular the absence of the restricted isometry property.
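
The alternating-minimization heuristic that the abstract refers to (and whose local correctness the paper studies) can be sketched in a few lines of numpy: sparse coding of the coefficients by ISTA under an $\ell^1$ penalty, followed by a least-squares dictionary update with renormalised atoms. This is a minimal illustration of the objective, not the paper's analysis; all names, step counts and the regularisation weight are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    # Element-wise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dictionary_learning_l1(Y, n_atoms, n_iters=50, lam=0.1, ista_steps=100):
    """Alternate between sparse coding (ISTA) and a dictionary update for Y ~= A X."""
    m, p = Y.shape
    rng = np.random.default_rng(0)
    A = rng.standard_normal((m, n_atoms))
    A /= np.linalg.norm(A, axis=0)                     # unit-norm atoms
    X = np.zeros((n_atoms, p))
    for _ in range(n_iters):
        # Sparse coding: minimise 0.5*||Y - A X||_F^2 + lam*||X||_1 over X.
        L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
        for _ in range(ista_steps):
            X = soft_threshold(X - A.T @ (A @ X - Y) / L, lam / L)
        # Dictionary update: least-squares fit, then renormalise the atoms.
        A = Y @ np.linalg.pinv(X)
        A /= np.linalg.norm(A, axis=0) + 1e-12
    return A, X
```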

* 37 pages, 1 figure 

Prioritized Experience Replay

Feb 25, 2016
Tom Schaul, John Quan, Ioannis Antonoglou, David Silver

Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.
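
A minimal sketch of the proportional prioritization scheme described above follows. A practical implementation stores priorities in a sum-tree for O(log n) sampling, whereas this toy version samples in O(n); class and parameter names (alpha, beta, eps) are illustrative defaults.

```python
import numpy as np

class ProportionalReplay:
    """Toy proportional prioritized replay buffer (no sum-tree)."""

    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity, self.alpha, self.beta, self.eps = capacity, alpha, beta, eps
        self.buffer, self.priorities = [], []

    def add(self, transition):
        # New transitions get the current maximum priority so they are replayed at least once.
        p_max = max(self.priorities, default=1.0)
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0); self.priorities.pop(0)
        self.buffer.append(transition); self.priorities.append(p_max)

    def sample(self, batch_size):
        probs = np.array(self.priorities) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights correct the bias introduced by non-uniform sampling.
        weights = (len(self.buffer) * probs[idx]) ** (-self.beta)
        weights /= weights.max()
        return idx, [self.buffer[i] for i in idx], weights

    def update_priorities(self, idx, td_errors):
        for i, delta in zip(idx, td_errors):
            self.priorities[i] = abs(delta) + self.eps
```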

* Published at ICLR 2016 

Fully Supervised Speaker Diarization

Oct 27, 2018
Aonan Zhang, Quan Wang, Zhenyao Zhu, John Paisley, Chong Wang

In this paper, we propose a fully supervised speaker diarization approach, named unbounded interleaved-state recurrent neural networks (UIS-RNN). Given extracted speaker-discriminative embeddings (a.k.a. d-vectors) from input utterances, each individual speaker is modeled by a parameter-sharing RNN, while the RNN states for different speakers interleave in the time domain. This RNN is naturally integrated with a distance-dependent Chinese restaurant process (ddCRP) to accommodate an unknown number of speakers. Our system is fully supervised and is able to learn from examples where time-stamped speaker labels are annotated. We achieved a 7.6% diarization error rate on NIST SRE 2000 CALLHOME, which is better than the state-of-the-art method using spectral clustering. Moreover, our method decodes in an online fashion while most state-of-the-art systems rely on offline clustering.
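
The generative backbone can be illustrated with the (plain, non-distance-dependent) Chinese restaurant process that decides whether the next segment joins an existing speaker or starts a new one; the actual UIS-RNN couples such a prior with per-speaker RNN likelihoods over d-vectors, so this is only a toy sketch with an illustrative concentration parameter.

```python
import numpy as np

def crp_assignment_probs(speaker_counts, alpha=1.0):
    """Prior probability of assigning the next segment to each existing speaker
    or to a brand-new one, under a plain Chinese restaurant process."""
    counts = np.array(speaker_counts, dtype=float)
    probs = np.append(counts, alpha)          # existing speakers ..., new speaker
    return probs / probs.sum()

# Example: 3 speakers seen so far, with 5, 2 and 1 segments each.
print(crp_assignment_probs([5, 2, 1]))  # -> [0.556, 0.222, 0.111, 0.111]
```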

* To be submitted to ICASSP 2019 

General non-linear Bellman equations

Jul 08, 2019
Hado van Hasselt, John Quan, Matteo Hessel, Zhongwen Xu, Diana Borsa, Andre Barreto

We consider a general class of non-linear Bellman equations. These open up a design space of algorithms that have interesting properties, which has two potential advantages. First, we can perhaps better model natural phenomena. For instance, hyperbolic discounting has been proposed as a mathematical model that matches human and animal data well, and can therefore be used to explain preference orderings. We present a different mathematical model that matches the same data, but that makes very different predictions under other circumstances. Second, the larger design space can perhaps lead to algorithms that perform better, similar to how discount factors are often used in practice even when the true objective is undiscounted. We show that many of the resulting Bellman operators still converge to a fixed point, and therefore that the resulting algorithms are reasonable and inherit many beneficial properties of their linear counterparts.
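
In tabular form, the generalisation amounts to replacing the usual target $r + \gamma v(s')$ with a non-linear backup $f(r, v(s'))$. The sketch below shows the shape of such an update; the saturating backup is purely illustrative and is not one of the forms analysed in the paper.

```python
def nonlinear_td_update(value, state, reward, next_state, f, lr=0.1):
    """One tabular TD update for a generalised Bellman equation v(s) = E[f(r, v(s'))]."""
    target = f(reward, value[next_state])
    value[state] += lr * (target - value[state])
    return value

# The standard discounted backup, and an illustrative non-linear alternative.
linear_backup = lambda r, v, gamma=0.99: r + gamma * v
saturating_backup = lambda r, v, gamma=0.99: r + gamma * v / (1.0 + abs(v))
```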


Distributed Prioritized Experience Replay

Mar 02, 2018
Dan Horgan, John Quan, David Budden, Gabriel Barth-Maron, Matteo Hessel, Hado van Hasselt, David Silver

We propose a distributed architecture for deep reinforcement learning at scale that enables agents to learn effectively from orders of magnitude more data than previously possible. The algorithm decouples acting from learning: the actors interact with their own instances of the environment by selecting actions according to a shared neural network, and accumulate the resulting experience in a shared experience replay memory; the learner replays samples of experience and updates the neural network. The architecture relies on prioritized experience replay to focus only on the most significant data generated by the actors. Our architecture substantially improves the state of the art on the Arcade Learning Environment, achieving better final performance in a fraction of the wall-clock training time.
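
The decoupling of acting from learning can be sketched as two loops sharing a prioritized replay memory. The env/network/replay objects and their methods below (select_action, initial_priorities, add_batch, sample, update_priorities, a gym-style step) are hypothetical placeholders, not the paper's code.

```python
def actor_loop(env, policy_net, replay, local_buffer_size=100):
    """One actor: act with the shared network, compute initial priorities locally,
    and ship batches of experience to the central replay memory."""
    local, state = [], env.reset()
    while True:
        action = policy_net.select_action(state)              # e.g. epsilon-greedy, per-actor epsilon
        next_state, reward, done, _ = env.step(action)        # gym-style environment assumed
        local.append((state, action, reward, next_state, done))
        if len(local) >= local_buffer_size:
            priorities = policy_net.initial_priorities(local)  # |TD error| computed on the actor
            replay.add_batch(local, priorities)
            local = []
        state = env.reset() if done else next_state

def learner_loop(replay, learner_net, batch_size=512):
    """The learner: sample prioritized batches, update the network, feed back new priorities."""
    while True:
        idx, batch, weights = replay.sample(batch_size)
        td_errors = learner_net.train_step(batch, weights)
        replay.update_priorities(idx, td_errors)
```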

* Accepted to International Conference on Learning Representations 2018 

Reproducibility of an airway tapering measurement in CT with application to bronchiectasis

Sep 16, 2019
Kin Quan, Ryutaro Tanno, Rebecca J. Shipley, Jeremy S. Brown, Joseph Jacob, John R. Hurst, David J. Hawkes

Purpose: This paper proposes a pipeline to acquire a scalar tapering measurement from the carina to the most distal point of an individual airway visible on CT. We show the applicability of using tapering measurements on clinically acquired data by quantifying the reproducibility of the tapering measure. Methods: We generate a spline from the centreline of an airway to measure the area and arclength at contiguous intervals. The tapering measurement is the gradient of the linear regression between area in log space and arclength. The reproducibility of the measure was assessed by analysing different radiation doses, voxel sizes and reconstruction kernels on single-timepoint and longitudinal CT scans, and by evaluating the effect of airway bifurcations. Results: Using 74 airways from 10 CT scans, we show a statistical difference, p = 3.4 $\times$ 10$^{-4}$, in tapering between healthy airways (n = 35) and those affected by bronchiectasis (n = 39). The difference between the means of the two populations was 0.011 mm$^{-1}$ and the difference between the medians of the two populations was 0.006 mm$^{-1}$. The tapering measurement retained a 95% confidence interval of $\pm$0.005 mm$^{-1}$ in a simulated 25 mAs scan and retained a 95% confidence interval of $\pm$0.005 mm$^{-1}$ on simulated CTs up to 1.5 times the original voxel size. Conclusion: We have established an estimate of the precision of the tapering measurement and estimated the effect of simulated voxel size and CT scan dose on that precision. We recommend that scanner calibration be undertaken with the phantoms as described, on the specific CT scanner, radiation dose and reconstruction algorithm that is to be used in any quantitative studies. Our code is available at https://github.com/quan14/AirwayTaperingInCT
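
The tapering measurement itself reduces to a linear fit of log(area) against arclength; a minimal numpy reading of that definition (ignoring the spline fitting and orthogonal-plane area estimation performed by the actual pipeline) is:

```python
import numpy as np

def tapering_gradient(arclength_mm, area_mm2):
    """Scalar tapering measure: the slope of a linear fit of log(cross-sectional area)
    against arclength along the airway centreline."""
    log_area = np.log(np.asarray(area_mm2, dtype=float))
    slope, _intercept = np.polyfit(np.asarray(arclength_mm, dtype=float), log_area, 1)
    return slope  # units: mm^-1; more negative means faster tapering

# Toy example: an airway whose cross-sectional area halves every 20 mm.
s = np.linspace(0, 100, 50)
area = 30.0 * 0.5 ** (s / 20.0)
print(tapering_gradient(s, area))  # ~ -0.0347 mm^-1
```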

* J. Med. Imag. 6(3), 034003 (2019) 
* 55 pages, 18 figures, The manuscript was originally published in Journal of Medical Imaging 

Universal Successor Features Approximators

Dec 18, 2018
Diana Borsa, André Barreto, John Quan, Daniel Mankowitz, Rémi Munos, Hado van Hasselt, David Silver, Tom Schaul

The ability of a reinforcement learning (RL) agent to learn about many reward functions at the same time has many potential benefits, such as the decomposition of complex tasks into simpler ones, the exchange of information between tasks, and the reuse of skills. We focus on one aspect in particular, namely the ability to generalise to unseen tasks. Parametric generalisation relies on the interpolation power of a function approximator that is given the task description as input; among its most common forms are universal value function approximators (UVFAs). Another way to generalise to new tasks is to exploit structure in the RL problem itself. Generalised policy improvement (GPI) combines solutions of previous tasks into a policy for the unseen task; this relies on instantaneous policy evaluation of old policies under the new reward function, which is made possible through successor features (SFs). Our proposed universal successor features approximators (USFAs) combine the advantages of all of these, namely the scalability of UVFAs, the instant inference of SFs, and the strong generalisation of GPI. We discuss the challenges involved in training a USFA and its generalisation properties, and demonstrate its practical benefits and transfer abilities on a large-scale domain in which the agent has to navigate in a first-person perspective three-dimensional environment.
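
The way a USFA is used at decision time can be sketched directly from the description above: evaluate the successor features for several candidate policies z, turn them into action values with the task vector w, and apply GPI by taking the maximum over policies. The `usfa` callable and the shapes below are illustrative stand-ins for the learned network.

```python
import numpy as np

def gpi_action_values(usfa, state, task_w, policy_zs):
    """GPI with a universal SF approximator: usfa(state, z) is assumed to return an
    array of shape [num_actions, num_features]; task_w has shape [num_features]."""
    q_per_policy = np.stack([usfa(state, z) @ task_w for z in policy_zs])  # [num_z, num_actions]
    return q_per_policy.max(axis=0)                                        # best policy per action

def act(usfa, state, task_w, policy_zs):
    return int(np.argmax(gpi_action_values(usfa, state, task_w, policy_zs)))
```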


Transfer in Deep Reinforcement Learning Using Successor Features and Generalised Policy Improvement

Jan 30, 2019
André Barreto, Diana Borsa, John Quan, Tom Schaul, David Silver, Matteo Hessel, Daniel Mankowitz, Augustin Žídek, Rémi Munos

The ability to transfer skills across tasks has the potential to scale up reinforcement learning (RL) agents to environments currently out of reach. Recently, a framework based on two ideas, successor features (SFs) and generalised policy improvement (GPI), has been introduced as a principled way of transferring skills. In this paper we extend the SFs & GPI framework in two ways. One of the basic assumptions underlying the original formulation of SFs & GPI is that rewards for all tasks of interest can be computed as linear combinations of a fixed set of features. We relax this constraint and show that the theoretical guarantees supporting the framework can be extended to any set of tasks that only differ in the reward function. Our second contribution is to show that one can use the reward functions themselves as features for future tasks, without any loss of expressiveness, thus removing the need to specify a set of features beforehand. This makes it possible to combine SFs & GPI with deep learning in a more stable way. We empirically verify this claim on a complex 3D environment where observations are images from a first-person perspective. We show that the transfer promoted by SFs & GPI leads to very good policies on unseen tasks almost instantaneously. We also describe how to learn policies specialised to the new tasks in a way that allows them to be added to the agent's set of skills, and thus be reused in the future.
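
The two extensions can be sketched together: fit the task vector w by regressing the new task's rewards on a set of features (which, as the paper proposes, may themselves be the reward functions of earlier tasks), then act by GPI over the successor features of previously learned policies. Shapes and names below are illustrative.

```python
import numpy as np

def fit_task_weights(phi, rewards):
    """Least-squares fit of w in r ~= phi . w; phi has shape [num_samples, num_features],
    where the features can be the rewards of previously solved tasks."""
    w, *_ = np.linalg.lstsq(phi, rewards, rcond=None)
    return w

def gpi_policy(successor_features, w):
    """GPI: act greedily w.r.t. the best previous policy evaluated on the new task.
    successor_features[i] has shape [num_actions, num_features] for policy i."""
    q = np.stack([psi @ w for psi in successor_features])  # [num_policies, num_actions]
    return int(q.max(axis=0).argmax())
```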

* Published at ICML 2018 

Distral: Robust Multitask Reinforcement Learning

Jul 13, 2017
Yee Whye Teh, Victor Bapst, Wojciech Marian Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, Razvan Pascanu

Most deep reinforcement learning algorithms are data inefficient in complex and rich environments, limiting their applicability to many scenarios. One direction for improving data efficiency is multitask learning with shared neural network parameters, where efficiency may be improved through transfer across related tasks. In practice, however, this is not usually observed, because gradients from different tasks can interfere negatively, making learning unstable and sometimes even less data efficient. Another issue is the different reward schemes between tasks, which can easily lead to one task dominating the learning of a shared model. We propose a new approach for joint training of multiple tasks, which we refer to as Distral (Distill & transfer learning). Instead of sharing parameters between the different workers, we propose to share a "distilled" policy that captures common behaviour across tasks. Each worker is trained to solve its own task while constrained to stay close to the shared policy, while the shared policy is trained by distillation to be the centroid of all task policies. Both aspects of the learning process are derived by optimizing a joint objective function. We show that our approach supports efficient transfer on complex 3D environments, outperforming several related methods. Moreover, the proposed learning process is more robust and more stable---attributes that are critical in deep reinforcement learning.
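
The "stay close to a shared distilled policy" idea can be sketched as a shaped reward for each worker plus a centroid-style distilled policy. In the paper the shared policy is learned by distillation and the coefficients are tuned, so the closed-form centroid and the default alpha/beta values below are illustrative only.

```python
import numpy as np

def distral_augmented_reward(reward, log_pi0_a, log_pi_i_a, alpha=0.5, beta=5.0):
    """Shaped reward for worker i: task reward, plus a bonus for staying close to the
    distilled policy pi0, minus an entropy-like penalty on the worker policy pi_i."""
    return reward + (alpha / beta) * log_pi0_a - (1.0 / beta) * log_pi_i_a

def distill_policy(task_policies, weights=None):
    """A simple geometric-mean style centroid of task policies, renormalised; the paper
    instead trains pi0 by distillation from the workers."""
    logs = np.log(np.stack(task_policies) + 1e-12)
    avg = np.average(logs, axis=0, weights=weights)
    p = np.exp(avg)
    return p / p.sum(axis=-1, keepdims=True)
```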


Tapering Analysis of Airways with Bronchiectasis

Sep 14, 2019
Kin Quan, Rebecca J. Shipley, Ryutaro Tanno, Graeme McPhillips, Vasileios Vavourakis, David Edwards, Joseph Jacob, John R. Hurst, David J. Hawkes

Bronchiectasis is the permanent dilation of airways. Patients with the disease can suffer recurrent exacerbations, reducing their quality of life. The gold standard to diagnose and monitor bronchiectasis is inspection of chest computed tomography (CT) scans. A clinician examines the broncho-arterial ratio to determine if an airway is bronchiectatic. The visual analysis assumes the blood vessel diameter remains constant, although this assumption is disputed in the literature. We propose a simple measurement of tapering along the airways to diagnose and monitor bronchiectasis. To this end, we constructed a pipeline to measure the cross-sectional area along the airways at contiguous intervals, starting from the carina to the most distal point observable. Using a phantom with calibrated 3D printed structures, the precision and accuracy of our algorithm extend to the sub-voxel level. The tapering measurement is robust to bifurcations along the airway and was applied to chest CT images acquired in clinical practice. The result is a statistical difference in tapering rate between airways with bronchiectasis and controls. Our code is available at https://github.com/quan14/AirwayTaperingInCT.
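
The group comparison reported above can be reproduced, given per-airway tapering rates, with a standard two-sample test; the abstract does not state which test was used, so the Mann-Whitney U test shown here is a reasonable but assumed choice.

```python
from scipy import stats

def compare_tapering(tapering_bronchiectasis, tapering_controls):
    """Two-sample test of whether tapering rates differ between bronchiectatic
    airways and controls (test choice is illustrative)."""
    u, p_value = stats.mannwhitneyu(tapering_bronchiectasis, tapering_controls,
                                    alternative='two-sided')
    return u, p_value
```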

* Proc. SPIE 10574, Medical Imaging 2018: Image Processing, 105742G (2 March 2018) 
* 12 pages, 7 figures. Previously submitted for SPIE Medical Imaging, 2018, Houston, Texas, United States 

Modelling Airway Geometry as Stock Market Data using Bayesian Changepoint Detection

Jun 28, 2019
Kin Quan, Ryutaro Tanno, Michael Duong, Arjun Nair, Rebecca Shipley, Mark Jones, Christopher Brereton, John Hurst, David Hawkes, Joseph Jacob

Numerous lung diseases, such as idiopathic pulmonary fibrosis (IPF), exhibit dilation of the airways. Accurate measurement of dilatation enables assessment of the progression of disease. Unfortunately, the combination of image noise and airway bifurcations causes high variability in the profiles of cross-sectional areas, rendering the identification of affected regions very difficult. Here we introduce a noise-robust method for automatically detecting the location of progressive airway dilatation given two profiles of the same airway acquired at different time points. We propose a probabilistic model of abrupt relative variations between profiles and perform inference via Reversible Jump Markov Chain Monte Carlo sampling. We demonstrate the efficacy of the proposed method on two datasets: (i) images of healthy airways with simulated dilatation; (ii) pairs of real images of IPF-affected airways acquired at 1-year intervals. Our model is able to detect the starting location of airway dilatation with an accuracy of 2.5 mm on simulated data. The experiments on the IPF dataset display reasonable agreement with radiologists. We can compute a relative change in airway volume that may be useful for quantifying IPF disease progression.
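
A heavily simplified stand-in for the model conveys the idea: work with the log ratio of the two area profiles, assume no change before a candidate changepoint and a constant dilatation after it, and score each location. The paper's actual model allows an unknown number of changepoints and performs inference with Reversible Jump MCMC rather than the grid evaluation shown here; the noise level below is illustrative.

```python
import numpy as np

def changepoint_posterior(log_ratio, sigma=0.1):
    """Posterior over a single changepoint location (uniform prior) under a toy
    'flat before k, constant shift after k' Gaussian model of the log area ratio."""
    n = len(log_ratio)
    log_lik = np.full(n, -np.inf)
    for k in range(1, n - 1):
        before, after = log_ratio[:k], log_ratio[k:]
        resid = np.concatenate([before - 0.0, after - after.mean()])
        log_lik[k] = -0.5 * np.sum((resid / sigma) ** 2)
    post = np.exp(log_lik - log_lik.max())
    return post / post.sum()
```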

* 14 pages, 7 figures 

VoiceFilter: Targeted Voice Separation by Speaker-Conditioned Spectrogram Masking

Oct 27, 2018
Quan Wang, Hannah Muckenhirn, Kevin Wilson, Prashant Sridhar, Zelin Wu, John Hershey, Rif A. Saurous, Ron J. Weiss, Ye Jia, Ignacio Lopez Moreno

In this paper, we present a novel system that separates the voice of a target speaker from multi-speaker signals, by making use of a reference signal from the target speaker. We achieve this by training two separate neural networks: (1) A speaker recognition network that produces speaker-discriminative embeddings; (2) A spectrogram masking network that takes both noisy spectrogram and speaker embedding as input, and produces a mask. Our system significantly reduces the speech recognition WER on multi-speaker signals, with minimal WER degradation on single-speaker signals.
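
The separation step itself is a single masking operation; the sketch below assumes a trained `mask_net` (a placeholder for the spectrogram masking network) and magnitude spectrograms.

```python
def voicefilter_step(noisy_spectrogram, d_vector, mask_net):
    """Predict a soft mask in [0, 1] from the mixture spectrogram and the target
    speaker's d-vector, then keep the target speaker's energy by element-wise masking."""
    mask = mask_net(noisy_spectrogram, d_vector)   # shape: [time, freq], values in [0, 1]
    return mask * noisy_spectrogram
```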

* To be submitted to ICASSP 2019 

Unicorn: Continual Learning with a Universal, Off-policy Agent

Jul 03, 2018
Daniel J. Mankowitz, Augustin Žídek, André Barreto, Dan Horgan, Matteo Hessel, John Quan, Junhyuk Oh, Hado van Hasselt, David Silver, Tom Schaul

Some real-world domains are best characterized as a single task, but for others this perspective is limiting. Instead, some tasks continually grow in complexity, in tandem with the agent's competence. In continual learning, also referred to as lifelong learning, there are no explicit task boundaries or curricula. As learning agents have become more powerful, continual learning remains one of the frontiers that has resisted quick progress. To test continual learning capabilities we consider a challenging 3D domain with an implicit sequence of tasks and sparse rewards. We propose a novel agent architecture called Unicorn, which demonstrates strong continual learning and outperforms several baseline agents on the proposed domain. The agent achieves this by jointly representing and learning multiple policies efficiently, using a parallel off-policy learning setup.


Observe and Look Further: Achieving Consistent Performance on Atari

May 29, 2018
Tobias Pohlen, Bilal Piot, Todd Hester, Mohammad Gheshlaghi Azar, Dan Horgan, David Budden, Gabriel Barth-Maron, Hado van Hasselt, John Quan, Mel Večerík, Matteo Hessel, Rémi Munos, Olivier Pietquin

Despite significant advances in the field of deep Reinforcement Learning (RL), today's algorithms still fail to learn human-level policies consistently over a set of diverse tasks such as Atari 2600 games. We identify three key challenges that any algorithm needs to master in order to perform well on all games: processing diverse reward distributions, reasoning over long time horizons, and exploring efficiently. In this paper, we propose an algorithm that addresses each of these challenges and is able to learn human-level policies on nearly all Atari games. A new transformed Bellman operator allows our algorithm to process rewards of varying densities and scales; an auxiliary temporal consistency loss allows us to train stably using a discount factor of $\gamma = 0.999$ (instead of $\gamma = 0.99$), extending the effective planning horizon by an order of magnitude; and we ease the exploration problem by using human demonstrations that guide the agent towards rewarding states. When tested on a set of 42 Atari games, our algorithm exceeds the performance of an average human on 40 games using a common set of hyperparameters. Furthermore, it is the first deep RL algorithm to solve the first level of Montezuma's Revenge.
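
The transformed Bellman operator can be written down directly: with $h(z) = \operatorname{sign}(z)(\sqrt{|z|+1}-1) + \varepsilon z$ and $\varepsilon = 10^{-2}$, targets become $h(r + \gamma\, h^{-1}(\max_a Q(s',a)))$. The sketch below implements the transform and its closed-form inverse, and omits the double-DQN action selection and target network used in practice.

```python
import numpy as np

EPS = 1e-2  # small regulariser used with the transform

def h(z):
    """Value transform: reduces the scale of targets so one set of hyperparameters
    can be shared across games."""
    return np.sign(z) * (np.sqrt(np.abs(z) + 1.0) - 1.0) + EPS * z

def h_inv(z):
    """Closed-form inverse of h (obtained via the quadratic formula)."""
    return np.sign(z) * (((np.sqrt(1.0 + 4.0 * EPS * (np.abs(z) + 1.0 + EPS)) - 1.0)
                          / (2.0 * EPS)) ** 2 - 1.0)

def transformed_target(reward, gamma, q_next_max):
    """Transformed Bellman target: h(r + gamma * h^{-1}(max_a Q(s', a)))."""
    return h(reward + gamma * h_inv(q_next_max))
```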


Deep Q-learning from Demonstrations

Nov 22, 2017
Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys

Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator's actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD's performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.
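
The supervised component DQfD adds for demonstration transitions is a large-margin classification loss on the demonstrator's action, $J_E(Q) = \max_a [Q(s,a) + l(a_E, a)] - Q(s, a_E)$. The sketch below implements that term only (the margin value is illustrative); the full loss also combines 1-step and n-step TD terms and L2 regularisation.

```python
import numpy as np

def large_margin_loss(q_values, expert_action, margin=0.8):
    """Large-margin supervised loss on a demonstration transition: the demonstrated
    action must score at least `margin` higher than every other action."""
    penalties = np.full_like(q_values, margin, dtype=float)
    penalties[expert_action] = 0.0               # no penalty on the demonstrated action
    return np.max(q_values + penalties) - q_values[expert_action]
```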

* Published at AAAI 2018. Previously on arxiv as "Learning from Demonstrations for Real World Reinforcement Learning" 

Overcoming catastrophic forgetting in neural networks

Jan 25, 2017
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, Raia Hadsell

The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Neural networks are not, in general, capable of this, and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks which they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on the MNIST handwritten digit dataset and by learning several Atari 2600 games sequentially.
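
The mechanism for selectively slowing down learning is a quadratic penalty, weighted by each parameter's importance to previously learned tasks (estimated via the diagonal of the Fisher information), added to the new task's loss. A minimal sketch of that penalty, with an illustrative regularisation strength:

```python
import numpy as np

def ewc_penalty(params, params_old_task, fisher_diag, lam=1.0):
    """Elastic weight consolidation penalty:
    sum_i (lam / 2) * F_i * (theta_i - theta*_i)^2, added to the new task's loss."""
    return 0.5 * lam * np.sum(fisher_diag * (params - params_old_task) ** 2)
```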


StarCraft II: A New Challenge for Reinforcement Learning

Aug 16, 2017
Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing

This paper introduces SC2LE (StarCraft II Learning Environment), a reinforcement learning environment based on the StarCraft II game. This domain poses a new grand challenge for reinforcement learning, representing a more difficult class of problems than considered in most prior work. It is a multi-agent problem with multiple players interacting; there is imperfect information due to a partially observed map; it has a large action space involving the selection and control of hundreds of units; it has a large state space that must be observed solely from raw input feature planes; and it has delayed credit assignment requiring long-term strategies over thousands of steps. We describe the observation, action, and reward specification for the StarCraft II domain and provide an open source Python-based interface for communicating with the game engine. In addition to the main game maps, we provide a suite of mini-games focusing on different elements of StarCraft II gameplay. For the main game maps, we also provide an accompanying dataset of game replay data from human expert players. We give initial baseline results for neural networks trained from this data to predict game outcomes and player actions. Finally, we present initial baseline results for canonical deep reinforcement learning agents applied to the StarCraft II domain. On the mini-games, these agents learn to achieve a level of play that is comparable to a novice player. However, when trained on the main game, these agents are unable to make significant progress. Thus, SC2LE offers a new and challenging environment for exploring deep reinforcement learning algorithms and architectures.
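
The interaction pattern SC2LE supports is a standard observation-action loop over feature-plane observations. The open-source interface is the PySC2 package; the object and method names in this sketch (reset, step, reward, last, agent.act) are generic placeholders rather than its actual API.

```python
def run_episode(env, agent, max_steps=10_000):
    """Generic agent-environment loop of the kind SC2LE supports (placeholder interface)."""
    timestep = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = agent.act(timestep)      # choose among the available unit/ability actions
        timestep = env.step(action)       # advance the game, receive feature-plane observations
        total_reward += timestep.reward
        if timestep.last():
            break
    return total_reward
```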

* Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2 tables 

Rapid-Rate: A Framework for Semi-supervised Real-time Sentiment Trend Detection in Unstructured Big Data

Mar 25, 2017
Vineet John

Commercial establishments like restaurants, service centres and retailers have several sources of customer feedback about products and services, most of which need not be as structured, in terms of the sentiment conveyed, as the rated reviews provided by services like Yelp or Amazon. For instance, Amazon provides a fine-grained score on a numeric scale for product reviews. Some sources, however, like social media (Twitter, Facebook), mailing lists (Google Groups) and forums (Quora) contain text data that is much more voluminous, but unstructured and unlabelled. It might be in the best interests of a business establishment to assess the general sentiment towards their brand on these platforms as well. This text could be pipelined into a system with a built-in prediction model, with the objective of generating real-time graphs of opinion and sentiment trends. Although tasks like the one described above have been explored as document classification problems in the past, the implementation described in this paper, by virtue of learning a continuous function rather than a discrete one, offers much more depth of insight compared to document classification approaches. This study aims to explore the validity of such a continuous-function prediction model to quantify sentiment about an entity, without the additional overhead of manual labelling, computational preprocessing and feature extraction. This research project also aims to design and implement a re-usable document regression pipeline as a framework, Rapid-Rate, that can be used to predict document scores in real-time.
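
A minimal version of such a document-regression pipeline can be put together with off-the-shelf components: learn a continuous score from rated reviews, then score unlabelled text from other sources. The TF-IDF plus ridge-regression choice and the toy data below are illustrative, not Rapid-Rate's actual design.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Fit a continuous sentiment score from rated reviews, then score new, unlabelled text.
pipeline = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))

train_texts = [
    "great food and friendly staff",
    "terrible service, never coming back",
    "decent food but slow service",
]
train_scores = [4.5, 1.0, 3.0]          # e.g. star ratings on a continuous scale
pipeline.fit(train_texts, train_scores)
print(pipeline.predict(["friendly staff and great service"]))
```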

* 11 pages, 6 figures 

Modeling documents with Generative Adversarial Networks

Dec 29, 2016
John Glover

This paper describes a method for using Generative Adversarial Networks to learn distributed representations of natural language documents. We propose a model that is based on the recently proposed Energy-Based GAN, but instead uses a Denoising Autoencoder as the discriminator network. Document representations are extracted from the hidden layer of the discriminator and evaluated both quantitatively and qualitatively.
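
The discriminator side of the model can be sketched as a denoising autoencoder whose reconstruction error plays the role of an energy, with document representations read from the hidden layer. Layer sizes, the corruption scheme and the bag-of-words input dimension below are illustrative.

```python
import torch
import torch.nn as nn

class DAEDiscriminator(nn.Module):
    """Denoising-autoencoder discriminator: low reconstruction energy for real-looking
    documents; the hidden activation doubles as the document representation."""
    def __init__(self, input_dim=2000, hidden_dim=300):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, input_dim)

    def forward(self, x, noise_std=0.1):
        corrupted = x + noise_std * torch.randn_like(x)   # denoising corruption
        hidden = self.encoder(corrupted)                  # document representation
        recon = self.decoder(hidden)
        energy = ((recon - x) ** 2).mean(dim=1)           # reconstruction error as energy
        return energy, hidden
```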


Pignistic Probability Transforms for Mixes of Low- and High-Probability Events

May 27, 2015
John J. Sudano

In some real-world information fusion situations, time-critical decisions must be made with an incomplete information set. Belief function theories (e.g., Dempster-Shafer theory of evidence, Transferable Belief Model) have been shown to provide a reasonable methodology for processing or fusing the quantitative clues or information measurements that form the incomplete information set. For decision making, the pignistic (from the Latin pignus, a bet) probability transform has been shown to be a good method of using Beliefs or basic belief assignments (BBAs) to make decisions. For many systems, one need only address the most-probable elements in the set. For some critical systems, one must evaluate the risk of wrong decisions and establish safe probability thresholds for decision making. This adds greater complexity to decision making, since one must address all elements in the set that are above the risk decision threshold. The problem is greatly simplified if most of the probabilities fall below this threshold. Finding a probability transform that properly represents mixes of low- and high-probability events is essential. This article introduces four new pignistic probability transforms with an implementation that uses the latest values of Beliefs, Plausibilities, or BBAs to improve the pignistic probability estimates. Some of them assign smaller values of probabilities for smaller values of Beliefs or BBAs than the Smets pignistic transform. They also assign higher probability values for larger values of Beliefs or BBAs than the Smets pignistic transform. These probability transforms will assign a value of probability that converges faster to the values below the risk threshold. A probability information content (PIC) variable is also introduced that assigns an information content value to any set of probabilities. Four operators are defined to help simplify the derivations.
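
For context, the baseline the article compares against, the Smets pignistic transform, splits each focal element's mass equally among its singletons: BetP(x) = sum over sets A containing x of m(A)/|A|. The article's new transforms modify how Beliefs, Plausibilities or BBAs weight that split; only the standard baseline is sketched below.

```python
def smets_pignistic(bba):
    """Smets' pignistic transform: `bba` maps frozensets of hypotheses to basic belief
    assignment masses; each mass is divided equally among the set's elements."""
    betp = {}
    for focal, mass in bba.items():
        for x in focal:
            betp[x] = betp.get(x, 0.0) + mass / len(focal)
    return betp

# Example: m({a}) = 0.5, m({a, b}) = 0.3, m({a, b, c}) = 0.2
bba = {frozenset('a'): 0.5, frozenset('ab'): 0.3, frozenset('abc'): 0.2}
print(smets_pignistic(bba))  # {'a': ~0.717, 'b': ~0.217, 'c': ~0.067}
```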

* Fourth International Conference on Information Fusion, August 2001, Montreal. Pages TPUB3 23-27 
* 7 pages, International Society of Information Fusion Conference Proceedings Fusion 2001 at Montreal, Quebec, Canada 
