Models, code, and papers for "L":

Actor-Critic Algorithms for Learning Nash Equilibria in N-player General-Sum Games

Jul 02, 2015
H. L Prasad, L. A. Prashanth, Shalabh Bhatnagar

We consider the problem of finding stationary Nash equilibria (NE) in a finite discounted general-sum stochastic game. We first generalize a non-linear optimization problem from Filar and Vrieze [2004] to an $N$-player setting and break this problem down into simpler sub-problems, each of which ensures there is no Bellman error for a given state and agent. We then characterize the solution points of these sub-problems that correspond to Nash equilibria of the underlying game; for this purpose, we derive a set of necessary and sufficient SG-SP (Stochastic Game - Sub-Problem) conditions. Using these conditions, we develop two actor-critic algorithms: OFF-SGSP (model-based) and ON-SGSP (model-free). Both algorithms use a critic that estimates the value function for a fixed policy and an actor that performs descent in the policy space using a descent direction that avoids local minima. We establish that both algorithms converge, in self-play, to the equilibria of a certain ordinary differential equation (ODE), whose stable limit points coincide with stationary NE of the underlying general-sum stochastic game. On a single-state non-generic game (see Hart and Mas-Colell [2005]) as well as on a synthetic two-player game with $810,000$ states, we establish that ON-SGSP consistently outperforms the NashQ [Hu and Wellman, 2003] and FFQ [Littman, 2001] algorithms.
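The actor-critic template the abstract describes alternates a faster value-estimation step with a slower policy step. Below is a minimal single-agent sketch of that template on a randomly generated toy MDP; it illustrates only the two-timescale critic/actor structure, not the paper's SG-SP descent direction, and all names and constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 2, 0.9
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # transition kernel
R = rng.standard_normal((n_states, n_actions))                    # rewards

V = np.zeros(n_states)                    # critic: state-value estimates
theta = np.zeros((n_states, n_actions))   # actor: policy logits

def policy(s):
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

s = 0
for t in range(1, 50_000):
    pi = policy(s)
    a = rng.choice(n_actions, p=pi)
    s_next = rng.choice(n_states, p=P[s, a])
    delta = R[s, a] + gamma * V[s_next] - V[s]   # TD error
    V[s] += t ** -0.6 * delta                    # critic step: faster timescale
    grad_log_pi = -pi
    grad_log_pi[a] += 1.0                        # d log pi(a|s) / d logits
    theta[s] += t ** -0.9 * delta * grad_log_pi  # actor step: slower timescale
    s = s_next
```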


Matrix Games, Linear Programming, and Linear Approximation

Sep 12, 2006
L. N. Vaserstein

The following four classes of computational problems are equivalent: solving matrix games, solving linear programs, best $l^{\infty}$ linear approximation, and best $l^1$ linear approximation.

* 5 pages 
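The game-to-LP direction of this equivalence is the classical construction: the value and an optimal mixed strategy of a zero-sum matrix game are the solution of a small linear program. A sketch of that standard construction with scipy (the function name is illustrative; this is not code from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(A):
    """Value and an optimal mixed strategy for the row player (maximizer)
    of the zero-sum game with payoff matrix A."""
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # maximize v <=> minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])      # v <= x^T A e_j for every column j
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0                              # x is a probability vector
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]      # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

x, v = solve_matrix_game(np.array([[1.0, -1.0], [-1.0, 1.0]]))  # matching pennies
print(x, v)   # uniform strategy, value 0
```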

ControlIt! - A Software Framework for Whole-Body Operational Space Control

Jun 02, 2015
C. -L. Fok, G. Johnson, J. D. Yamokoski, A. Mok, L. Sentis

Whole Body Operational Space Control (WBOSC) is a pioneering algorithm in the field of human-centered Whole-Body Control (WBC). It enables floating-base, highly redundant robots to achieve unified motion/force control of one or more operational space objectives while adhering to physical constraints. Limited studies exist on the software architecture and APIs that enable WBOSC to perform and be integrated into a larger system. In this paper we address this by presenting ControlIt!, a new open-source software framework for WBOSC. Unlike previous implementations, ControlIt! is multi-threaded to increase servo frequencies on standard PC hardware. A new parameter binding mechanism enables tight integration between ControlIt! and external processes via an extensible set of transport protocols. To support a new robot, only two plugins and a URDF model need to be provided; the rest of ControlIt! remains unchanged. New WBC primitives can be added by writing a Task or Constraint plugin. ControlIt!'s capabilities are demonstrated on Dreamer, a 16-DOF torque-controlled humanoid upper-body robot containing both series-elastic and co-actuated joints, which is used to perform a product disassembly task. Using this testbed, we show that ControlIt! achieves average servo latencies of about 0.5 ms when configured with two Cartesian position tasks, two orientation tasks, and a lower-priority posture task. This is a significant improvement over the 5 ms latency achieved by UTA-WBC, the prototype implementation of WBOSC that is both application- and platform-specific. Variations in the product's position are handled by updating the goal of the Cartesian position task. ControlIt!'s source code is released under an LGPL license and we hope it will be adopted and maintained by the WBC community for the long term as a platform for WBC development and integration.


Landmark Diffusion Maps (L-dMaps): Accelerated manifold learning out-of-sample extension

Jun 28, 2017
Andrew W. Long, Andrew L. Ferguson

Diffusion maps are a nonlinear manifold learning technique based on harmonic analysis of a diffusion process over the data. Out-of-sample extensions with computational complexity $\mathcal{O}(N)$, where $N$ is the number of points comprising the manifold, frustrate online learning applications that require rapid embedding of high-dimensional data streams. We propose landmark diffusion maps (L-dMaps) to reduce the complexity to $\mathcal{O}(M)$, where $M \ll N$ is the number of landmark points selected using pruned spanning trees or k-medoids. Offering $(N/M)$ speedups in out-of-sample extension, L-dMaps enables the application of diffusion maps to high-volume and/or high-velocity streaming data. We illustrate our approach on three datasets: the Swiss roll, molecular simulations of a C$_{24}$H$_{50}$ polymer chain, and biomolecular simulations of alanine dipeptide. We demonstrate up to 50-fold speedups in out-of-sample extension for the molecular systems, with less than 4% error in manifold reconstruction fidelity relative to calculations over the full dataset.

* Submitted 
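The core of the speedup is a Nyström-style out-of-sample extension evaluated against the $M$ landmarks rather than all $N$ points. A minimal sketch of that idea, assuming a Gaussian kernel and a precomputed landmark embedding (all names are placeholders, not the paper's code):

```python
import numpy as np

def gaussian_kernel(X, Y, eps):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / eps)

def out_of_sample(x_new, landmarks, Psi, lam, eps):
    """Embed one new point in O(M): Psi is the (M x k) diffusion-map embedding
    of the landmarks and lam the corresponding k eigenvalues."""
    k_row = gaussian_kernel(x_new[None, :], landmarks, eps).ravel()
    k_row /= k_row.sum()            # row-normalize, as in the diffusion operator
    return (k_row @ Psi) / lam      # Nystrom projection onto the eigenvectors
```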

Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks

Oct 16, 2017
Peter L. Bartlett, Nick Harvey, Chris Liaw, Abbas Mehrabian

We prove new upper and lower bounds on the VC-dimension of deep neural networks with the ReLU activation function. These bounds are tight for almost the entire range of parameters. Letting $W$ be the number of weights and $L$ be the number of layers, we prove that the VC-dimension is $O(W L \log(W))$, and provide examples with VC-dimension $\Omega( W L \log(W/L) )$. This improves on both the previously known upper and lower bounds. In terms of the number $U$ of non-linear units, we prove a tight bound $\Theta(W U)$ on the VC-dimension. All of these bounds generalize to arbitrary piecewise linear activation functions, and also hold for the pseudodimensions of these function classes. Combined with previous results, this gives an intriguing range of dependencies of the VC-dimension on depth for networks with different non-linearities: there is no dependence for piecewise-constant activations, linear dependence for piecewise-linear ones, and no more than quadratic dependence for general piecewise-polynomial ones.

* Extended abstract appeared in COLT 2017; the upper bound was presented at the 2016 ACM Conference on Data Science. This version includes all the proofs and a refinement of the upper bound, Theorem 6. 16 pages, 2 figures 

Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks

Jun 18, 2018
Peter L. Bartlett, David P. Helmbold, Philip M. Long

We analyze algorithms for approximating a function $f(x) = \Phi x$ mapping $\Re^d$ to $\Re^d$ using deep linear neural networks, i.e., networks that learn a function $h$ parameterized by matrices $\Theta_1,...,\Theta_L$ and defined by $h(x) = \Theta_L \Theta_{L-1} ... \Theta_1 x$. We focus on algorithms that learn through gradient descent on the population quadratic loss in the case that the distribution over the inputs is isotropic. We provide polynomial bounds on the number of iterations for gradient descent to approximate the least squares matrix $\Phi$, in the case where the initial hypothesis $\Theta_1 = ... = \Theta_L = I$ has excess loss bounded by a small enough constant. On the other hand, we show that gradient descent fails to converge for $\Phi$ whose distance from the identity is a larger constant, and we show that some forms of regularization toward the identity in each layer do not help. If $\Phi$ is symmetric positive definite, we show that an algorithm that initializes $\Theta_i = I$ learns an $\epsilon$-approximation of $f$ using a number of updates polynomial in $L$, the condition number of $\Phi$, and $\log(d/\epsilon)$. In contrast, we show that if the least squares matrix $\Phi$ is symmetric and has a negative eigenvalue, then all members of a class of algorithms that perform gradient descent with identity initialization, and optionally regularize toward the identity in each layer, fail to converge. We analyze an algorithm for the case that $\Phi$ satisfies $u^{\top} \Phi u > 0$ for all $u$, but may not be symmetric. This algorithm uses two regularizers: one that maintains the invariant $u^{\top} \Theta_L \Theta_{L-1} ... \Theta_1 u > 0$ for all $u$, and another that "balances" $\Theta_1, ..., \Theta_L$ so that they have the same singular values.
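A minimal sketch of the setting: gradient descent on the population loss $\frac{1}{2}\|\Theta_L \cdots \Theta_1 - \Phi\|_F^2$ (the isotropic-input case) from identity initialization, with $\Phi$ symmetric positive definite and close to the identity. Dimensions, step size, and the target are illustrative choices, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
d, L, lr, steps = 4, 3, 0.05, 2000

B = rng.standard_normal((d, d)) * 0.1
Phi = np.eye(d) + 0.5 * (B + B.T)       # SPD target close to the identity

Theta = [np.eye(d) for _ in range(L)]   # identity initialization

def product(mats):
    """Theta_L ... Theta_1 for mats = [Theta_1, ..., Theta_L]."""
    out = np.eye(d)
    for M in mats:
        out = M @ out
    return out

for _ in range(steps):
    E = product(Theta) - Phi            # residual of the population loss
    grads = [product(Theta[i + 1:]).T @ E @ product(Theta[:i]).T
             for i in range(L)]         # gradient of 0.5 * ||Theta_L...Theta_1 - Phi||_F^2
    for i in range(L):
        Theta[i] -= lr * grads[i]

print(np.linalg.norm(product(Theta) - Phi))   # should be near zero
```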


Heavy hitters via cluster-preserving clustering

Apr 05, 2016
Kasper Green Larsen, Jelani Nelson, Huy L. Nguyen, Mikkel Thorup

In turnstile $\ell_p$ $\varepsilon$-heavy hitters, one maintains a high-dimensional $x\in\mathbb{R}^n$ subject to $\texttt{update}(i,\Delta)$ causing $x_i\leftarrow x_i + \Delta$, where $i\in[n]$, $\Delta\in\mathbb{R}$. Upon receiving a query, the goal is to report a small list $L\subset[n]$, $|L| = O(1/\varepsilon^p)$, containing every "heavy hitter" $i\in[n]$ with $|x_i| \ge \varepsilon \|x_{\overline{1/\varepsilon^p}}\|_p$, where $x_{\overline{k}}$ denotes the vector obtained by zeroing out the largest $k$ entries of $x$ in magnitude. For any $p\in(0,2]$ the CountSketch solves $\ell_p$ heavy hitters using $O(\varepsilon^{-p}\log n)$ words of space with $O(\log n)$ update time, $O(n\log n)$ query time to output $L$, and whose output after any query is correct with high probability (whp) $1 - 1/poly(n)$. Unfortunately the query time is very slow. To remedy this, the work [CM05] proposed for $p=1$ in the strict turnstile model, a whp correct algorithm achieving suboptimal space $O(\varepsilon^{-1}\log^2 n)$, worse update time $O(\log^2 n)$, but much better query time $O(\varepsilon^{-1}poly(\log n))$. We show this tradeoff between space and update time versus query time is unnecessary. We provide a new algorithm, ExpanderSketch, which in the most general turnstile model achieves optimal $O(\varepsilon^{-p}\log n)$ space, $O(\log n)$ update time, and fast $O(\varepsilon^{-p}poly(\log n))$ query time, and whp correctness. Our main innovation is an efficient reduction from the heavy hitters to a clustering problem in which each heavy hitter is encoded as some form of noisy spectral cluster in a much bigger graph, and the goal is to identify every cluster. Since every heavy hitter must be found, correctness requires that every cluster be found. We then develop a "cluster-preserving clustering" algorithm, partitioning the graph into clusters without destroying any original cluster.
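For reference, the CountSketch baseline the abstract starts from fits in a few lines. The sketch below supports turnstile updates and point queries via a median of signed counters; it is the classical estimator, not ExpanderSketch, and the hashing scheme is one standard choice:

```python
import numpy as np

class CountSketch:
    """Minimal CountSketch: turnstile updates, point queries by median."""
    P = (1 << 61) - 1   # Mersenne prime for 2-wise independent hashing

    def __init__(self, rows=5, cols=256, seed=0):
        rng = np.random.default_rng(seed)
        self.cols = cols
        self.C = np.zeros((rows, cols))
        self.coef = rng.integers(1, self.P, size=(rows, 4))  # hash coefficients

    def _bucket(self, r, i):
        a, b, c, d = (int(v) for v in self.coef[r])
        return (a * i + b) % self.P % self.cols, 1 - 2 * ((c * i + d) % self.P % 2)

    def update(self, i, delta):          # x_i <- x_i + delta
        for r in range(len(self.C)):
            j, sign = self._bucket(r, i)
            self.C[r, j] += sign * delta

    def estimate(self, i):               # unbiased estimate of x_i
        vals = []
        for r in range(len(self.C)):
            j, sign = self._bucket(r, i)
            vals.append(sign * self.C[r, j])
        return float(np.median(vals))
```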


Dimensionality Reduction for Tukey Regression

May 14, 2019
Kenneth L. Clarkson, Ruosong Wang, David P. Woodruff

We give the first dimensionality reduction methods for the overconstrained Tukey regression problem. The Tukey loss function $\|y\|_M = \sum_i M(y_i)$ has $M(y_i) \approx |y_i|^p$ for residual errors $y_i$ smaller than a prescribed threshold $\tau$, but $M(y_i)$ becomes constant for errors $|y_i| > \tau$. Our results depend on a new structural result, proven constructively, showing that for any $d$-dimensional subspace $L \subset \mathbb{R}^n$, there is a fixed bounded-size subset of coordinates containing, for every $y \in L$, all the large coordinates, with respect to the Tukey loss function, of $y$. Our methods reduce a given Tukey regression problem to a smaller weighted version, whose solution is a provably good approximate solution to the original problem. Our reductions are fast, simple and easy to implement, and we give empirical results demonstrating their practicality, using existing heuristic solvers for the small versions. We also give exponential-time algorithms giving provably good solutions, and hardness results suggesting that a significant speedup in the worst case is unlikely.

* To appear in ICML 2019 
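A concrete instance of the loss described above, with power-$p$ growth below the threshold $\tau$ and a constant value beyond it (the smooth Tukey biweight is a common variant; this hard-threshold form matches the abstract's description):

```python
import numpy as np

def tukey_loss(y, tau=1.0, p=2):
    """||y||_M = sum_i M(y_i): M grows like |y_i|^p below tau, constant above."""
    y = np.abs(np.asarray(y, dtype=float))
    return float(np.where(y <= tau, y ** p, tau ** p).sum())

# Large residuals are capped, so outliers stop influencing the fit:
print(tukey_loss([0.1, 0.5, 100.0], tau=1.0))  # 0.01 + 0.25 + 1.0 = 1.26
```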

Weighted bandits or: How bandits learn distorted values that are not expected

Nov 30, 2016
Aditya Gopalan, L. A. Prashanth, Michael Fu, Steve Marcus

Motivated by models of human decision making proposed to explain commonly observed deviations from conventional expected value preferences, we formulate two stochastic multi-armed bandit problems with distorted probabilities on the cost distributions: the classic $K$-armed bandit and the linearly parameterized bandit. In both settings, we propose algorithms that are inspired by Upper Confidence Bound (UCB), incorporate cost distortions, and exhibit sublinear regret assuming H\"{o}lder-continuous weight-distortion functions. For the $K$-armed setting, we show that the algorithm, called W-UCB, achieves a problem-dependent regret of $O(L^2 M^2 \log n/ \Delta^{\frac{2}{\alpha}-1})$ and a problem-independent regret bound of $O((KL^2M^2)^{\alpha/2}n^{(2-\alpha)/2})$, where $n$ is the number of plays, $\Delta$ is the gap in distorted expected value between the best and next-best arm, $L$ and $\alpha$ are the H\"{o}lder constants of the distortion function, and $M$ is an upper bound on costs. We also present a matching lower bound on the regret, showing that the regret of W-UCB is essentially unimprovable over the class of H\"{o}lder-continuous weight distortions. For the linearly parameterized setting, we develop a new algorithm, a variant of the Optimism in the Face of Uncertainty Linear bandit (OFUL) algorithm called WOFUL (Weight-distorted OFUL), and show that it has regret $O(d\sqrt{n} \; \mbox{polylog}(n))$ with high probability for sub-Gaussian cost distributions. Finally, numerical examples demonstrate the advantages resulting from using distortion-aware learning algorithms.

* Longer version of the paper to be published as part of the proceedings of AAAI 2017 
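The estimator at the heart of this approach replaces the empirical mean with a distorted expectation, computable from order statistics weighted by increments of the distortion function. Below is a simplified sketch: a Choquet-style estimator inside a generic UCB loop over rewards. The confidence width is a stand-in, not the paper's Hölder-dependent bonus, and the paper's cost convention differs:

```python
import numpy as np

def distorted_mean(samples, w):
    """Distorted expectation of the empirical distribution: order statistics
    weighted by increments of the distortion w (w(0)=0, w(1)=1, nondecreasing)."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    weights = w(np.arange(1, n + 1) / n) - w(np.arange(n) / n)
    return float(weights @ x)

def w_ucb(pull, w, horizon, M=1.0, seed=0):
    """UCB loop on distorted value estimates. `pull` is a list of sampling
    functions, one per arm; rewards are assumed to lie in [0, M]."""
    rng = np.random.default_rng(seed)
    history = [[p(rng)] for p in pull]              # play every arm once
    for t in range(len(pull), horizon):
        ucb = [distorted_mean(h, w) + M * np.sqrt(2 * np.log(t) / len(h))
               for h in history]
        a = int(np.argmax(ucb))
        history[a].append(pull[a](rng))
    return history

# Example distortion (Prelec-style) and two Bernoulli arms.
w = lambda p: np.exp(-(-np.log(np.maximum(p, 1e-12))) ** 0.65)
arms = [lambda r: float(r.random() < 0.4), lambda r: float(r.random() < 0.6)]
history = w_ucb(arms, w, horizon=5000)
```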

The Good Old Davis-Putnam Procedure Helps Counting Models

Jun 01, 2011
E. Birnbaum, E. L. Lozinskii

As was shown recently, many important AI problems require counting the number of models of propositional formulas. According to present knowledge, this problem is computationally intractable in the worst case. Based on the Davis-Putnam procedure, we present an algorithm, CDP, that computes the exact number of models of a propositional CNF or DNF formula F. Let m and n be the number of clauses and variables of F, respectively, and let p denote the probability that a literal l of F occurs in a clause C of F; then the average running time of CDP is shown to be O(nm^d), where d = -1/log(1-p). The practical performance of CDP has been estimated in a series of experiments on a wide variety of CNF formulas.

* Journal Of Artificial Intelligence Research, Volume 10, pages 457-477, 1999 
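The counting idea is the splitting rule of Davis-Putnam: branch on a variable, and when every clause is satisfied, credit $2^{\text{free variables}}$ models at once. A bare-bones sketch of that rule, without unit propagation or the other refinements CDP adds:

```python
def count_models(clauses, n_vars, assignment=None):
    """Count models of a CNF given as DIMACS-style clauses (v means x_v,
    -v means not x_v). Davis-Putnam splitting only; CDP adds refinements."""
    if assignment is None:
        assignment = {}
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                                  # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return 0                                  # clause falsified
        simplified.append(rest)
    if not simplified:
        return 2 ** (n_vars - len(assignment))        # all remaining vars are free
    v = abs(simplified[0][0])                         # split on a variable
    return sum(count_models(simplified, n_vars, {**assignment, v: val})
               for val in (True, False))

print(count_models([[1, 2], [-1, 3]], 3))   # (x1 or x2) and (not x1 or x3) -> 4
```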

Proceedings of the third "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'16)

Sep 14, 2016
V. Abrol, O. Absil, P. -A. Absil, S. Anthoine, P. Antoine, T. Arildsen, N. Bertin, F. Bleichrodt, J. Bobin, A. Bol, A. Bonnefoy, F. Caltagirone, V. Cambareri, C. Chenot, V. Crnojević, M. Daňková, K. Degraux, J. Eisert, J. M. Fadili, M. Gabrié, N. Gac, D. Giacobello, A. Gonzalez, C. A. Gomez Gonzalez, A. González, P. -Y. Gousenbourger, M. Græsbøll Christensen, R. Gribonval, S. Guérit, S. Huang, P. Irofti, L. Jacques, U. S. Kamilov, S. Kitić, M. Kliesch, F. Krzakala, J. A. Lee, W. Liao, T. Lindstrøm Jensen, A. Manoel, H. Mansour, A. Mohammad-Djafari, A. Moshtaghpour, F. Ngolè, B. Pairet, M. Panić, G. Peyré, A. Pižurica, P. Rajmic, M. Roblin, I. Roth, A. K. Sao, P. Sharma, J. -L. Starck, E. W. Tramel, T. van Waterschoot, D. Vukobratovic, L. Wang, B. Wirth, G. Wunder, H. Zhang

The third edition of the "international - Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) took place in Aalborg, the 4th largest city in Denmark, situated beautifully in the northern part of the country, from the 24th to the 26th of August 2016. The workshop venue was the Aalborg University campus. One implicit objective of this biennial workshop is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For this third edition, iTWIST'16 gathered about 50 international participants and featured 8 invited talks, 12 oral presentations, and 12 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing (e.g., optics, computer vision, genomics, biomedical, digital communication, channel estimation, astronomy); Application of sparse models in non-convex/non-linear inverse problems (e.g., phase retrieval, blind deconvolution, self calibration); Approximate probabilistic inference for sparse problems; Sparse machine learning and inference; "Blind" inverse problems and dictionary learning; Optimization for sparse modelling; Information theory, geometry and randomness; Sparsity? What's next? (Discrete-valued signals; Union of low-dimensional spaces, Cosparsity, mixed/group norm, model-based, low-complexity models, ...); Matrix/manifold sensing/processing (graph, low-rank approximation, ...); Complexity/accuracy tradeoffs in numerical methods/optimization; Electronic/optical compressive sensors (hardware).

* 69 pages, 22 extended abstracts, iTWIST'16 website: http://www.itwist16.es.aau.dk 

Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

Oct 09, 2014
L. Jacques, C. De Vleeschouwer, Y. Boursier, P. Sudhakar, C. De Mol, A. Pizurica, S. Anthoine, P. Vandergheynst, P. Frossard, C. Bilen, S. Kitic, N. Bertin, R. Gribonval, N. Boumal, B. Mishra, P. -A. Absil, R. Sepulchre, S. Bundervoet, C. Schretter, A. Dooms, P. Schelkens, O. Chabiron, F. Malgouyres, J. -Y. Tourneret, N. Dobigeon, P. Chainais, C. Richard, B. Cornelis, I. Daubechies, D. Dunson, M. Dankova, P. Rajmic, K. Degraux, V. Cambareri, B. Geelen, G. Lafruit, G. Setti, J. -F. Determe, J. Louveaux, F. Horlin, A. Drémeau, P. Heas, C. Herzet, V. Duval, G. Peyré, A. Fawzi, M. Davies, N. Gillis, S. A. Vavasis, C. Soussen, L. Le Magoarou, J. Liang, J. Fadili, A. Liutkus, D. Martina, S. Gigan, L. Daudet, M. Maggioni, S. Minsker, N. Strawn, C. Mory, F. Ngole, J. -L. Starck, I. Loris, S. Vaiter, M. Golbabaee, D. Vukobratovic

The implicit objective of the biennial "international - Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th to Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference.

* 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist14 

Machine Vision System for 3D Plant Phenotyping

Apr 28, 2017
Ayan Chaudhury, Christopher Ward, Ali Talasaz, Alexander G. Ivanov, Mark Brophy, Bernard Grodzinski, Norman P. A. Huner, Rajni V. Patel, John L. Barron

Machine vision for plant phenotyping is an emerging research area for producing high throughput in agriculture and crop science applications. Since 2D based approaches have their inherent limitations, 3D plant analysis is becoming state of the art for current phenotyping technologies. We present an automated system for analyzing plant growth in indoor conditions. A gantry robot system is used to perform scanning tasks in an automated manner throughout the lifetime of the plant. A 3D laser scanner mounted as the robot's payload captures the surface point cloud data of the plant from multiple views. The plant is monitored from the vegetative to reproductive stages in light/dark cycles inside a controllable growth chamber. An efficient 3D reconstruction algorithm is used, by which multiple scans are aligned together to obtain a 3D mesh of the plant, followed by surface area and volume computations. The whole system, including the programmable growth chamber, robot, scanner, data transfer and analysis, is fully automated in such a way that a naive user can, in theory, start the system with a mouse click and get back the growth analysis results at the end of the lifetime of the plant with no intermediate intervention. As evidence of its functionality, we show and analyze quantitative results of the rhythmic growth patterns of the dicot Arabidopsis thaliana (L.) and the monocot barley (Hordeum vulgare L.) plants under their diurnal light/dark cycles.


Audio Spectrogram Representations for Processing with Convolutional Neural Networks

Jun 29, 2017
L. Wyse

One of the decisions that arise when designing a neural network for any application is how the data should be represented in order to be presented to, and possibly generated by, a neural network. For audio, the choice is less obvious than it seems to be for visual images, and a variety of representations have been used for different applications, including the raw digitized sample stream, hand-crafted features, machine-discovered features, MFCCs and variants that include deltas, and a variety of spectral representations. This paper reviews some of these representations and the issues that arise, focusing particularly on spectrograms for generating audio using neural networks for style transfer.

* Proceedings of the First International Workshop on Deep Learning and Music joint with IJCNN. Anchorage, US. 1(1). pp 37-41 (2017) 
* Proceedings of the First International Conference on Deep Learning and Music, Anchorage, US, May, 2017 (arXiv:1706.08675v1 [cs.NE]) 
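Of the representations surveyed, the log-magnitude spectrogram is the workhorse input for convolutional networks. A minimal sketch of computing one with numpy (frame length, hop size, and the test tone are arbitrary choices, not values from the paper):

```python
import numpy as np

def spectrogram(x, n_fft=1024, hop=256):
    """Log-magnitude STFT spectrogram, shape (frames, n_fft // 2 + 1)."""
    window = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * window
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1)))

t = np.arange(16000) / 16000.0                     # one second at 16 kHz
S = spectrogram(np.sin(2 * np.pi * 440 * t))       # energy near the 440 Hz bin
```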

Describing Human Aesthetic Perception by Deeply-learned Attributes from Flickr

May 25, 2016
L. Zhang

Many aesthetic models in computer vision suffer from two shortcomings: 1) the low descriptiveness and interpretability of hand-crafted aesthetic criteria (i.e., not indicative of region-level aesthetics), and 2) the difficulty of engineering aesthetic features adaptively and automatically for different image sets. To remedy these problems, we develop a deep architecture to learn aesthetically-relevant visual attributes from Flickr, which are localized by multiple textual attributes in a weakly-supervised setting. More specifically, using a bag-of-words (BoW) representation of the frequent Flickr image tags, a sparsity-constrained subspace algorithm discovers a compact set of textual attributes (e.g., landscape and sunset) for each image. Then, a weakly-supervised learning algorithm projects the textual attributes at image level to the highly-responsive image patches at pixel level. These patches indicate where humans look at appealing regions with respect to each textual attribute, and are employed to learn the visual attributes. Psychological and anatomical studies have shown that humans perceive visual concepts hierarchically. Hence, we normalize these patches and feed them into a five-layer convolutional neural network (CNN) to mimic the hierarchy of human perception of the visual attributes. We apply the learned deep features to image retargeting, aesthetics ranking, and retrieval. Both subjective and objective experimental results thoroughly demonstrate the competitiveness of our approach.


Policy Gradients for CVaR-Constrained MDPs

May 12, 2014
Prashanth L. A.

We study a risk-constrained version of the stochastic shortest path (SSP) problem, where the risk measure considered is Conditional Value-at-Risk (CVaR). We propose two algorithms that obtain a locally risk-optimal policy by employing four tools: stochastic approximation, mini-batches, policy gradients and importance sampling. Both algorithms incorporate a CVaR estimation procedure along the lines of Bardou et al. [2009], which in turn is based on Rockafellar and Uryasev's representation of CVaR, and use the likelihood ratio principle to estimate the gradient of the sum of one cost function (the objective of the SSP) and the gradient of the CVaR of the sum of another cost function (in the constraint of the SSP). The algorithms differ in the manner in which they approximate the CVaR estimates/necessary gradients: the first uses stochastic approximation, while the second employs mini-batches in the spirit of Monte Carlo methods. We establish asymptotic convergence of both algorithms. Further, since estimating CVaR is related to rare-event simulation, we incorporate an importance sampling based variance reduction scheme into our proposed algorithms.
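The Rockafellar-Uryasev representation that both algorithms build on reads $\text{CVaR}_\alpha(X) = \min_\xi \{\xi + \mathbb{E}[(X-\xi)^+]/(1-\alpha)\}$, with the minimizer $\xi^*$ being the $\alpha$-quantile (VaR). A sample-based sketch of that estimator (a standard construction, not the paper's full algorithm):

```python
import numpy as np

def cvar_ru(samples, alpha=0.95):
    """CVaR via the Rockafellar-Uryasev representation:
    CVaR_alpha(X) = min_xi  xi + E[(X - xi)^+] / (1 - alpha),
    with the empirical alpha-quantile (VaR) as the minimizer."""
    x = np.asarray(samples, dtype=float)
    var = np.quantile(x, alpha)                       # xi* = VaR_alpha
    return var + np.maximum(x - var, 0.0).mean() / (1.0 - alpha)

costs = np.random.default_rng(0).standard_normal(100_000)
print(cvar_ru(costs))   # approx. 2.06 for a standard normal at alpha = 0.95
```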


Sharp Convergence Rates for Langevin Dynamics in the Nonconvex Setting

Sep 07, 2018
Xiang Cheng, Niladri S. Chatterji, Yasin Abbasi-Yadkori, Peter L. Bartlett, Michael I. Jordan

We study the problem of sampling from a distribution where the negative logarithm of the target density is $L$-smooth everywhere and $m$-strongly convex outside a ball of radius $R$, but potentially non-convex inside this ball. We study both overdamped and underdamped Langevin MCMC and prove upper bounds on the time required to obtain a sample from a distribution that is within $\epsilon$ of the target distribution in $1$-Wasserstein distance. For the first-order method (overdamped Langevin MCMC), the time complexity is $\tilde{\mathcal{O}}\left(e^{cLR^2}\frac{d}{\epsilon^2}\right)$, where $d$ is the dimension of the underlying space. For the second-order method (underdamped Langevin MCMC), the time complexity is $\tilde{\mathcal{O}}\left(e^{cLR^2}\frac{\sqrt{d}}{\epsilon}\right)$ for some explicit positive constant $c$. Surprisingly, the convergence rate is only polynomial in the dimension $d$ and the target accuracy $\epsilon$. It is, however, exponential in the problem parameter $LR^2$, which is a measure of non-logconcavity of the target distribution.

* 37 Pages 
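A minimal sketch of the first-order method analyzed above, unadjusted overdamped Langevin dynamics, run on a double-well potential that is strongly convex outside a ball and non-convex inside it (step size and horizon are arbitrary illustrative choices):

```python
import numpy as np

def overdamped_langevin(grad_U, x0, step, n_steps, rng=np.random.default_rng(0)):
    """Unadjusted Langevin: x <- x - step * grad U(x) + sqrt(2 * step) * N(0, I).
    Samples approximately from the density proportional to exp(-U)."""
    x = np.array(x0, dtype=float)
    traj = np.empty((n_steps, x.size))
    for t in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.standard_normal(x.size)
        traj[t] = x
    return traj

# Double-well potential U(x) = (x^2 - 1)^2: non-convex near the origin,
# strongly convex far from it, the regime studied above.
grad_U = lambda x: 4 * x * (x ** 2 - 1)
samples = overdamped_langevin(grad_U, x0=[0.0], step=1e-2, n_steps=50_000)
```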

Digital filters with vanishing moments for shape analysis

Dec 26, 2019
Hugh L. Kennedy

Shape- and scale-selective digital filters with steerable finite/infinite impulse responses (FIR/IIR) and non-recursive/recursive realizations, separable in both spatial dimensions and adequately isotropic, are derived. The filters are conveniently designed in the frequency domain via derivative constraints at dc, which guarantees orthogonality and monomial selectivity in the pixel domain (i.e. vanishing moments), unlike the more commonly used FIR filters derived from Gaussian functions. A two-stage low-pass/high-pass architecture, for blur/derivative operations, is recommended. Expressions for the coefficients of a low-order IIR blur filter with repeated poles are provided as a function of scale; discrete Butterworth (IIR) and colored Savitzky-Golay (FIR) blurs are also examined. Parallel software implementations on central processing units (CPUs) and graphics processing units (GPUs), for scale-selective blob detection in aerial surveillance imagery, are analyzed. It is shown that recursive IIR filters are significantly faster than non-recursive FIR filters when detecting large objects at coarse scales, i.e. using filters with long impulse responses; however, the margin of outperformance decreases as the degree of parallelization increases.

* H. L. Kennedy, Optimal digital design of steerable differentiators with the flatness of polynomial filters and the isotropy of Gaussian filters, SPIE Journal of Electronic Imaging, vol. 27, no. 5, 051219, May 2018 
* Revised Appendix (An introduction to recursive filtering for image processing). Added 2 worked examples 
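The speed argument for recursive realizations can be seen in a toy separable blur: a first-order IIR low-pass costs the same per pixel at any scale, whereas an FIR blur's cost grows with kernel length. A sketch using scipy (a generic repeated-pole blur, not the paper's constrained frequency-domain design):

```python
import numpy as np
from scipy.signal import lfilter

def iir_blur(image, pole=0.8, passes=2):
    """Separable recursive blur: a first-order low-pass H(z) = (1-p)/(1 - p z^-1),
    run forward then backward along each axis (zero phase), `passes` times for a
    repeated pole. Per-pixel cost is independent of the blur scale, unlike FIR."""
    b, a = [1.0 - pole], [1.0, -pole]
    out = np.asarray(image, dtype=float)
    for axis in (0, 1):
        for _ in range(passes):
            out = lfilter(b, a, out, axis=axis)                             # causal pass
            out = np.flip(lfilter(b, a, np.flip(out, axis), axis=axis), axis)  # anti-causal
    return out

blurred = iir_blur(np.random.default_rng(0).random((64, 64)), pole=0.9)
```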

srlearn: A Python Library for Gradient-Boosted Statistical Relational Models

Dec 17, 2019
Alexander L. Hayes

We present srlearn, a Python library for boosted statistical relational models. We adapt the scikit-learn interface to this setting and provide examples of how it can be used to express learning and inference problems.

* Ninth International Workshop on Statistical Relational AI (StarAI 2020). Software online at https://github.com/hayesall/srlearn 
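A usage sketch in the scikit-learn style the library adopts. The class and dataset names below follow the project README at the time of writing and are assumptions; verify them against the installed version:

```python
# Assumed API, per the srlearn README; names may differ between versions.
from srlearn.rdn import BoostedRDN
from srlearn import Background
from srlearn.datasets import load_toy_cancer

toy = load_toy_cancer()                      # bundled relational toy dataset
bk = Background(modes=toy.train.modes)       # language bias / mode declarations
clf = BoostedRDN(background=bk, target="cancer")
clf.fit(toy.train)                           # learn boosted relational trees
print(clf.predict_proba(toy.test))           # scikit-learn-style inference
```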
