A Century of Portraits: A Visual Historical Record of American High School Yearbooks

Nov 09, 2015

Shiry Ginosar, Kate Rakelly, Sarah Sachs, Brian Yin, Alexei A. Efros

* ICCV 2015 Extreme Imaging Workshop

Applying Distributional Compositional Categorical Models of Meaning to Language Translation

Nov 08, 2018

Brian Tyrrell

* EPTCS 283, 2018, pp. 28-49

* In Proceedings CAPNS 2018, arXiv:1811.02701

* 29 pages, 3 figures, undergraduate thesis

Contemporary machine learning: a guide for practitioners in the physical sciences

Dec 20, 2017

Brian K. Spears

* 29 pages, 16 figures

Complexity Bounds for the Controllability of Temporal Networks with Conditions, Disjunctions, and Uncertainty

Jan 08, 2019

Nikhil Bhargava, Brian Williams

Jakob Bernoulli, working in the late 17th century, identified a gap in contemporary probability theory: he cautioned that it could not specify the force of proof (probability of provability) for some kinds of uncertain arguments. Three hundred years later, this gap remains in present-day probability theory. We present axioms analogous to Kolmogorov's axioms for probability, specifying uncertainty that lies in an argument's inference/implication itself rather than in its premise and conclusion. The axioms address arguments spanning two Boolean algebras, but reduce to the classical identity (the force of proof of "A implies B" is the probability of B or not-A) in the case that the Boolean algebras are identical. We propose a categorical framework in which generalized probabilities (objects) express uncertainty in premises and combine with arguments (morphisms) that express uncertainty embedded directly in inference/implication. There is a direct application to Shafer's evidence theory (Dempster-Shafer theory), greatly expanding its scope for applications. We therefore offer this framework not only as a solution to a difficult historical puzzle, but also as a means of advancing the frontiers of contemporary artificial intelligence. Keywords: force of proof, probability of provability, Ars Conjectandi, non-additive probabilities, evidence theory.
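The classical special case quoted in the abstract can be written compactly (notation mine, not the paper's):

```latex
% Classical case: when the two Boolean algebras coincide, the force of
% proof of the implication reduces to an ordinary probability.
\Pr(A \Rightarrow B) \;=\; \Pr(B \lor \lnot A)
```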

A Statistical Approach to Continuous Self-Calibrating Eye Gaze Tracking for Head-Mounted Virtual Reality Systems

Dec 20, 2016

Subarna Tripathi, Brian Guenter

* Accepted for publication in WACV 2017

Log-Normal Matrix Completion for Large Scale Link Prediction

Jan 28, 2016

Brian Mohtashemi, Thomas Ketseoglou

* 6 pages


Revisiting k-means: New Algorithms via Bayesian Nonparametrics

Jun 14, 2012

Brian Kulis, Michael I. Jordan

* 14 pages. Updated based on the corresponding ICML paper

Subspace clustering of high-dimensional data: a predictive approach

Mar 05, 2012

Brian McWilliams, Giovanni Montana

In several application domains, high-dimensional observations are collected and then analysed in search of naturally occurring data clusters which might provide further insights about the nature of the problem. In this paper we describe a new approach for partitioning such high-dimensional data. Our assumption is that, within each cluster, the data can be approximated well by a linear subspace estimated by means of a principal component analysis (PCA). The proposed algorithm, Predictive Subspace Clustering (PSC), partitions the data into clusters while simultaneously estimating cluster-wise PCA parameters. The algorithm minimises an objective function that depends upon a new measure of influence for PCA models. A penalised version of the algorithm is also described for carrying out simultaneous subspace clustering and variable selection. The convergence of PSC is discussed in detail, and extensive simulation results and comparisons to competing methods are presented. The comparative performance of PSC has been assessed on six real gene expression data sets, for which PSC often provides state-of-the-art results.
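The alternating structure described above (assign points to subspaces, refit cluster-wise PCA) can be sketched as follows. This is a simplified stand-in that uses plain reconstruction error rather than the paper's predictive-influence measure, and the function name is illustrative:

```python
import numpy as np

def predictive_subspace_clustering(X, k, dim, n_iter=20, seed=0):
    """Toy subspace clustering: alternate between assigning each point
    to the cluster whose PCA subspace reconstructs it best, and
    refitting a `dim`-dimensional PCA per cluster.  (The published PSC
    algorithm uses a predictive-influence criterion instead of plain
    reconstruction error; this is a simplified stand-in.)"""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(X))
    for _ in range(n_iter):
        means, bases = [], []
        for c in range(k):
            Xc = X[labels == c]
            if len(Xc) <= dim:  # guard against tiny/empty clusters
                Xc = X[rng.choice(len(X), size=dim + 1, replace=False)]
            mu = Xc.mean(axis=0)
            # top `dim` right-singular vectors span the PCA subspace
            _, _, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
            means.append(mu)
            bases.append(Vt[:dim])
        # residual of projecting each point onto each cluster subspace
        errs = np.stack([
            np.linalg.norm((X - m) - (X - m) @ V.T @ V, axis=1)
            for m, V in zip(means, bases)], axis=1)
        new = errs.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels
```

On data drawn from well-separated low-dimensional subspaces, the loop typically converges in a few iterations; like k-means, it is sensitive to initialisation.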

Dynamic Weight Alignment for Temporal Convolutional Neural Networks

Sep 06, 2018

Brian Kenji Iwana, Seiichi Uchida

In this paper, we propose a method of improving Convolutional Neural Networks (CNNs) by determining the optimal alignment of weights and inputs using dynamic programming. Conventional CNNs convolve learnable shared weights, or filters, across the input data. These filters use an inner product to linearly match the shared weights to a window of the input. However, a better alignment of the weights may exist. Thus, we propose using Dynamic Time Warping (DTW) to dynamically align the weights to optimal input elements. This dynamic alignment is especially useful for time series recognition because of complexities such as temporal distortions, varying rates, and differing sequence lengths. We demonstrate the effectiveness of the proposed architecture on the Unipen online handwritten digit and character datasets, the UCI Spoken Arabic Digit dataset, and the UCI Activities of Daily Life dataset.
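The core idea (replacing the fixed inner product with a DTW-style optimal alignment of weights to input elements) can be illustrated with a toy dynamic program. This is not the paper's layer implementation, and the function name is invented:

```python
import numpy as np

def dtw_align_response(window, weights):
    """Toy DTW-aligned filter response: instead of the fixed inner
    product sum_i w[i]*x[i], use dynamic programming to find the
    monotonic alignment between filter weights and input elements that
    maximizes the summed products."""
    n, m = len(window), len(weights)
    # D[i, j] = best total product aligning window[:i] with weights[:j]
    D = np.full((n + 1, m + 1), -np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = window[i - 1] * weights[j - 1] + max(
                D[i - 1, j - 1],  # match and advance both sequences
                D[i - 1, j],      # reuse this weight for the next input
                D[i, j - 1],      # reuse this input for the next weight
            )
    return D[n, m]
```

Because the strictly diagonal path recovers the ordinary inner product, the DTW response is never smaller than the standard convolution response for equal-length inputs.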
* 8 pages, 4 figures

Robust Covariate Shift Prediction with General Losses and Feature Views

Dec 28, 2017

Anqi Liu, Brian D. Ziebart

Eye In-Painting with Exemplar Generative Adversarial Networks

Dec 11, 2017

Brian Dolhansky, Cristian Canton Ferrer


Dynamic Clustering Algorithms via Small-Variance Analysis of Markov Chain Mixture Models

Jul 26, 2017

Trevor Campbell, Brian Kulis, Jonathan How

Bayesian nonparametrics are a class of probabilistic models in which the model size is inferred from data. A recently developed methodology in this field is small-variance asymptotic analysis, a mathematical technique for deriving learning algorithms that capture much of the flexibility of Bayesian nonparametric inference algorithms, but are simpler to implement and less computationally expensive. Past work on small-variance analysis of Bayesian nonparametric inference algorithms has exclusively considered batch models trained on a single, static dataset, which are incapable of capturing time evolution in the latent structure of the data. This work presents a small-variance analysis of the maximum a posteriori filtering problem for a temporally varying mixture model with a Markov dependence structure, which captures temporally evolving clusters within a dataset. Two clustering algorithms result from the analysis: D-Means, an iterative clustering algorithm for linearly separable, spherical clusters; and SD-Means, a spectral clustering algorithm derived from a kernelized, relaxed version of the clustering problem. Empirical results from experiments demonstrate the advantages of using D-Means and SD-Means over contemporary clustering algorithms, in terms of both computational cost and clustering accuracy.
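The small-variance mechanism underlying D-Means can be illustrated with the static, DP-means-style hard clustering it builds on: a Lloyd-type loop in which a point whose squared distance to every existing centroid exceeds a penalty `lam` opens a new cluster. This sketch shows only the static building block, not the temporal D-Means or spectral SD-Means algorithms themselves:

```python
import numpy as np

def dp_means(X, lam, n_iter=50):
    """DP-means-style hard clustering: like Lloyd's k-means, but a
    point farther than sqrt(lam) from every centroid spawns a new
    cluster, so the number of clusters is inferred from the data."""
    centroids = [X[0].copy()]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        changed = False
        for i, x in enumerate(X):
            d2 = [np.sum((x - c) ** 2) for c in centroids]
            j = int(np.argmin(d2))
            if d2[j] > lam:              # too far from everything: new cluster
                centroids.append(x.copy())
                j = len(centroids) - 1
            if labels[i] != j:
                labels[i] = j
                changed = True
        # recompute each non-empty cluster's centroid as its mean
        for c in range(len(centroids)):
            members = X[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
        if not changed:
            break
    return labels, np.array(centroids)
```

The penalty `lam` plays the role that the number of clusters k plays in k-means: larger values yield fewer, coarser clusters.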
* 27 pages

Bayesian Non-parametric model to Target Gamification Notifications Using Big Data

Nov 04, 2016

Meisam Hejazi Nia, Brian Ratchford

Tracking Switched Dynamic Network Topologies from Information Cascades

Jun 28, 2016

Brian Baingana, Georgios B. Giannakis

Contagions such as the spread of popular news stories, or infectious diseases, propagate in cascades over dynamic networks with unobservable topologies. However, "social signals" such as product purchase time, or blog entry timestamps are measurable, and implicitly depend on the underlying topology, making it possible to track the topology over time. Interestingly, network topologies often "jump" between discrete states that may account for sudden changes in the observed signals. The present paper advocates a switched dynamic structural equation model to capture the topology-dependent cascade evolution, as well as the discrete states driving the underlying topologies. Conditions under which the proposed switched model is identifiable are established. Leveraging the edge sparsity inherent to social networks, a recursive $\ell_1$-norm regularized least-squares estimator is put forth to jointly track the states and network topologies. An efficient first-order proximal-gradient algorithm is developed to solve the resulting optimization problem. Numerical experiments on both synthetic data and real cascades measured over the span of one year are conducted, and test results corroborate the efficacy of the advocated approach.
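The first-order proximal-gradient step for an $\ell_1$-regularized least-squares problem can be sketched as a standard ISTA iteration: a gradient step on the quadratic term followed by soft-thresholding. This solves a single static lasso problem for illustration, whereas the paper applies the idea recursively to track sparse topologies; the function names are mine:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(A, y, lam, n_iter=500):
    """ISTA for min_x 0.5 * ||Ax - y||^2 + lam * ||x||_1:
    gradient step with step size 1/L, then soft-threshold by lam/L."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

When $A$ is the identity, the iteration converges in one step to the soft-thresholded observations, which makes the shrinkage effect of the $\ell_1$ penalty easy to see.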