Models, code, and papers for "John Burge":

Machine Learning for Precipitation Nowcasting from Radar Images

Dec 11, 2019
Shreya Agrawal, Luke Barrington, Carla Bromberg, John Burge, Cenk Gazen, Jason Hickey

High-resolution nowcasting is an essential tool for effective adaptation to climate change, particularly for extreme weather. As Deep Learning (DL) techniques have shown dramatic promise in many domains, including the geosciences, we present an application of DL to the problem of precipitation nowcasting, i.e., high-resolution (1 km x 1 km) short-term (1 hour) predictions of precipitation. We treat forecasting as an image-to-image translation problem and leverage the power of the ubiquitous UNET convolutional neural network. We find that this approach performs favorably when compared to three commonly used models: optical flow, persistence, and NOAA's numerical one-hour HRRR nowcasting prediction.
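
The abstract frames nowcasting as image-to-image translation with a UNET. Below is a minimal sketch of that framing, assuming a single-channel radar image in and a single precipitation map out; the layer widths, depth, Keras framing, and the sigmoid/cross-entropy target are illustrative assumptions, not the paper's configuration.

```python
# Minimal U-Net sketch for radar-image-to-precipitation translation.
# Shapes, depths, and the loss are illustrative assumptions only.
import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(input_shape=(256, 256, 1)):
    inputs = tf.keras.Input(shape=input_shape)

    # Encoder: two downsampling blocks.
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck.
    b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)

    # Decoder: upsample and concatenate skip connections (the "U").
    u2 = layers.Conv2DTranspose(64, 3, strides=2, padding="same")(b)
    u2 = layers.Concatenate()([u2, c2])
    u2 = layers.Conv2D(64, 3, padding="same", activation="relu")(u2)
    u1 = layers.Conv2DTranspose(32, 3, strides=2, padding="same")(u2)
    u1 = layers.Concatenate()([u1, c1])
    u1 = layers.Conv2D(32, 3, padding="same", activation="relu")(u1)

    # One output channel: a rain/no-rain map one hour ahead (a simplification).
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(u1)
    return tf.keras.Model(inputs, outputs)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```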


Artificial Brain Based on Credible Neural Circuits in a Human Brain

Oct 04, 2010
John Robert Burger

Neurons are individually translated into simple gates to plan a brain based on human psychology and intelligence. State machines, assumed previously learned in subconscious associative memory, are shown to enable equation solving and rudimentary thinking using nanoprocessing within short-term memory.

* 14 pages, 12 figures; corrected Fig. 3 and edited 
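
As a toy illustration of the abstract's idea of neurons reduced to simple gates driving a learned state machine (this is a sketch under my own assumptions, not the paper's circuits), a threshold-gate "neuron" can implement Boolean logic, and a tiny state machine can be composed from such gates:

```python
# Toy illustration (not the paper's circuits): a neuron reduced to a simple
# threshold gate, and a tiny state machine built from such gates.
def neuron(inputs, weights, threshold):
    """Fire (1) when the weighted input sum reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

AND = lambda a, b: neuron([a, b], [1, 1], 2)
OR  = lambda a, b: neuron([a, b], [1, 1], 1)
NOT = lambda a:    neuron([a],    [-1],   0)

# A two-state "memory" cell: the state toggles whenever the input pulse is 1,
# a stand-in for a previously learned state machine in associative memory.
def step(state, pulse):
    return OR(AND(state, NOT(pulse)), AND(NOT(state), pulse))  # XOR from gates

state = 0
for pulse in [1, 0, 1, 1]:
    state = step(state, pulse)
print(state)  # 1
```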

XOR at a Single Vertex -- Artificial Dendrites

Sep 23, 2010
John Robert Burger

New to neuroscience, with implications for AI, the exclusive OR, or any other Boolean gate, may be biologically accomplished within a single region where active dendrites merge. This is demonstrated below using dynamic circuit analysis. Medical knowledge aside, this observation points to the possibility of specially coated conductors serving as artificial dendrites.

* Edited for clarity; added Kandel reference 
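
A cartoon of the claim, not the paper's dynamic circuit analysis: if pulses arriving simultaneously on two active dendritic branches cancel at the merge point, the merged output fires only when exactly one branch is active, giving XOR at a single vertex. The cancellation rule here is an illustrative assumption.

```python
# Toy merge-point model (an illustrative assumption, not the paper's
# dynamic circuit analysis): pulses on two dendritic branches cancel
# when they collide at the vertex, so the merged output fires only
# when exactly one branch carries a pulse -- XOR at a single vertex.
def merge_vertex(branch_a, branch_b):
    if branch_a and branch_b:
        return 0  # simultaneous pulses collide and annihilate
    return int(branch_a or branch_b)  # a lone pulse passes through

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", merge_vertex(a, b))
# 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```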

Artificial Learning in Artificial Memories

Sep 05, 2010
John Robert Burger

Memory refinements are designed below to detect those sequences of actions that have been repeated a given number of times, n. Subsequently, such sequences are permitted to run without CPU involvement. This mimics human learning: actions are rehearsed and, once learned, are performed automatically without conscious involvement.

* 7 pages, 4 figures 
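
A minimal sketch of the counting-and-automation idea described in the abstract (class and method names, and the threshold, are hypothetical, not the paper's design): rehearsed action sequences are counted, and once a sequence has been repeated n times it is marked "learned" and can be replayed without further step-by-step (CPU-like) involvement.

```python
# Toy sketch: count rehearsals of each action sequence; after n repetitions
# the sequence is "learned" and may run automatically.
from collections import Counter

class SequenceMemory:
    def __init__(self, n=3):
        self.n = n                  # repetitions required before a sequence is learned
        self.counts = Counter()
        self.learned = set()

    def rehearse(self, actions):
        key = tuple(actions)
        self.counts[key] += 1
        if self.counts[key] >= self.n:
            self.learned.add(key)

    def is_automatic(self, actions):
        return tuple(actions) in self.learned

memory = SequenceMemory(n=3)
for _ in range(3):
    memory.rehearse(["reach", "grasp", "lift"])
print(memory.is_automatic(["reach", "grasp", "lift"]))  # True
```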

Explaining the Logical Nature of Electrical Solitons in Neural Circuits

Apr 26, 2008
John Robert Burger

Neurons are modeled electrically based on ferroelectric membranes thin enough to permit charge transfer, conjectured to result from the tunneling of thermally energetic ions and random electrons. These membranes can be triggered to produce electrical solitons, the main signals for brain associative memory and logical processing. Dendritic circuits are modeled, and electrical solitons are simulated to demonstrate soliton propagation, reflection, and collision, as well as soliton OR, AND, XOR, and NOT gates.

* 13 pages, 16 figures 
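
A cartoon of collision-based soliton logic, not the ferroelectric-membrane simulation in the paper: two pulses launched toward each other on a one-dimensional line, with gate outputs read from whether each pulse was launched and whether a collision occurred. The readout rules are illustrative assumptions.

```python
# Cartoon of collision-based soliton logic: two pulses launched toward each
# other on a 1-D line; gate outputs are read from pulse launches and from
# whether a collision occurred near the midpoint.
def soliton_gates(a, b, length=11):
    pos_a, pos_b = (0 if a else None), (length - 1 if b else None)
    collided = False
    for _ in range(length):
        if pos_a is not None and pos_b is not None and pos_a >= pos_b:
            collided = True
            break
        if pos_a is not None:
            pos_a += 1   # left pulse moves right
        if pos_b is not None:
            pos_b -= 1   # right pulse moves left
    return {
        "OR":  int(a or b),                     # any launched pulse reaches a detector
        "AND": int(collided),                   # a collision product needs both pulses
        "XOR": int((a or b) and not collided),  # a pulse arrives uncollided
    }

for a in (0, 1):
    for b in (0, 1):
        print(a, b, soliton_gates(a, b))
```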

Automating Coreference: The Role of Annotated Training Data

Mar 02, 1998
Lynette Hirschman, Patricia Robinson, John Burger, Marc Vilain

We report here on a study of interannotator agreement in the coreference task as defined by the Message Understanding Conference (MUC-6 and MUC-7). Based on feedback from annotators, we clarified and simplified the annotation specification. We then performed an analysis of disagreement among several annotators, concluding that only 16% of the disagreements represented genuine disagreement about coreference; the remainder of the cases were mostly typographical errors or omissions, easily reconciled. Initially, we measured interannotator agreement in the low 80s for precision and recall. To try to improve upon this, we ran several experiments. In our final experiment, we separated the tagging of candidate noun phrases from the linking of actual coreferring expressions. This method shows promise (interannotator agreement climbed to the low 90s) but needs more extensive validation. These results position the research community to broaden the coreference task to multiple languages, and possibly to different kinds of coreference.

* 4 pages, 5 figures. To appear in the AAAI Spring Symposium on Applying Machine Learning to Discourse Processing. The Alembic Workbench annotation tool described in this paper is available at http://www.mitre.org/resources/centers/advanced_info/g04h/workbench.html 
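
One simple way to picture interannotator agreement on coreference, offered as a hedged sketch rather than the paper's scoring method: treat each annotation as a set of coreferring mention pairs and score one annotator against the other with precision and recall. This is a simplification of MUC-style link-based scoring.

```python
# Hedged sketch: agreement between two annotators measured as precision and
# recall over coreferring mention pairs (a simplification of MUC scoring).
def pairs(chains):
    """Expand coreference chains into unordered mention pairs."""
    return {frozenset((a, b)) for chain in chains
            for i, a in enumerate(chain) for b in chain[i + 1:]}

def agreement(annotator_1, annotator_2):
    p1, p2 = pairs(annotator_1), pairs(annotator_2)
    common = p1 & p2
    precision = len(common) / len(p2) if p2 else 1.0  # annotator_2 scored against annotator_1
    recall = len(common) / len(p1) if p1 else 1.0
    return precision, recall

ann1 = [["Clinton", "the president", "he"], ["the bill", "it"]]
ann2 = [["Clinton", "the president"], ["the bill", "it"]]
print(agreement(ann1, ann2))  # (1.0, 0.5)
```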

How to Evaluate your Question Answering System Every Day and Still Get Real Work Done

Apr 17, 2000
Eric Breck, John D. Burger, Lisa Ferro, Lynette Hirschman, David House, Marc Light, Inderjeet Mani

In this paper, we report on Qaviar, an experimental automated evaluation system for question answering applications. The goal of our research was to find an automatically calculated measure that correlates well with human judges' assessment of answer correctness in the context of question answering tasks. Qaviar judges the response by computing recall against the stemmed content words in the human-generated answer key. It counts the answer as correct if the recall exceeds a given threshold. We determined that the answer correctness predicted by Qaviar agreed with the human judges 93% to 95% of the time. 41 question-answering systems were ranked by both Qaviar and human assessors, and these rankings correlated with a Kendall's Tau measure of 0.920, compared to a correlation of 0.956 between human assessors on the same data.

* 6 pages, 3 figures, to appear in Proceedings of the Second International Conference on Language Resources and Evaluation (LREC 2000) 
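
A sketch of the scoring idea as the abstract describes it, not Qaviar's actual implementation: compute recall of the answer key's stemmed content words within the system response and call the answer correct above a threshold. The crude suffix-stripping "stemmer", the stopword list, and the threshold value are simplifying assumptions.

```python
# Sketch of recall-based answer judging: stem content words in the answer key,
# measure how many appear in the response, and threshold the recall.
STOPWORDS = {"the", "a", "an", "of", "in", "on", "is", "was", "to", "and"}

def stem(word):
    # Crude suffix stripping, standing in for a real stemmer.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def content_stems(text):
    return {stem(w) for w in text.lower().split() if w not in STOPWORDS}

def judge(response, answer_key, threshold=0.5):
    key = content_stems(answer_key)
    found = key & content_stems(response)
    recall = len(found) / len(key) if key else 1.0
    return recall, recall >= threshold

print(judge("Mount Everest is the tallest mountain", "the tallest mountain on Earth"))
# (0.666..., True)
```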

MLPerf Inference Benchmark

Nov 06, 2019
Vijay Janapa Reddi, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu, Brian Anderson, Maximilien Breughe, Mark Charlebois, William Chou, Ramesh Chukka, Cody Coleman, Sam Davis, Pan Deng, Greg Diamos, Jared Duke, Dave Fick, J. Scott Gardner, Itay Hubara, Sachin Idgunji, Thomas B. Jablin, Jeff Jiao, Tom St. John, Pankaj Kanwar, David Lee, Jeffery Liao, Anton Lokhmotov, Francisco Massa, Peng Meng, Paulius Micikevicius, Colin Osborne, Gennady Pekhimenko, Arun Tejusve Raghunath Rajan, Dilip Sequeira, Ashish Sirasao, Fei Sun, Hanlin Tang, Michael Thomson, Frank Wei, Ephrem Wu, Lingjie Xu, Koichi Yamada, Bing Yu, George Yuan, Aaron Zhong, Peizhao Zhang, Yuchen Zhou

Machine-learning (ML) hardware and software system demand is burgeoning. Driven by ML applications, the number of different ML inference systems has exploded. Over 100 organizations are building ML inference chips, and the systems that incorporate existing models span at least three orders of magnitude in power consumption and four orders of magnitude in performance; they range from embedded devices to data-center solutions. Fueling the hardware are a dozen or more software frameworks and libraries. The myriad combinations of ML hardware and ML software make assessing ML-system performance in an architecture-neutral, representative, and reproducible manner challenging. There is a clear need for industry-wide standard ML benchmarking and evaluation criteria. MLPerf Inference answers that call. Driven by more than 30 organizations as well as more than 200 ML engineers and practitioners, MLPerf implements a set of rules and practices to ensure comparability across systems with wildly differing architectures. In this paper, we present the method and design principles of the initial MLPerf Inference release. The first call for submissions garnered more than 600 inference-performance measurements from 14 organizations, representing over 30 systems that show a range of capabilities.
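
As a minimal sketch of the kind of measurement such a benchmark standardizes, the snippet below times repeated queries against a model and reports tail latency and throughput. The real MLPerf Inference rules (scenarios, LoadGen, accuracy checks) are far more involved; `run_inference` is a hypothetical placeholder for the system under test.

```python
# Minimal latency/throughput harness: time repeated inference queries and
# report a tail-latency percentile and queries per second.
import time

def run_inference(sample):
    time.sleep(0.001)  # stand-in for the actual model call on the system under test
    return sample

def benchmark(samples, percentile=90):
    latencies = []
    start = time.perf_counter()
    for sample in samples:
        t0 = time.perf_counter()
        run_inference(sample)
        latencies.append(time.perf_counter() - t0)
    total = time.perf_counter() - start
    latencies.sort()
    tail = latencies[int(len(latencies) * percentile / 100) - 1]
    return {
        "queries_per_second": len(samples) / total,
        f"p{percentile}_latency_ms": tail * 1000.0,
    }

print(benchmark(range(200)))
```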

