Models, code, and papers for "H":

Game Information System

Jun 10, 2010
Spits Warnars H. L. H

In this information-system age, many organizations regard information systems as their weapon for competing and gaining competitive advantage, or, in the case of non-profit organizations, for delivering the best services. A Game Information System, which combines an information system with a game, is a breakthrough for improving organizational performance. A Game Information System runs the information system through a game and explores how a game can be used to operate an information system. Games are not only for fun and entertainment; the challenge is to combine fun and entertainment with an information system: to run the information system as entertainment and to deliver entertainment through the information system, all at once. A Game Information System can be implemented in the same sectors as information systems themselves, but from a different viewpoint: one in which people can be joyful and happy, carrying out their transactions as something fun.

* International Journal of Computer Science and Information Technology 2.3 (2010) 135-148 
* 14 pages 

Detecting Vietnamese Opinion Spam

May 09, 2019
T. H. H Duong, T. D. Vu, V. M. Ngo

Recently, Vietnamese natural language processing has been researched by experts in academia and industry. However, existing papers have focused only on classifying or extracting information from documents. Nowadays, with the rapid development of e-commerce websites, forums, and social networks, products, people, organizations, and wonders are the targets of comments and reviews from online communities. Many people use those reviews to make decisions, whereas some people or organizations use reviews to mislead readers. It is therefore necessary to detect such bad behavior in reviews. In this paper, we study this problem and propose an appropriate method for detecting whether a Vietnamese review is spam or non-spam. The accuracy of our method is up to 90%.

* ICTFIT 2012 
* 6 pages, in Vietnamese 

Consistency Analysis for the Doubly Stochastic Dirichlet Process

May 24, 2016
Xing Sun, Nelson H. C. Yung, Edmund Y. Lam, Hayden K. -H. So

This technical report proves component consistency for the Doubly Stochastic Dirichlet Process (DSDP), with exponential convergence of the posterior probability. We also present fundamental properties of the DSDP as well as inference algorithms. Results from a toy simulation and a real-world experiment, in both single- and multi-cluster settings, also support the consistency proof. This report is a supporting document for the paper "Computationally Efficient Hyperspectral Data Learning Based on the Doubly Stochastic Dirichlet Process".

* 13 pages, 4 figures 

Generalized Statistical Tests for mRNA and Protein Subcellular Spatial Patterning against Complete Spatial Randomness

Apr 10, 2016
Jonathan H. Warrell, Anca F. Savulescu, Robyn Brackin, Musa M. Mhlanga

We derive generalized estimators for a number of spatial statistics that have been used in the analysis of spatially resolved omics data, such as Ripley's K, H and L functions, clustering index, and degree of clustering, which allow these statistics to be calculated on data modelled by arbitrary random measures (RMs). Our estimators generalize those typically used to calculate these statistics on point process data, allowing them to be calculated on RMs which assign continuous values to spatial regions, for instance to model protein intensity. The clustering index (H*) compares Ripley's H function calculated empirically to its distribution under complete spatial randomness (CSR), leading us to consider CSR null hypotheses for RMs which are not point processes when generalizing this statistic. We thus consider restricted classes of completely random measures which can be simulated directly (Gamma processes and Marked Poisson Processes), as well as the general class of all CSR RMs, for which we derive an exact permutation-based H* estimator. We establish several properties of the estimators, including bounds on the accuracy of our general Ripley K estimator, its relationship to a previous estimator for the cross-correlation measure, and the relationship of our generalized H* estimator to previous statistics. To test the ability of our approach to identify spatial patterning, we use Fluorescent In Situ Hybridization (FISH) and Immunofluorescence (IF) data to probe for mRNA and protein subcellular localization patterns, respectively, in polarizing mouse fibroblasts on micropatterned cells. We observe correlated patterns of clustering over time for corresponding mRNAs and proteins, suggesting a deterministic effect of mRNA localization on protein localization for several pairs tested, including one case in which spatial patterning at the mRNA level has not been previously demonstrated.
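
As a concrete illustration of the classical point-process statistics the paper generalizes, the sketch below computes naive Ripley K and H estimates on synthetic 2-D point patterns. Edge correction is omitted, and the unit-square window, radius, and synthetic data are illustrative assumptions, not the paper's RM-based estimators:

```python
import numpy as np

def ripley_k(points, r, area=1.0):
    """Naive Ripley K estimate: area-normalized count of point pairs
    within distance r (no edge correction)."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    pair_count = (d <= r).sum() - n          # drop the n self-pairs (d = 0)
    return area * pair_count / (n * (n - 1))

def ripley_h(points, r, area=1.0):
    # H(r) = L(r) - r with L(r) = sqrt(K(r)/pi); H > 0 suggests clustering
    return np.sqrt(ripley_k(points, r, area) / np.pi) - r

rng = np.random.default_rng(0)
csr = rng.random((200, 2))               # roughly CSR points in the unit square
clustered = np.concatenate([rng.normal(c, 0.02, (50, 2)) % 1.0
                            for c in [(0.2, 0.2), (0.8, 0.8)]])
```

Under CSR the H estimate hovers near zero, while the two tight clusters push it well above zero at comparable radii.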

* 17 pages, 2 figures 

Dyna-H: a heuristic planning reinforcement learning algorithm applied to role-playing-game strategy decision systems

Jul 30, 2011
Matilde Santos, Jose Antonio Martin H., Victoria Lopez, Guillermo Botella

In a Role-Playing Game, finding optimal trajectories is one of the most important tasks; indeed, the strategy decision system becomes a key component of the game engine. The way decisions are taken (online, batch, or simulated) and the resources consumed in decision making (e.g., execution time, memory) influence game performance to a major degree. When classical search algorithms such as A* can be used, they are the very first option. Nevertheless, such methods rely on precise and complete models of the search space, and there are many interesting scenarios where they cannot be applied. Model-free methods for sequential decision making under uncertainty are then the best choice. In this paper, we propose a heuristic planning strategy that incorporates the path-finding ability of heuristic search into a Dyna agent. The proposed Dyna-H algorithm, like A*, selects the branches most likely to produce good outcomes. Besides, it has the advantage of being a model-free online reinforcement learning algorithm. The proposal was evaluated against the one-step Q-Learning and Dyna-Q algorithms with excellent experimental results: Dyna-H significantly outperforms both methods in all experiments. We also suggest a functional analogy between the proposed heuristic of sampling from the worst trajectories and the role of dreams (e.g., nightmares) in human behavior.
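
The general idea can be sketched as a Dyna-style agent whose planning sweep prefers transitions leading to heuristically worst successors. The grid world, reward scheme, and Manhattan-distance heuristic below are illustrative assumptions, not the authors' exact setup:

```python
import random

SIZE, GOAL = 5, (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(state, action):
    x = min(max(state[0] + action[0], 0), SIZE - 1)
    y = min(max(state[1] + action[1], 0), SIZE - 1)
    nxt = (x, y)
    return nxt, (1.0 if nxt == GOAL else 0.0)

def heuristic(state):
    # Manhattan distance to the goal: larger means "worse" state
    return abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])

def dyna_h(episodes=50, planning_steps=10, alpha=0.5, gamma=0.95, eps=0.1):
    Q, model = {}, {}          # Q-values and the learned one-step model
    for _ in range(episodes):
        s = (0, 0)
        while s != GOAL:
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q.get((s, act), 0.0))
            s2, r = step(s, a)
            best = max(Q.get((s2, act), 0.0) for act in ACTIONS)
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best - Q.get((s, a), 0.0))
            model[(s, a)] = (s2, r)
            # Planning sweep: replay remembered transitions, preferring those
            # whose successor looks worst under the heuristic (the Dyna-H bias)
            ranked = sorted(model, key=lambda sa: -heuristic(model[sa][0]))
            for ps, pa in ranked[:planning_steps]:
                ps2, pr = model[(ps, pa)]
                b = max(Q.get((ps2, act), 0.0) for act in ACTIONS)
                Q[(ps, pa)] = Q.get((ps, pa), 0.0) + alpha * (pr + gamma * b - Q.get((ps, pa), 0.0))
            s = s2
    return Q

random.seed(0)
Q = dyna_h()
```

The planning loop is what distinguishes this sketch from plain Dyna-Q: simulated updates are drawn from remembered transitions ranked by the heuristic rather than uniformly at random.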

A Fingerprint-based Access Control using Principal Component Analysis and Edge Detection

Feb 06, 2015
E. F. Melo, H. M. de Oliveira

This paper presents a novel approach for deciding whether an acquired fingerprint image belongs in a given database. The process begins with the assembly of a training base in an image space constructed by combining Principal Component Analysis (PCA) and edge detection. Then the parameter H, a new feature that supports the decision about the relevance of a fingerprint image to a database, is derived from a relationship between the Euclidean and Mahalanobis distances. The procedure ends with plotting the Receiver Operating Characteristic (ROC) curve, where the thresholds on the parameter H are chosen according to the acceptable rates of false positives and false negatives.
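
A rough sketch of this kind of score: project images onto a PCA subspace and relate the Euclidean and Mahalanobis distances there. The `h_score` ratio and the random stand-in data are assumptions for illustration, not the paper's actual derivation of H:

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(50, 20))   # stand-in for vectorized fingerprint edge maps

mean = train.mean(axis=0)
# PCA via SVD on the centered training base; rows of Vt are the principal axes
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
k = 5
P = Vt[:k]                               # top-k principal components
var = (S[:k] ** 2) / (len(train) - 1)    # variance along each component

def h_score(img):
    y = P @ (img - mean)                       # projection into the PCA subspace
    d_euc = np.sqrt(np.sum(y ** 2))            # Euclidean distance in the subspace
    d_mah = np.sqrt(np.sum(y ** 2 / var))      # Mahalanobis distance in the subspace
    return d_euc / d_mah                       # hypothetical H-like ratio score
```

The ratio is bounded by the spread of the component variances, so thresholds on it can be swept to trace an ROC curve, as the abstract describes.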

* 5 pages, 9 figures. SBrT/IEEE International Telecommunication Symposium, ITS 2010, Manaus, AM, Brazil 

SLAMBench2: Multi-Objective Head-to-Head Benchmarking for Visual SLAM

Aug 21, 2018
Bruno Bodin, Harry Wagstaff, Sajad Saeedi, Luigi Nardi, Emanuele Vespa, John H Mayer, Andy Nisbet, Mikel Luján, Steve Furber, Andrew J Davison, Paul H. J. Kelly, Michael O'Boyle

SLAM is becoming a key component of robotics and augmented reality (AR) systems. While a large number of SLAM algorithms have been presented, there has been little effort to unify their interfaces or to perform a holistic comparison of their capabilities. This is a problem because different SLAM applications can have different functional and non-functional requirements: for example, a mobile phone-based AR application has a tight energy budget, while a UAV navigation system usually requires high accuracy. SLAMBench2 is a benchmarking framework for evaluating existing and future SLAM systems, both open and closed source, over an extensible list of datasets, using a comparable and clearly specified list of performance metrics. A wide variety of existing SLAM algorithms and datasets is supported, e.g. ElasticFusion, InfiniTAM, ORB-SLAM2, and OKVIS, and integrating new ones is straightforward and clearly specified by the framework. SLAMBench2 is a publicly available software framework that provides a starting point for quantitative, comparable, and validatable experimental research into trade-offs across SLAM systems.

* 2018 IEEE International Conference on Robotics and Automation (ICRA'18) 

Dreem Open Datasets: Multi-Scored Sleep Datasets to compare Human and Automated sleep staging

Nov 12, 2019
Antoine Guillot, Fabien Sauvet, Emmanuel H During, Valentin Thorey

Sleep stage classification constitutes an important element of sleep disorder diagnosis. It relies on the visual inspection of polysomnography records by trained sleep technologists. Automated approaches have been designed to alleviate this resource-intensive task. However, such approaches are usually compared to a single human scorer's annotation despite an inter-rater agreement of only about 85%. The present study introduces two publicly available datasets: DOD-H, comprising 25 healthy volunteers, and DOD-O, comprising 55 patients suffering from obstructive sleep apnea (OSA). Both datasets have been scored by 5 sleep technologists from different sleep centers. We developed a framework to compare automated approaches to a consensus of multiple human scorers, and used it to benchmark and compare the main approaches from the literature. We also developed and benchmarked a new deep learning method, SimpleSleepNet, inspired by the current state of the art. We demonstrate that many methods can reach human-level performance on both datasets: SimpleSleepNet achieved an F1 of 89.9% vs 86.8% on average for human scorers on DOD-H, and an F1 of 88.3% vs 84.8% on DOD-O. Our study highlights that state-of-the-art automated sleep staging outperforms human scorers for healthy volunteers and for patients suffering from OSA. Consideration could be given to using automated approaches in the clinical setting.

* 10 pages, journal submitted 

Optimal Query Complexity for Reconstructing Hypergraphs

Jan 03, 2010
Nader H. Bshouty, Hanna Mazzawi

In this paper we consider the problem of reconstructing a hidden weighted hypergraph of constant rank using additive queries. We prove the following: let $G$ be a hidden weighted hypergraph of constant rank with $n$ vertices and $m$ hyperedges. For any $m$ there exists a non-adaptive algorithm that finds the edges of the graph and their weights using $$ O\left(\frac{m\log n}{\log m}\right) $$ additive queries. This solves the open problem in [S. Choi, J. H. Kim. Optimal Query Complexity Bounds for Finding Graphs. {\em STOC}, 749--758, 2008]. When the weights of the hypergraph are integers less than $O(\mathrm{poly}(n^d/m))$, where $d$ is the rank of the hypergraph (and therefore for unweighted hypergraphs), there exists a non-adaptive algorithm that finds the edges of the graph and their weights using $$ O\left(\frac{m\log \frac{n^d}{m}}{\log m}\right) $$ additive queries. By the information-theoretic lower bound, these query complexities are tight.
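
The additive-query model can be illustrated on a toy rank-2 hypergraph (an ordinary weighted graph). The three-query identity below is only a warm-up showing what a query returns, not the paper's near-optimal algorithm; the hidden graph is hypothetical:

```python
# Hypothetical hidden weighted graph, known only to the oracle
hidden_edges = {(0, 1): 2, (1, 3): 5, (2, 3): 1}

def additive_query(S):
    """Oracle: total weight of hidden edges with both endpoints in S."""
    S = set(S)
    return sum(w for (u, v), w in hidden_edges.items() if u in S and v in S)

def edge_weight(u, v):
    # With no self-loops, three queries isolate a single edge weight:
    #   w(u, v) = Q({u, v}) - Q({u}) - Q({v})
    return additive_query({u, v}) - additive_query({u}) - additive_query({v})
```

Querying every pair this way costs Θ(n²) queries; the point of the paper is that structured non-adaptive query sets achieve the information-theoretic O(m log n / log m) instead.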

Deep Features for Tissue-Fold Detection in Histopathology Images

Mar 17, 2019
Morteza Babaie, H. R. Tizhoosh

Whole slide imaging (WSI) refers to the digitization of a tissue specimen, which enables pathologists to explore high-resolution images on a monitor rather than through a microscope. Tissue folds form during tissue processing; their presence may not only cause out-of-focus digitization but can also negatively affect the diagnosis in some cases. In this paper, we compare five pre-trained convolutional neural networks (CNNs) of different depths as feature extractors to characterize tissue folds. We also explore common classifiers to discriminate folded tissue from normal tissue in hematoxylin and eosin (H\&E) stained biopsy samples. In our experiments, we manually selected the folded areas in roughly 2.5mm $\times$ 2.5mm patches at $20$x magnification as training data. The ``DenseNet'' with 201 layers alongside an SVM classifier outperformed all other configurations. Based on a leave-one-out validation strategy, we achieved $96.3\%$ accuracy, which increased to $97.2\%$ with augmentation. We tested the generalization of our method on five unseen WSIs from the NIH (National Cancer Institute) dataset; the accuracy for patch-wise detection was $81\%$. One folded patch within an image suffices to flag the entire specimen for visual inspection.

* Accepted for publication in the 15th European Congress on Digital Pathology (ECDP 2019), University of Warwick, UK 

PDDL 2.1: Representation vs. Computation

Oct 12, 2011
H. A. Geffner

I comment on the PDDL 2.1 language and its use in the planning competition, focusing on the choices made for accommodating time and concurrency. I also discuss some methodological issues that have to do with the move toward more expressive planning languages and the balance needed in planning research between semantics and computation.

* Journal of Artificial Intelligence Research, Volume 20, pages 139-144, 2003 

Gene Expression Data Knowledge Discovery using Global and Local Clustering

Mar 22, 2010
Swathi. H

To understand complex biological systems, the research community has produced a huge corpus of gene expression data, and a large number of clustering approaches have been proposed for its analysis. However, extracting important biological knowledge is still hard. To address this task, we use a hybrid hierarchical k-means algorithm for clustering and biclustering gene expression data. Biclustering and clustering algorithms are combined to discover both local and global clustering structure. A validation technique, the Figure of Merit, is used to determine the quality of the clustering results. Appropriate knowledge is then mined from the clusters by embedding a BLAST similarity-search program into the clustering and biclustering process.
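
A minimal sketch of a hybrid hierarchical k-means of the kind described: agglomerative merging supplies the initial centroids, which Lloyd's algorithm then refines. The centroid-linkage rule and the synthetic two-blob data are illustrative assumptions:

```python
import numpy as np

def hierarchical_init(X, k):
    """Agglomerative (centroid-linkage) merging down to k clusters,
    returning the k cluster centroids as k-means seeds."""
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > k:
        cents = [X[c].mean(axis=0) for c in clusters]
        best, pair = np.inf, (0, 1)
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = np.linalg.norm(cents[i] - cents[j])
                if d < best:
                    best, pair = d, (i, j)
        i, j = pair
        clusters[i] += clusters.pop(j)   # merge the closest pair
    return np.array([X[c].mean(axis=0) for c in clusters])

def hybrid_kmeans(X, k, iters=20):
    centroids = hierarchical_init(X, k)
    for _ in range(iters):               # standard Lloyd refinement
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        centroids = np.array([X[labels == c].mean(axis=0) for c in range(k)])
    return labels, centroids

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
labels, _ = hybrid_kmeans(X, 2)
```

The hierarchical pass is O(n³) as written, which is acceptable only for small gene sets; its role here is just to give k-means a deterministic, structure-aware initialization.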

* Journal of Computing, Volume 2, Issue 3, March 2010 

Rock mechanics modeling based on soft granulation theory

May 29, 2008
H. Owladeghaffari

This paper describes the application of information granulation theory to the design of rock engineering flowcharts. First, an overall flowchart based on information granulation theory is presented. Information granulation theory, in crisp (non-fuzzy) or fuzzy form, can take engineering experience (especially fuzzy, incomplete, or superfluous information) or engineering judgment into account at each step of the design procedure, while suitable modeling instruments are employed. In this manner, and to extend soft modeling instruments, crisp and fuzzy granules are obtained from monitored data sets using three combinations of Self-Organizing Maps (SOM), Neuro-Fuzzy Inference Systems (NFIS), and Rough Set Theory (RST). The core of our algorithms is the balancing of crisp (rough, non-fuzzy) granules and fuzzy sub-granules within non-fuzzy information (the initial granulation) over open-close iterations. The best granules (information pockets) are obtained using different balancing criteria. Our proposed methods are validated on a data set of in-situ permeability in the rock masses of the Shivashan dam, Iran.

Toward Fuzzy block theory

May 15, 2008
H. Owladeghaffari

This study surveys the fundamentals of fuzzy block theory and its application to the assessment of stability in underground openings. The fundamentals of fuzzy block theory are presented by inserting fuzzy concepts into key block theory in two ways. In the indirect combination, by coupling an adaptive Neuro-Fuzzy Inference System (NFIS) with classic block theory, we extract the potentially damaged parts around a tunnel. In the direct solution, some principles of block theory are rewritten by means of different fuzzy facet theories.

Contact state analysis using NFIS and SOM

May 08, 2008
H. Owladeghaffari

This paper reports the application of a neuro-fuzzy inference system (NFIS) and self-organizing feature map (SOM) neural networks to the detection of contact states in a block system. On a simple system, the evolution of contact states is investigated by parallelizing DDA, and a comparison between the NFIS and SOM results is presented. The results show that the proposed methods are applicable, with differing accuracy, to detecting the distribution of contacts.

* Proc. International Symposium on Computational Mechanics (ISCM2007), Yao ZH & Yuan MW (eds.), Beijing: Tsinghua University Press & Springer, July 30-August 1, 2007, Beijing, China 

Partial Recovery of Erdős-Rényi Graph Alignment via $k$-Core Alignment

Nov 03, 2018
Daniel Cullina, Negar Kiyavash, Prateek Mittal, H. Vincent Poor

We determine information-theoretic conditions under which it is possible to partially recover the alignment used to generate a pair of sparse, correlated Erdős-Rényi graphs. To prove our achievability result, we introduce the $k$-core alignment estimator. This estimator searches for an alignment in which the intersection of the correlated graphs under this alignment has a minimum degree of $k$. We prove a matching converse bound. As the number of vertices grows, recovery of the alignment for a fraction of the vertices tending to one is possible when the average degree of the intersection of the graph pair tends to infinity. It was previously known that exact alignment is possible when this average degree grows faster than the logarithm of the number of vertices.
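
The quantity scored by the $k$-core alignment estimator can be sketched directly: sample a correlated Erdős-Rényi pair by subsampling a parent graph, then measure the minimum degree of the intersection under a candidate alignment. All parameters below are illustrative:

```python
import random
from itertools import combinations

def correlated_er_pair(n, p, s, seed=0):
    """Sample a parent G(n, p), then keep each parent edge independently
    with probability s in each of the two children (correlated ER pair)."""
    rnd = random.Random(seed)
    parent = {e for e in combinations(range(n), 2) if rnd.random() < p}
    g1 = {e for e in parent if rnd.random() < s}
    g2 = {e for e in parent if rnd.random() < s}
    return g1, g2

def min_degree_under_alignment(g1, g2, pi):
    """Minimum degree of the intersection graph after relabeling g2's
    vertices with a candidate alignment pi (a dict on the vertex set)."""
    mapped = {tuple(sorted((pi[u], pi[v]))) for (u, v) in g2}
    inter = g1 & mapped
    deg = {v: 0 for v in pi.values()}
    for u, v in inter:
        deg[u] += 1
        deg[v] += 1
    return min(deg.values())

g1, g2 = correlated_er_pair(n=30, p=0.5, s=0.9)
true_alignment = {v: v for v in range(30)}
score = min_degree_under_alignment(g1, g2, true_alignment)
```

The estimator in the paper searches over alignments for one achieving minimum degree at least $k$; this sketch only evaluates that criterion for a given alignment.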

Individualized Time-Series Segmentation for Mining Mobile Phone User Behavior

Nov 15, 2018
Iqbal H. Sarker, Alan Colman, MA Kabir, Jun Han

Mobile phones can record an individual's daily behavioral data as a time-series. In this paper, we present an effective time-series segmentation technique that extracts optimal time segments of an individual's similar behavioral characteristics utilizing their mobile phone data. One determinant of an individual's behavior is the set of activities undertaken at various times-of-the-day and days-of-the-week; in many cases, such behavior follows temporal patterns. Currently, researchers use either equal or unequal interval-based segmentation of time for mining mobile phone users' behavior. Most assume a static temporal coverage of 24 hours a day, and few take into account the number of incidences in the time-series data. However, such segmentations do not necessarily map to the patterns of individual user activity and subsequent behavior, because they ignore the diversity of individuals' behaviors over the days of the week. We therefore propose a behavior-oriented time segmentation (BOTS) technique that dynamically takes into account not only the temporal coverage of the week but also the number of incidences of diverse behaviors, producing behaviorally similar time segments over the week from time-series data. Experiments on real mobile phone datasets show that our proposed segmentation technique better captures a user's dominant behavior at various times-of-the-day and days-of-the-week, enabling the generation of high-confidence temporal rules for mining individual mobile phone users' behavior.
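
The contrast with fixed-interval segmentation can be sketched as follows: adjacent base time slots merge while they share the same dominant behavior, so segment boundaries follow the data rather than the clock. The log format and merging rule are assumptions for illustration, not the exact BOTS algorithm:

```python
from collections import Counter

# Hypothetical (hour_of_day, activity) records from one user's phone log
log = [
    (8, "email"), (8, "email"), (9, "email"), (10, "meeting"),
    (11, "meeting"), (12, "lunch"), (13, "email"), (14, "email"),
]

def dominant(records):
    """Most frequent activity in a slot, or None if the slot is empty."""
    return Counter(records).most_common(1)[0][0] if records else None

def bots_segments(log, hours=range(24)):
    per_hour = {h: [a for (hh, a) in log if hh == h] for h in hours}
    segments = []   # list of (start_hour, end_hour, dominant_behavior)
    for h in hours:
        d = dominant(per_hour[h])
        if d is None:
            continue
        if segments and segments[-1][2] == d:
            segments[-1] = (segments[-1][0], h, d)   # extend current segment
        else:
            segments.append((h, h, d))               # start a new segment
    return segments
```

On the toy log this yields variable-length segments (8-9 email, 10-11 meeting, 12 lunch, 13-14 email) instead of fixed equal intervals.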

* The Computer Journal, Section C: Computational Intelligence, Machine Learning and Data Analytics, Publisher: Oxford University, UK, 2017 
* 20 pages 

Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science

Jun 20, 2018
Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H. Nguyen, Madeleine Gibescu, Antonio Liotta

Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (an Erdős-Rényi random graph) between two consecutive layers of neurons into a scale-free topology during learning. Our method replaces the fully-connected layers of artificial neural networks with sparse ones before training, reducing the number of parameters quadratically, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.
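
The prune-and-grow cycle can be sketched on a single weight matrix: start from an Erdős-Rényi connectivity mask, drop the smallest-magnitude active weights, and grow the same number of random new connections. Training itself is stubbed out here, and the sizes and fractions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def er_mask(n_in, n_out, density=0.1):
    """Erdős-Rényi sparse connectivity mask for one layer."""
    return rng.random((n_in, n_out)) < density

def evolve(weights, mask, prune_frac=0.3):
    """One prune-and-grow step: remove the smallest-magnitude active weights,
    then add the same number of connections at random inactive positions."""
    active = np.argwhere(mask)
    n_prune = int(prune_frac * len(active))
    order = np.argsort(np.abs(weights[mask]))   # matches argwhere's C order
    for idx in active[order[:n_prune]]:
        mask[tuple(idx)] = False
    inactive = np.argwhere(~mask)
    for idx in inactive[rng.choice(len(inactive), size=n_prune, replace=False)]:
        mask[tuple(idx)] = True
        weights[tuple(idx)] = rng.normal(scale=0.01)   # fresh small weight
    return weights, mask

W = rng.normal(size=(20, 10))
M = er_mask(20, 10)
n_active = int(M.sum())
W, M = evolve(W, M)
```

The step conserves the number of active connections, so the parameter budget stays fixed while the topology drifts, which is the mechanism the abstract describes.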

* Nature Communications, 2018 
* 18 pages 

Deep learning-based assessment of tumor-associated stroma for diagnosing breast cancer in histopathology images

Feb 19, 2017
Babak Ehteshami Bejnordi, Jimmy Linz, Ben Glass, Maeve Mullooly, Gretchen L Gierach, Mark E Sherman, Nico Karssemeijer, Jeroen van der Laak, Andrew H Beck

Diagnosis of breast carcinomas has so far been limited to the morphological interpretation of epithelial cells and the assessment of epithelial tissue architecture. Consequently, most of the automated systems have focused on characterizing the epithelial regions of the breast to detect cancer. In this paper, we propose a system for classification of hematoxylin and eosin (H&E) stained breast specimens based on convolutional neural networks that primarily targets the assessment of tumor-associated stroma to diagnose breast cancer patients. We evaluate the performance of our proposed system using a large cohort containing 646 breast tissue biopsies. Our evaluations show that the proposed system achieves an area under ROC of 0.92, demonstrating the discriminative power of previously neglected tumor-associated stroma as a diagnostic biomarker.

* 5 pages, 2 figures, ISBI 2017 Submission 

Minimum entropy production in multipartite processes due to neighborhood constraints

Jan 07, 2020
David H Wolpert

It is known that the minimal total entropy production (EP) generated during the discrete-time evolution of a composite system is nonzero if its subsystems are isolated from one another. Minimal EP is also nonzero if the subsystems jointly implement a specified Bayes net. Here I extend these discrete-time results to continuous time, allowing all subsystems to be simultaneously interacting. To do this I model the composite system as a multipartite process, subject to constraints on the overlaps among the "neighborhoods" of the subsystems' rate matrices. I derive two information-theoretic lower bounds on the minimal achievable EP rate, expressed in terms of those neighborhood overlaps. The first bound is based on applying the inclusion-exclusion principle to the neighborhood overlaps. The second is based on constructing counterfactual rate matrices, in which all subsystems outside of a particular neighborhood are held fixed while those inside the neighborhood are allowed to evolve. This second bound involves quantities related to the "learning rate" of stationary bipartite systems, or more generally to the "information flow".

* 11 pages 
