Models, code, and papers for "Renato A":

Basic concepts and tools for the Toki Pona minimal and constructed language: description of the language and main issues; analysis of the vocabulary; text synthesis and syntax highlighting; Wordnet synsets

Jul 04, 2018
Renato Fabbri

A minimal constructed language (conlang) is useful for experiments and comfortable for making tools. The Toki Pona (TP) conlang is minimal both in its vocabulary (about 124 lemmas, written with an alphabet of only 14 letters) and in its syntax (about 10 rules). The language is useful because it is an actively used and somewhat established minimal conlang with at least hundreds of fluent speakers. This article presents current concepts and resources for TP, and makes available Python (and Vim) scripted routines for the analysis of the language, the synthesis of texts, syntax highlighting schemes, and a preliminary TP Wordnet. The focus is on the analysis of the basic vocabulary, since corpus analyses were already found in the literature. The synthesis is based on sentence templates, relates to context by keeping track of used words, and renders larger texts by constraining the number of phonemes (e.g. for poems) or the number of sentences, words, and letters (e.g. for paragraphs). Syntax highlighting reflects the morphosyntactic classes given in the official dictionary; different solutions are described and implemented in the well-established Vim text editor. The tentative TP Wordnet is made available in three patterns of relations between synsets and word lemmas. In summary, this text holds potentially novel conceptualizations about, and tools and results in, analyzing, synthesizing, and syntax highlighting the TP language.

* Python and Vim scripts in this repository: https://github.com/ttm/prv/ 
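
As an illustration of the template-based synthesis described above, here is a minimal Python sketch. The word lists and templates are small illustrative samples, not the repository's actual data; the handling of "li" follows the standard TP rule that the separator is dropped after the subjects "mi" and "sina".

```python
# Minimal sketch of template-based Toki Pona sentence synthesis.
# Word lists are illustrative samples, not the repository's full lexicon.
import random

SUBJECTS = ["mi", "sina", "jan", "soweli"]
VERBS = ["moku", "lukin", "pali"]
OBJECTS = ["telo", "kili", "lipu"]

def sentence(subj, verb, obj=None):
    # Toki Pona drops the separator "li" after the subjects "mi" and "sina".
    parts = [subj] if subj in ("mi", "sina") else [subj, "li"]
    parts.append(verb)
    if obj is not None:
        parts += ["e", obj]
    return " ".join(parts) + "."

def synthesize(n, used=None):
    """Render n sentences, tracking used words to relate to context."""
    used = set() if used is None else used
    out = []
    for _ in range(n):
        # Prefer words not yet used, falling back to the full list.
        pick = lambda ws: random.choice([w for w in ws if w not in used] or ws)
        s = sentence(pick(SUBJECTS), pick(VERBS), pick(OBJECTS))
        used.update(s.rstrip(".").split())
        out.append(s)
    return out

print(" ".join(synthesize(3)))
```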

Enhancements of linked data expressiveness for ontologies

Oct 27, 2017
Renato Fabbri

The semantic web has received many research contributions in the form of ontologies, which, in this context (i.e. within RDF linked data), are formalized conceptualizations that may use different protocols, such as RDFS, OWL DL, and OWL Full. In this article, we describe new expressive techniques which we found necessary after elaborating dozens of OWL ontologies for academia, the State, and civil society. They consist of: 1) stating the possible uses a property might have without incurring axioms or restrictions; 2) assigning a level of priority to an element (class, property, triple); 3) correctly depicting in diagrams the relations between classes, the imperative relations between individuals, and the optional relations between individuals; 4) a convenient association between OWL classes and SKOS concepts. We propose specific rules to accomplish these enhancements and exemplify both their use and the difficulties that arise because these techniques are currently not established as standards for the ontology designer.

* Anais do XX ENMC - Encontro Nacional de Modelagem Computacional e VIII ECTM - Encontro de Ciências e Tecnologia de Materiais, Nova Friburgo, RJ, October 16-19, 2017
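
Techniques 1, 2, and 4 lend themselves to a compact sketch in RDF. Below is a minimal, hypothetical rdflib rendering of the idea; the ex: vocabulary (suggestedDomain, priorityLevel) is an illustration invented for this sketch, not a published standard.

```python
# Sketch of: (1) suggesting uses of a property without domain/range axioms,
# (2) priority annotations, (4) associating an OWL class with a SKOS concept.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import OWL, RDF, RDFS, SKOS

EX = Namespace("http://example.org/onto#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# An object property whose possible uses are only *suggested*:
g.add((EX.advises, RDF.type, OWL.ObjectProperty))
g.add((EX.suggestedDomain, RDF.type, OWL.AnnotationProperty))
g.add((EX.advises, EX.suggestedDomain, EX.Person))  # no rdfs:domain axiom

# Priority level as a plain annotation on a class:
g.add((EX.Person, RDF.type, OWL.Class))
g.add((EX.Person, EX.priorityLevel, Literal(1)))

# Associating the OWL class with a SKOS concept:
g.add((EX.PersonConcept, RDF.type, SKOS.Concept))
g.add((EX.Person, RDFS.seeAlso, EX.PersonConcept))

print(g.serialize(format="turtle"))
```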

A Region Based Easy Path Wavelet Transform For Sparse Image Representation

Feb 15, 2017
Renato Budinich

The Easy Path Wavelet Transform is an adaptive transform for bivariate functions (in particular natural images) which has been proposed in [1]. It provides a sparse representation by finding a path in the domain of the function that leverages the local correlations of the function values. It then applies a one-dimensional wavelet transform to the obtained vector, decimates the points, and iterates the procedure. The main drawback of this method is the need to store, for each level of the transform, the path which vectorizes the two-dimensional data. Here we propose a variation on the method which consists of first applying a segmentation procedure to the function domain, partitioning it into regions where the variation in the function values is low; in a second step, inside each such region, a path is found in some deterministic way, i.e. one that is not data-dependent. This circumvents the need to store the paths at each level, while still obtaining good quality lossy compression. The method is particularly well suited to encoding a Region of Interest in the image with different quality than the rest of the image. [1] Gerlind Plonka. The easy path wavelet transform: A new adaptive wavelet transform for sparse representation of two-dimensional data. Multiscale Modeling & Simulation, 7(3):1474-1496, 2008.

* Fixed use of HaarPSI - see Figure 9 
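
To make the two-step idea concrete, here is a minimal Python sketch assuming PyWavelets. A crude threshold segmentation stands in for a proper segmentation procedure, and the deterministic path is simply the row-major order of each region's pixels, so no path needs to be stored.

```python
# Sketch: segment the domain, traverse each region along a fixed
# (data-independent) raster path, apply a 1D wavelet transform per region.
import numpy as np
import pywt

img = np.random.rand(64, 64)

# Crude two-region segmentation by thresholding the function values.
labels = (img > img.mean()).astype(int)

coeffs_per_region = {}
for r in (0, 1):
    # Deterministic path: row-major order of the region's pixels,
    # so the path is implicit and never stored.
    path_values = img[labels == r]          # 1D vector, row-major order
    coeffs_per_region[r] = pywt.wavedec(path_values, "haar", level=3)

print({r: len(c) for r, c in coeffs_per_region.items()})  # coeff arrays
```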

A Tree-based Dictionary Learning Framework

Sep 07, 2019
Renato Budinich, Gerlind Plonka

We propose a new outline for dictionary learning methods based on a hierarchical clustering of the training data. Through recursive application of a clustering method, the data is organized into a binary partition tree representing a multiscale structure. The dictionary atoms are defined adaptively based on the data clusters in the partition tree. This approach can be interpreted as a generalization of the wavelet transform. The computational bottleneck of the procedure is then the chosen clustering method: when using K-means, the method runs much faster than K-SVD. Thanks to the multiscale properties of the partition tree, our dictionary is structured: when using Orthogonal Matching Pursuit to reconstruct patches from a natural image, dictionary atoms corresponding to nodes closer to the root of the tree tend to be used with greater coefficients.
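
The recursive-clustering skeleton can be sketched as follows; this is an illustrative simplification (2-means splits, centroids as atoms), not the authors' exact construction.

```python
# Sketch: recursively 2-means-cluster the training patches and take
# normalized cluster centroids as dictionary atoms, so atoms near the
# root capture coarser structure.
import numpy as np
from sklearn.cluster import KMeans

def tree_dictionary(X, depth):
    """X: (n_samples, n_features). Returns a list of atom vectors."""
    atoms = [X.mean(axis=0)]                   # this node's atom: the mean
    if depth == 0 or len(X) < 2:
        return atoms
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
    for k in (0, 1):                           # recurse into both children
        atoms += tree_dictionary(X[labels == k], depth - 1)
    return atoms

patches = np.random.rand(500, 64)              # e.g. 8x8 image patches
D = np.array(tree_dictionary(patches, depth=3))
D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm atoms
print(D.shape)
```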

3D Reconstruction of Incomplete Archaeological Objects Using a Generative Adversarial Network

Mar 10, 2018
Renato Hermoza, Ivan Sipiran

We introduce a data-driven approach to aid the repairing and conservation of archaeological objects: ORGAN, an object reconstruction generative adversarial network (GAN). By using an encoder-decoder 3D deep neural network on a GAN architecture, and combining two loss objectives, a completion loss and an Improved Wasserstein GAN loss, we can train a network to effectively predict the missing geometry of damaged objects. As archaeological objects can differ greatly from one another, the network is conditioned on a variable, which can be a culture, a region, or any metadata of the object. In our results, we show that our method can recover most of the information from damaged objects, even in cases where more than half of the voxels are missing, without producing many errors.

* 6 pages, 10 figures 
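
The combination of the two loss objectives can be sketched as below. The mean-squared completion term, the weighting coefficient `alpha`, and the shapes are illustrative assumptions, and the gradient-penalty term of the Improved WGAN (applied on the critic side) is omitted.

```python
# Sketch of the generator objective: completion (reconstruction) loss plus
# a Wasserstein adversarial term. `alpha` is a hypothetical weight.
import numpy as np

def generator_loss(pred_voxels, true_voxels, critic_scores, alpha=100.0):
    completion = np.mean((pred_voxels - true_voxels) ** 2)  # reconstruction
    adversarial = -np.mean(critic_scores)   # push critic scores up on fakes
    return alpha * completion + adversarial

pred = np.random.rand(2, 32, 32, 32)        # generated voxel grids
true = np.random.rand(2, 32, 32, 32)        # ground-truth voxel grids
print(generator_loss(pred, true, critic_scores=np.array([0.3, -0.1])))
```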

DeepArchitect: Automatically Designing and Training Deep Architectures

Apr 28, 2017
Renato Negrinho, Geoff Gordon

In deep learning, performance is strongly affected by the choice of architecture and hyperparameters. While there has been extensive work on automatic hyperparameter optimization for simple spaces, complex spaces such as the space of deep architectures remain largely unexplored. As a result, the choice of architecture is done manually by the human expert through a slow trial and error process guided mainly by intuition. In this paper we describe a framework for automatically designing and training deep models. We propose an extensible and modular language that allows the human expert to compactly represent complex search spaces over architectures and their hyperparameters. The resulting search spaces are tree-structured and therefore easy to traverse. Models can be automatically compiled to computational graphs once values for all hyperparameters have been chosen. We can leverage the structure of the search space to introduce different model search algorithms, such as random search, Monte Carlo tree search (MCTS), and sequential model-based optimization (SMBO). We present experiments comparing the different algorithms on CIFAR-10 and show that MCTS and SMBO outperform random search. In addition, these experiments show that our framework can be used effectively for model discovery, as it is possible to describe expressive search spaces and discover competitive models without much effort from the human expert. Code for our framework and experiments has been made publicly available.

* 12 pages, 10 figures. Code available at https://github.com/negrinho/deep_architect. See http://www.cs.cmu.edu/~negrinho/ for more info. In submission to ICCV 2017 
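
The flavor of a tree-structured search space can be sketched with a toy grammar and random search; the space below is a hypothetical stand-in for the paper's modular language (see the linked deep_architect repository for the real one).

```python
# Toy tree-structured search space plus random search over it.
import random

SPACE = {
    "n_layers": [2, 3, 4],
    "layer": {"type": ["conv3x3", "conv5x5"], "channels": [32, 64, 128]},
    "optimizer": ["adam", "sgd"],
}

def sample(node):
    """Depth-first traversal, choosing one value per hyperparameter."""
    if isinstance(node, dict):
        return {k: sample(v) for k, v in node.items()}
    return random.choice(node)

def sample_architecture():
    n = random.choice(SPACE["n_layers"])      # branching choice first...
    return {"layers": [sample(SPACE["layer"]) for _ in range(n)],
            "optimizer": random.choice(SPACE["optimizer"])}

print(sample_architecture())
```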

A survey on feature weighting based K-Means algorithms

Sep 22, 2015
Renato Cordeiro de Amorim

In a real-world data set there is always the possibility, rather high in our opinion, that different features may have different degrees of relevance. Most machine learning algorithms deal with this fact by either selecting or deselecting features in the data preprocessing phase. However, we maintain that even among relevant features there may be different degrees of relevance, and this should be taken into account during the clustering process. With over 50 years of history, K-Means is arguably the most popular partitional clustering algorithm. The first K-Means based clustering algorithm to compute feature weights was designed just over 30 years ago. Various such algorithms have been designed since, but there has not been, to our knowledge, a survey integrating empirical evidence of cluster recovery ability, common flaws, and possible directions for future research. This paper elaborates on the concept of feature weighting and addresses these issues by critically analysing some of the most popular, or innovative, feature weighting mechanisms based on K-Means.

* Journal of Classification (to appear) 
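
The core idea the surveyed algorithms share is a weighted dissimilarity of the form $d(x, c) = \sum_j w_j^\beta (x_j - c_j)^2$, sketched below; the particular weights and exponent $\beta$ are illustrative.

```python
# Sketch of the assignment step of feature-weighted K-Means.
import numpy as np

def weighted_assign(X, centroids, w, beta=2.0):
    """Assign each point to the centroid minimizing the weighted distance."""
    # (n, k, d) array of per-feature squared differences:
    diff = (X[:, None, :] - centroids[None, :, :]) ** 2
    d = (w ** beta * diff).sum(axis=2)
    return d.argmin(axis=1)

X = np.random.rand(100, 4)
centroids = X[np.random.choice(len(X), 3, replace=False)]
w = np.array([0.4, 0.3, 0.2, 0.1])   # feature weights, summing to 1
print(weighted_assign(X, centroids, w)[:10])
```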

Reinforcement Learning for the Soccer Dribbling Task

May 28, 2013
Arthur Carvalho, Renato Oliveira

We propose a reinforcement learning solution to the \emph{soccer dribbling task}, a scenario in which a soccer agent has to go from the beginning to the end of a region keeping possession of the ball, as an adversary attempts to gain possession. While the adversary uses a stationary policy, the dribbler learns the best action to take at each decision point. After defining meaningful variables to represent the state space, and high-level macro-actions to incorporate domain knowledge, we describe our application of the reinforcement learning algorithm \emph{Sarsa} with CMAC for function approximation. Our experiments show that, after the training period, the dribbler is able to accomplish its task against a strong adversary around 58% of the time.
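
For reference, the Sarsa update at the heart of the method looks as follows in tabular form (the paper pairs Sarsa with CMAC tile coding for function approximation rather than a table); the states, actions, and constants here are placeholders.

```python
# Sketch of the on-policy Sarsa update Q(s,a) += alpha*[r + gamma*Q(s',a') - Q(s,a)].
from collections import defaultdict

Q = defaultdict(float)          # Q[(state, action)] -> value
alpha, gamma = 0.1, 0.99        # step size and discount, illustrative values

def sarsa_update(s, a, r, s_next, a_next):
    """One Sarsa backup for the transition (s, a, r, s', a')."""
    Q[(s, a)] += alpha * (r + gamma * Q[(s_next, a_next)] - Q[(s, a)])

sarsa_update("near_goal", "dash", 1.0, "goal", "hold")
print(Q[("near_goal", "dash")])
```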

Visualization of High-dimensional Scalar Functions Using Principal Parameterizations

Sep 11, 2018
Rafael Ballester-Ripoll, Renato Pajarola

Insightful visualization of multidimensional scalar fields, in particular parameter spaces, is key to many fields in computational science and engineering. We propose a principal component-based approach to visualize such fields that accurately reflects their sensitivity to input parameters. The method performs dimensionality reduction on the vast $L^2$ Hilbert space formed by all possible partial functions (i.e., those defined by fixing one or more input parameters to specific values), which are projected to low-dimensional parameterized manifolds such as 3D curves, surfaces, and ensembles thereof. Our mapping provides a direct geometrical and visual interpretation in terms of Sobol's celebrated method for variance-based sensitivity analysis. We furthermore contribute a practical realization of the proposed method by means of tensor decomposition, which enables accurate yet interactive integration and multilinear principal component analysis of high-dimensional models.

Contextual Search via Intrinsic Volumes

May 17, 2018
Renato Paes Leme, Jon Schneider

We study the problem of contextual search, a multidimensional generalization of binary search that captures many problems in contextual decision-making. In contextual search, a learner is trying to learn the value of a hidden vector $v \in [0,1]^d$. Every round the learner is provided an adversarially-chosen context $u_t \in \mathbb{R}^d$, submits a guess $p_t$ for the value of $\langle u_t, v\rangle$, learns whether $p_t < \langle u_t, v\rangle$, and incurs loss $\ell(\langle u_t, v\rangle, p_t)$ (for some loss function $\ell$). The learner's goal is to minimize their total loss over the course of $T$ rounds. We present an algorithm for the contextual search problem for the symmetric loss function $\ell(\theta, p) = |\theta - p|$ that achieves $O_{d}(1)$ total loss. We present a new algorithm for the dynamic pricing problem (which can be realized as a special case of the contextual search problem) that achieves $O_{d}(\log \log T)$ total loss, improving on the previous best known upper bounds of $O_{d}(\log T)$ and matching the known lower bounds (up to a polynomial dependence on $d$). Both algorithms make significant use of ideas from the field of integral geometry, most notably the notion of intrinsic volumes of a convex set. To the best of our knowledge this is the first application of intrinsic volumes to algorithm design.
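
The interaction protocol can be sketched as below with a deliberately naive learner that never refines its guess; the paper's algorithms instead maintain a knowledge set (a convex body cut by each round's feedback hyperplane) and choose $p_t$ using its intrinsic volumes.

```python
# Sketch of the contextual search protocol with a naive fixed-center learner.
# A real algorithm would update a knowledge set from the sign feedback.
import numpy as np

d, T = 3, 50
v = np.random.rand(d)                 # hidden vector in [0,1]^d
center = np.full(d, 0.5)              # naive point estimate, never updated

total_loss = 0.0
for t in range(T):
    u = np.random.randn(d)            # context (adversarial in the paper)
    theta = u @ v                     # true value <u_t, v>
    p = u @ center                    # learner's guess for <u_t, v>
    total_loss += abs(theta - p)      # symmetric loss |theta - p|
    feedback = theta < p              # only this sign is revealed
print(f"total loss over {T} rounds: {total_loss:.3f}")
```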

A Simple Text Analytics Model To Assist Literary Criticism: comparative approach and example on James Joyce against Shakespeare and the Bible

Oct 24, 2017
Renato Fabbri, Luis Henrique Garcia

Literary analysis, criticism, or studies is a highly valued field with dedicated journals and researchers, which remains mostly within the scope of the humanities. Text analytics is the computer-aided process of deriving information from texts. In this article we describe a simple and generic model for performing literary analysis using text analytics. The method relies on statistical measures of: 1) token and sentence sizes and 2) Wordnet synset features. These measures are then used in Principal Component Analysis, where the texts to be analyzed are observed against Shakespeare and the Bible, regarded as reference literature. The model is validated by analyzing selected works from James Joyce (1882-1941), one of the most important writers of the 20th century. We discuss the consistency of this approach, the reasons why we did not use other techniques (e.g. part-of-speech tagging), and the ways in which the analysis model might be adapted and enhanced.

* Anais do XX ENMC - Encontro Nacional de Modelagem Computacional e VIII ECTM - Encontro de Ciências e Tecnologia de Materiais, Nova Friburgo, RJ, October 16-19, 2017
* Scripts and corpus in https://github.com/ttm/joyce 
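
A minimal sketch of the feature pipeline follows, covering only the token- and sentence-size statistics (the Wordnet synset features are omitted); the texts are tiny placeholders, and a real run would use the linked corpus.

```python
# Sketch: size-statistics features per text, then PCA over all texts.
import numpy as np
from sklearn.decomposition import PCA

def size_features(text):
    tokens = text.replace(".", " ").split()
    sentences = [s for s in text.split(".") if s.strip()]
    tok = [len(t) for t in tokens]            # token sizes in characters
    sen = [len(s.split()) for s in sentences] # sentence sizes in tokens
    return [np.mean(tok), np.std(tok), np.mean(sen), np.std(sen)]

texts = {
    "reference_a": "The quick brown fox jumps over the dog. It rests now.",
    "reference_b": "In the beginning was the word. The word was with them.",
    "analyzed":    "Stately plump Buck Mulligan came from the stairhead.",
}
X = np.array([size_features(t) for t in texts.values()])
coords = PCA(n_components=2).fit_transform(X)  # each text as a 2D point
print(dict(zip(texts, coords.round(3).tolist())))
```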

Electre Tri-Machine Learning Approach to the Record Linkage Problem

May 25, 2015
Renato De Leone, Valentina Minnetti

In this short paper, the Electre Tri machine learning method, generally used to solve ordinal classification problems, is proposed for solving the record linkage problem. Preliminary experimental results show that, using the Electre Tri method, high accuracy can be achieved, with more than 99% of the matches and nonmatches correctly identified by the procedure.

Recent advances in deep learning applied to skin cancer detection

Dec 06, 2019
Andre G. C. Pacheco, Renato A. Krohling

Skin cancer is a major public health problem around the world. Its early detection is very important to improve patient prognosis. However, the lack of qualified professionals and of medical instruments is a significant issue in this field. In this context, over the past few years, deep learning models applied to automated skin cancer detection have become a trend. In this paper, we present an overview of the recent advances reported in this field, as well as a discussion of the challenges and opportunities for improvement in current models. In addition, we present some important aspects regarding the use of these models in smartphones and indicate future directions we believe the field will take.

* Paper accepted in the Retrospectives Workshop @ NeurIPS 2019 

The impact of patient clinical information on automated skin cancer detection

Sep 16, 2019
Andre G. C. Pacheco, Renato A. Krohling

Skin cancer is one of the most common types of cancer around the world. For this reason, over the past years, different approaches have been proposed to assist in detecting it. Nonetheless, most of them are based only on dermoscopy images and do not take into account the patient's clinical information. In this work, first, we present a new dataset that contains clinical images, acquired from smartphones, and patient clinical information about the skin lesions. Next, we introduce a straightforward approach to combine the clinical data and the images using different well-known deep learning models. These models are applied to the presented dataset using only the images and combining them with the patient clinical information. We present a comprehensive study to show the impact of the clinical data on the final predictions. The results obtained by combining both sets of information show a general improvement of around 7% in the balanced accuracy for all models. In addition, statistical tests indicate significant differences between the models with and without the clinical data. The improvement achieved shows the potential of using patient clinical information in skin cancer detection and indicates that this piece of information is important for improving skin cancer detection systems.
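
The combination strategy can be sketched as feature-level fusion: image features from a CNN backbone concatenated with the clinical features before the classifier. Below is a minimal PyTorch sketch with illustrative layer sizes and a toy backbone standing in for the well-known models used in the paper.

```python
# Sketch: concatenate CNN image features with clinical metadata.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, n_clinical, n_classes, img_feat_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for e.g. a ResNet
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, img_feat_dim), nn.ReLU())
        self.classifier = nn.Linear(img_feat_dim + n_clinical, n_classes)

    def forward(self, image, clinical):
        feats = self.backbone(image)            # image features
        return self.classifier(torch.cat([feats, clinical], dim=1))

model = FusionNet(n_clinical=8, n_classes=6)
logits = model(torch.randn(4, 3, 64, 64), torch.randn(4, 8))
print(logits.shape)  # (4, 6)
```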

Neuroflight: Next Generation Flight Control Firmware

Jan 19, 2019
William Koch, Renato Mancuso, Azer Bestavros

Little innovation has been made to low-level attitude flight control used by unmanned aerial vehicles, which still predominantly uses the classical PID controller. In this work we introduce Neuroflight, the first open source neuro-flight controller firmware. We present our toolchain for training a neural network in simulation and compiling it to run on embedded hardware. Challenges faced jumping from simulation to reality are discussed along with our solutions. Our evaluation shows the neural network can execute at over 2.67kHz on an Arm Cortex-M7 processor and flight tests demonstrate a quadcopter running Neuroflight can achieve stable flight and execute aerobatic maneuvers.

Fleet Prognosis with Physics-informed Recurrent Neural Networks

Jan 16, 2019
Renato Giorgiani Nascimento, Felipe A. C. Viana

Services and warranties of large fleets of engineering assets are a very profitable business. The success of companies in that area is often related to predictive maintenance driven by advanced analytics. Therefore, accurate modeling, as a way to understand how the complex interactions between operating conditions and component capability define useful life, is key for services profitability. Unfortunately, building prognosis models for large fleets is a daunting task, as factors such as duty cycle variation, harsh environments, inadequate maintenance, and problems with mass production can lead to large discrepancies between designed and observed useful lives. This paper introduces a novel physics-informed neural network approach to prognosis by extending recurrent neural networks to cumulative damage models. We propose a new recurrent neural network cell designed to merge physics-informed and data-driven layers. With that, engineers and scientists have the chance to use physics-informed layers to model parts that are well understood (e.g., fatigue crack growth) and data-driven layers to model parts that are poorly characterized (e.g., internal loads). A simple numerical experiment is used to present the main features of the proposed physics-informed recurrent neural network for damage accumulation. The test problem consists of predicting fatigue crack length for a synthetic fleet of airplanes subject to different mission mixes. The model is trained using full observation inputs (far-field loads) and very limited observation of outputs (crack length at inspection for only a portion of the fleet). The results demonstrate that our proposed hybrid physics-informed recurrent neural network is able to accurately model fatigue crack growth even when the observed distribution of crack length does not match the (unobservable) fleet distribution.

* Data and codes (including our implementation for both the multi-layer perceptron, the stress intensity and Paris law layers, the cumulative damage cell, as well as python driver scripts) used in this manuscript are publicly available on GitHub at https://github.com/PML-UCF/pinn. The data and code are released under the MIT License 
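
A minimal sketch of the physics-informed part, assuming Paris-law crack growth $da/dN = C(\Delta K)^m$ with $\Delta K = \Delta\sigma\sqrt{\pi a}$ (the repository note above confirms Paris-law layers are used). The constants below are illustrative; the linked repository contains the actual implementation, including the data-driven layers.

```python
# Sketch of a cumulative-damage recurrent cell with a Paris-law physics layer.
import numpy as np

C, m = 1.5e-11, 3.8            # illustrative Paris-law constants

def paris_cell(a, ds):
    """One load cycle: a_{t+1} = a_t + C * (ds * sqrt(pi * a_t))**m."""
    dK = ds * np.sqrt(np.pi * a)   # stress intensity range
    return a + C * dK ** m

a = 0.005                                  # initial crack length [m]
for ds in np.full(10000, 80.0):            # far-field stress range [MPa]
    a = paris_cell(a, ds)                  # unrolled like an RNN over cycles
print(f"crack length after 10000 cycles: {a:.6f} m")
```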

An efficient density-based clustering algorithm using reverse nearest neighbour

Nov 19, 2018
Stiphen Chowdhury, Renato Cordeiro de Amorim

Density-based clustering is the task of discovering high-density regions of entities (clusters) that are separated from each other by contiguous regions of low density. DBSCAN is, arguably, the most popular density-based clustering algorithm. However, its cluster recovery capabilities depend on the combination of its two parameters. In this paper we present a new density-based clustering algorithm which uses reverse nearest neighbour (RNN) queries and has a single parameter. We also show that it is possible to estimate a good value for this parameter using a clustering validity index. The RNN queries enable our algorithm to estimate densities taking more than a single entity into account, and to recover clusters that are not well separated or have different densities. Our experiments on synthetic and real-world data sets show that our proposed algorithm outperforms DBSCAN and its recent variant ISDBSCAN.

* Accepted in: Computing Conference 2019 in London, UK. http://saiconference.com/Computing 
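
The reverse-nearest-neighbour density signal can be sketched with scikit-learn as below; the cluster-recovery logic built on top of these counts, and the parameter estimation via a validity index, are omitted.

```python
# Sketch: a point queried as a k-nearest neighbour by many other points
# lies in a dense region; tally those reverse-NN queries.
import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.random.rand(200, 2)
k = 5
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)  # +1: a point is its own NN
_, idx = nn.kneighbors(X)

rnn_counts = np.zeros(len(X), dtype=int)
for neighbours in idx[:, 1:]:                    # skip self-matches
    rnn_counts[neighbours] += 1                  # reverse-NN tally
dense = rnn_counts >= k                          # crude density flag
print(dense.sum(), "of", len(X), "points flagged dense")
```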

Ranking of classification algorithms in terms of mean-standard deviation using A-TOPSIS

Oct 22, 2016
Andre G. C. Pacheco, Renato A. Krohling

In classification problems, when multiple algorithms are applied to different benchmarks, a difficult issue arises: how can we rank the algorithms? In machine learning, it is common to run the algorithms several times and then calculate a statistic in terms of means and standard deviations. In order to compare the performance of the algorithms, it is very common to employ statistical tests. However, these tests may also present limitations, since they consider only the means and not the standard deviations of the obtained results. In this paper, we present the so-called A-TOPSIS, based on TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), to solve the problem of ranking and comparing classification algorithms in terms of means and standard deviations. We use two case studies to illustrate A-TOPSIS for ranking classification algorithms, and the results show the suitability of A-TOPSIS to rank the algorithms. The presented approach is general and can be applied to compare the performance of stochastic algorithms in machine learning. Finally, to encourage researchers to use A-TOPSIS for ranking algorithms, we also present an easy-to-use A-TOPSIS web framework.

* 16 pages, 8 figures and 11 tables 
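
The TOPSIS computation underlying A-TOPSIS can be sketched as follows, with a decision matrix whose criteria are mean accuracy (a benefit criterion) and standard deviation (a cost criterion); the numbers and weights are illustrative.

```python
# Sketch of standard TOPSIS over a mean/std decision matrix.
import numpy as np

# rows: algorithms; columns: [mean accuracy, standard deviation]
M = np.array([[0.91, 0.03],
              [0.89, 0.01],
              [0.92, 0.06]])
weights = np.array([0.6, 0.4])
benefit = np.array([True, False])     # maximize mean, minimize std

R = M / np.linalg.norm(M, axis=0)     # vector-normalize each criterion
V = R * weights
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)   # higher = better rank
print(np.argsort(-closeness))         # algorithm ranking, best first
```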

Recovering the number of clusters in data sets with noise features using feature rescaling factors

Feb 22, 2016
Renato Cordeiro de Amorim, Christian Hennig

In this paper we introduce three methods for rescaling data sets, aiming to improve the likelihood that clustering validity indexes return the true number of spherical Gaussian clusters in the presence of additional noise features. Our method obtains feature rescaling factors taking into account the structure of a given data set and the intuitive idea that different features may have different degrees of relevance at different clusters. We experiment with the Silhouette (using the squared Euclidean, Manhattan, and p-th power of the Minkowski distances), Dunn's, Calinski-Harabasz, and Hartigan indexes on data sets with spherical Gaussian clusters with and without noise features. We conclude that our methods indeed increase the chances of estimating the true number of clusters in a data set.

* Information Sciences 324 (2015), 126-145 
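
One way to read the rescaling idea is sketched below: estimate each feature's within-cluster dispersion from a preliminary clustering and use its inverse as the rescaling factor, so noise features shrink. The normalization and the K-Means preliminary step are simplifying assumptions, not the paper's exact scheme.

```python
# Sketch: rescale features by inverse within-cluster dispersion.
import numpy as np
from sklearn.cluster import KMeans

def rescale(X, k):
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
    disp = np.zeros(X.shape[1])
    for c in range(k):
        part = X[labels == c]
        disp += ((part - part.mean(axis=0)) ** 2).sum(axis=0)
    w = 1.0 / disp            # low-dispersion features get larger factors
    return X * (w / w.sum())

X = np.hstack([np.random.randn(150, 2), np.random.rand(150, 3)])  # 3 noise dims
X_rescaled = rescale(X, k=3)
print(X_rescaled.std(axis=0).round(4))   # per-feature spread after rescaling
```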

Efficient Hill-Climber for Multi-Objective Pseudo-Boolean Optimization

Jan 27, 2016
Francisco Chicano, Darrell Whitley, Renato Tinos

Local search algorithms and iterated local search algorithms are a basic technique. Local search can be used as a stand-alone search method, but it can also be hybridized with evolutionary algorithms. Recently, it has been shown that it is possible to identify improving moves in Hamming neighborhoods for k-bounded pseudo-Boolean optimization problems in constant time. This means that local search does not need to enumerate neighborhoods to find improving moves. It also means that evolutionary algorithms do not need to use random mutation as an operator, except perhaps as a way to escape local optima. In this paper, we show how improving moves can be identified in constant time for multiobjective problems that are expressed as k-bounded pseudo-Boolean functions. In particular, multiobjective forms of NK Landscapes and Mk Landscapes are considered.

* Paper accepted for publication in the 16th European Conference on Evolutionary Computation for Combinatorial Optimisation (EvoCOP 2016) 
