Models, code, and papers for "Biplav Srivastava":

Decision-support for the Masses by Enabling Conversations with Open Data

Sep 16, 2018
Biplav Srivastava

Open data refers to data that is freely available for reuse. Although there has been a rapid increase in the availability of open data to the public over the last decade, this has not translated into better decision-support tools for them. We propose intelligent conversation generators as a grand challenge: systems that would automatically create data-driven conversation interfaces (CIs), also known as chatbots or dialog systems, from open data and deliver personalized analytical insights to users based on their contextual needs. Such generators will not only help bring Artificial Intelligence (AI)-based solutions for important societal problems to the masses but also advance AI by providing an integrative testbed for human-centric AI and by filling gaps in the state of the art towards this aim.

* 6 pages. arXiv admin note: text overlap with arXiv:1803.09789 
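
A minimal sketch of the kind of generator output the challenge envisions: a conversation interface that answers templated questions directly from an open dataset. The file name, schema, and question form below are hypothetical, not from the paper.

    # Data-driven conversation interface over a hypothetical open CSV
    # dataset; the (city, pollutant, value) schema is an illustrative
    # assumption.
    import csv
    from collections import defaultdict

    def load_open_data(path):
        """Index an open CSV dataset by (city, pollutant)."""
        table = defaultdict(list)
        with open(path) as f:
            for row in csv.DictReader(f):
                table[(row["city"], row["pollutant"])].append(float(row["value"]))
        return table

    def answer(table, city, pollutant):
        """Return a templated analytical insight for a user's question."""
        values = table.get((city, pollutant))
        if not values:
            return f"Sorry, I have no {pollutant} readings for {city}."
        return (f"Across {len(values)} readings, the mean {pollutant} in "
                f"{city} is {sum(values) / len(values):.2f}.")

    # e.g. answer(load_open_data("air_quality.csv"), "Delhi", "PM2.5")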

On Chatbots Exhibiting Goal-Directed Autonomy in Dynamic Environments

Mar 26, 2018
Biplav Srivastava

Conversation interfaces (CIs), or chatbots, are a popular form of intelligent agents that engage humans in task-oriented or informal conversation. In this position paper and demonstration, we argue that chatbots working in dynamic environments, such as with sensor data, can not only serve as a promising platform for researching issues at the intersection of learning, reasoning, representation, and execution for goal-directed autonomy, but can also handle non-trivial business applications. We explore the underlying issues in the context of Water Advisor, a preliminary multi-modal conversation system that can access and explain water quality data.

* 3 pages 

Towards Composable Bias Rating of AI Services

Jul 31, 2018
Biplav Srivastava, Francesca Rossi

A new wave of decision-support systems is being built today using AI services that draw insights from data (like text and video) and incorporate them into human-in-the-loop assistance. However, just as we expect humans to be ethical, the same expectation needs to be met by the automated systems that are increasingly delegated to act on their behalf. A very important aspect of ethical behavior is to avoid (intended, perceived, or accidental) bias. Bias occurs when the data distribution is not representative enough of the natural phenomenon one wants to model and reason about. The possibly biased behavior of a service is hard to detect and handle if the AI service is merely being used rather than developed from scratch, since the training data set is not available. In this situation, we envisage a third-party rating agency that is independent of the API producer or consumer and has its own set of biased and unbiased data, with customizable distributions. We propose a 2-step rating approach that generates bias ratings signifying whether the AI service is unbiased compensating, data-sensitive biased, or biased. The approach also works on composite services. We implement it in the context of text translation and report interesting results.

* 7 pages, appeared in the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES 2018) 
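
A rough sketch of the black-box probing such a rating agency could perform. The score-gap logic and threshold are illustrative assumptions, and the label mapping only loosely mirrors the paper's rating categories.

    # Third-party, 2-step bias rating of a black-box AI service:
    # step 1 probes the service on the agency's own curated data,
    # step 2 maps the observed score gap to a rating. Thresholds and
    # labels are assumptions, not the paper's exact procedure.

    def rate_service(service, unbiased_set, biased_set, score, tol=0.05):
        s_unbiased = sum(score(service(x), y) for x, y in unbiased_set) / len(unbiased_set)
        s_biased = sum(score(service(x), y) for x, y in biased_set) / len(biased_set)
        gap = s_unbiased - s_biased
        if abs(gap) <= tol:
            return "unbiased"               # insensitive to the skewed data
        if gap > tol:
            return "data-sensitive biased"  # quality degrades on skewed input
        return "compensating"               # over-corrects on skewed input

Because the probe treats the service as a black box, the same procedure can in principle be run end-to-end over a composition of services.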

Estimating Train Delays in a Large Rail Network Using a Zero Shot Markov Model

Jun 07, 2018
Ramashish Gaurav, Biplav Srivastava

India runs the world's fourth-largest railway transport network by size, carrying over 8 billion passengers per year. However, the travel experience of passengers is frequently marred by delays, i.e., the late arrival of trains at stations, causing inconvenience. In a first, we study the systemic delays in train arrivals using n-order Markov frameworks and experiment with two regression-based models. Using train running-status data collected over two years, we report an efficient algorithm for estimating delays at railway stations with near-accurate results. This work can help the railways manage their resources, while also helping passengers and the businesses served by them plan their activities efficiently.

* 9 pages 
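
A simplified first-order version of the Markov idea, to make the zero-shot aspect concrete: delay statistics are keyed by station and upstream delay, never by train identity, so the model applies to trains it has not seen. The field names and 15-minute bucketing are assumptions.

    # First-order Markov delay estimator (the paper uses n-order
    # frameworks and regression models; this is a minimal sketch).
    from collections import defaultdict

    def fit(journeys):
        """journeys: list of runs, each a list of (station, delay_minutes).
        Learns the mean delay at a station given the delay bucket observed
        at the previous station."""
        stats = defaultdict(lambda: [0.0, 0])
        for run in journeys:
            for (_, prev_delay), (station, delay) in zip(run, run[1:]):
                key = (station, prev_delay // 15)   # 15-minute delay buckets
                stats[key][0] += delay
                stats[key][1] += 1
        return {k: total / count for k, (total, count) in stats.items()}

    def predict(model, station, prev_delay, default=0.0):
        """Zero-shot use: the key ignores the train's identity."""
        return model.get((station, prev_delay // 15), default)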

A Train Status Assistant for Indian Railways

Sep 23, 2018
Himadri Mishra, Ramashish Gaurav, Biplav Srivastava

Trains are part and parcel of everyday life in countries with large, diverse, multi-lingual populations like India. Consequently, an assistant which can accurately predict and explain train delays will help people and businesses alike. We present a novel conversation agent which can engage with people about train status and inform them about delays at in-line stations. It is trained on past delay data from a subset of trains and generalizes to others.

* 2 pages, demonstration; keywords: chatbot, learning, train delay 

Workflow Complexity for Collaborative Interactions: Where are the Metrics? -- A Challenge

Sep 13, 2017
Kartik Talamadupula, Biplav Srivastava, Jeffrey O. Kephart

In this paper, we introduce the problem of denoting and deriving the complexity of workflows (plans, schedules) in collaborative, planner-assisted settings where humans and agents are trying to jointly solve a task. The interactions -- and hence the workflows that connect the human and the agents -- may differ according to the domain and the kind of agents. We adapt insights from prior work in human-agent teaming and workflow analysis to suggest metrics for workflow complexity. The main motivation behind this work is to highlight metrics for human comprehensibility of plans and schedules. The planning community has seen its fair share of work on the synthesis of plans that take diversity into account -- what value do such plans hold if their generation is not guided at least in part by metrics that reflect the ease of engaging with and using those plans?

* 4 pages, 1 figure, 1 table Appeared in the ICAPS 2017 UISP Workshop 
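
As one concrete starting point for the challenge, a candidate metric might combine the size of the workflow, its coupling, and the number of collaborating agents. The features and weights below are illustrative assumptions, not the paper's proposal.

    # One candidate workflow-complexity metric (illustrative only).

    def workflow_complexity(steps, dependencies, n_agents, w=(1.0, 2.0, 3.0)):
        """Score a workflow by its size, the dependency edges between
        steps, and the number of agents involved; higher scores suggest
        plans that are harder for a human to comprehend."""
        w_steps, w_deps, w_agents = w
        return (w_steps * len(steps)
                + w_deps * len(dependencies)
                + w_agents * n_agents)

    # e.g. workflow_complexity(["brief", "draft", "review"],
    #                          [("brief", "draft"), ("draft", "review")], 2)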

A Measure for Dialog Complexity and its Application in Streamlining Service Operations

Aug 04, 2017
Q. Vera Liao, Biplav Srivastava, Pavan Kapanipathi

Dialog is a natural modality for interaction between customers and businesses in the service industry. As customers call up the service provider, their interactions may be routine or extraordinary. We believe that these interactions, when seen as dialogs, can be analyzed to obtain a better understanding of customer needs and how to efficiently address them. We introduce the idea of a dialog complexity measure to characterize multi-party interactions, propose a general data-driven method to calculate it, use it to discover insights in public and enterprise dialog datasets, and demonstrate its beneficial usage in facilitating better handling of customer requests and evaluating service agents.

* 11 pages 
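
A toy illustration of a data-driven dialog complexity score over a transcript; the features (length, verbosity, speaker switches) and weights are assumptions, and the paper's method is more general.

    # Toy dialog-complexity score over (speaker, utterance) turns.

    def dialog_complexity(turns):
        if not turns:
            return 0.0
        n_turns = len(turns)
        avg_words = sum(len(u.split()) for _, u in turns) / n_turns
        switches = sum(a != b for (a, _), (b, _) in zip(turns, turns[1:]))
        return n_turns + 0.1 * avg_words + switches

    # e.g. dialog_complexity([("customer", "My router keeps dropping."),
    #                         ("agent", "Have you tried restarting it?")])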

Planning with Partial Preference Models

Jan 12, 2011
Tuan Nguyen, Minh Do, Alfonso Gerevini, Ivan Serina, Biplav Srivastava, Subbarao Kambhampati

Current work in planning with preferences assumes that the user's preference models are completely specified and aims to search for a single solution plan. In many real-world planning scenarios, however, the user may be unable to provide any information about her desired plans, or in some cases can only express partial preferences. In such situations, the planner has to present not just one plan but a set of plans to the user, with the hope that some of them are similar to the plan she prefers. We first propose different measures to capture the quality of plan sets suitable for such scenarios: domain-independent distance measures defined over plan elements (actions, states, causal links) if no knowledge of the user's preferences is given, and the Integrated Convex Preference measure in case the user's partial preferences are provided. We then investigate various heuristic approaches to find sets of plans according to these measures, and present empirical results demonstrating the promise of our approach.

* 38 pages, submitted to Artificial Intelligence Journal 
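
To make the distance measures concrete, here is a sketch of the action-based variant and the resulting plan-set diversity. The state- and causal-link-based variants substitute other plan elements, and the Jaccard form below is one common instantiation rather than the paper's exact definition.

    # Action-based plan distance and plan-set diversity (sketch).

    def action_distance(plan_a, plan_b):
        """1 minus the Jaccard similarity of the two plans' action sets."""
        a, b = set(plan_a), set(plan_b)
        if not (a | b):
            return 0.0
        return 1.0 - len(a & b) / len(a | b)

    def set_diversity(plans):
        """Average pairwise distance, useful when nothing is known about
        the user's preferences."""
        pairs = [(p, q) for i, p in enumerate(plans) for q in plans[i + 1:]]
        if not pairs:
            return 0.0
        return sum(action_distance(p, q) for p, q in pairs) / len(pairs)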

Tentacular Artificial Intelligence, and the Architecture Thereof, Introduced

Oct 14, 2018
Selmer Bringsjord, Naveen Sundar Govindarajulu, Atriya Sen, Matthew Peveler, Biplav Srivastava, Kartik Talamadupula

We briefly introduce herein a new form of distributed, multi-agent artificial intelligence, which we refer to as "tentacular." Tentacular AI is distinguished by six attributes, which among other things entail a capacity for reasoning and planning based in highly expressive calculi (logics), and which enlists subsidiary agents across distances circumscribed only by the reach of one or more given networks.

* 1st International Workshop on Architectures and Evaluation for Generality, Autonomy & Progress in AI (FAIM workshop), July 15, 2018, Stockholm, Sweden; held in conjunction with IJCAI-ECAI 2018, AAMAS 2018, and ICML 2018 

Towards Cognitive-and-Immersive Systems: Experiments in a Shared (or common) Blockworld Framework

Sep 14, 2017
Matthew Peveler, Biplav Srivastava, Kartik Talamadupula, Naveen Sundar G., Selmer Bringsjord, Hui Su

As computational power has continued to increase, and sensors have become more accurate, the corresponding advent of systems that are cognitive-and-immersive (CAI) has come to pass. CAI systems fall squarely into the intersection of AI with HCI/HRI: such systems interact with and assist the human agents that enter them, in no small part because such systems are infused with AI able to understand and reason about these humans and their beliefs, goals, and plans. We herein explain our approach to engineering CAI systems. We emphasize the capacity of a CAI system to develop and reason over a "theory of the mind" of its human partners. This capacity means that the AI in question has a sophisticated model of the beliefs, knowledge, goals, desires, emotions, etc. of these humans. To accomplish this engineering, a formal framework of very high expressivity is needed. In our case, this framework is a cognitive event calculus, a particular kind of quantified multi-modal logic, and a matching high-expressivity planner. To explain, advance, and to a degree validate our approach, we show that a calculus of this type can enable a CAI system to understand a psychologically tricky scenario couched in what we call the cognitive blockworld framework (CBF). CBF includes machinery able to represent and plan over not merely blocks and actions, but also agents and their mental attitudes about other agents.

* Submitted to IAAI'18 
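
To give a flavor of the kind of formula such a calculus manipulates, a nested-belief (theory-of-the-mind) statement might be written as below; the notation is illustrative of quantified multi-modal logics generally, not the authors' exact syntax.

    % Illustrative nested belief: at time t, the system believes that
    % the human believes block 1 is on block 2 (possibly falsely).
    \mathbf{B}(\mathit{system}, t,\; \mathbf{B}(\mathit{human}, t,\;
        \mathit{On}(\mathit{block}_1, \mathit{block}_2)))

Planning over such formulas lets the system act on what an agent believes, even when that belief conflicts with the sensed state.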

Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering

Nov 09, 2018
Bryant Chen, Wilka Carvalho, Nathalie Baracaldo, Heiko Ludwig, Benjamin Edwards, Taesung Lee, Ian Molloy, Biplav Srivastava

While machine learning (ML) models are increasingly trusted to make decisions in different and varying areas, the safety of systems using such models has become an increasing concern. In particular, ML models are often trained on data from potentially untrustworthy sources, providing adversaries with the opportunity to manipulate them by inserting carefully crafted samples into the training set. Recent work has shown that this type of attack, called a poisoning attack, allows adversaries to insert backdoors or trojans into the model, enabling malicious behavior with simple external backdoor triggers at inference time and only a blackbox perspective of the model itself. Detecting this type of attack is challenging because the unexpected behavior occurs only when a backdoor trigger, which is known only to the adversary, is present. Model users, whether direct users of training data or users of pre-trained models from a catalog, may be unable to guarantee the safe operation of their ML-based systems. In this paper, we propose a novel approach to backdoor detection and removal for neural networks. Through extensive experimental results, we demonstrate its effectiveness for neural networks classifying text and images. To the best of our knowledge, this is the first methodology capable of detecting poisonous data crafted to insert backdoors and repairing the model, without requiring a verified and trusted dataset.
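
A condensed sketch of the activation-clustering idea: cluster the penultimate-layer activations of all training samples assigned to one class into two groups, and treat the smaller cluster as candidate poison. The PCA step, cluster count, and "smaller cluster" rule follow the general approach but are not the paper's exact pipeline.

    # Activation clustering for backdoor detection (sketch).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    def suspect_indices(activations, labels, target_class, dims=10):
        """activations: penultimate-layer activations, one row per
        training sample; returns indices of suspected poisoned samples
        for the given class."""
        idx = np.where(labels == target_class)[0]
        reduced = PCA(n_components=dims).fit_transform(activations[idx])
        assignments = KMeans(n_clusters=2, n_init=10).fit_predict(reduced)
        smaller = int(np.argmin(np.bincount(assignments)))
        return idx[assignments == smaller]   # candidates for removal/retraining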


Visualizations for an Explainable Planning Agent

Feb 08, 2018
Tathagata Chakraborti, Kshitij P. Fadnis, Kartik Talamadupula, Mishal Dholakia, Biplav Srivastava, Jeffrey O. Kephart, Rachel K. E. Bellamy

In this paper, we report on the visualization capabilities of an Explainable AI Planning (XAIP) agent that can support human-in-the-loop decision making. Imposing transparency and explainability requirements on such agents is especially important in order to establish trust and common ground with the end-to-end automated planning system. Visualizing the agent's internal decision-making processes is a crucial step towards achieving this. This may include externalizing the "brain" of the agent -- starting from its sensory inputs, to the progressively higher-order decisions it makes in order to drive its planning components. We also show how the planner can bootstrap on the latest techniques in explainable planning to cast plan visualization as a plan explanation problem, and thus provide concise, model-based visualizations of its plans. We demonstrate these functionalities in the context of the automated planning components of a smart assistant in an instrumented meeting space.

* Previously titled "Mr. Jones -- Towards a Proactive Smart Room Orchestrator" (appeared in the AAAI 2017 Fall Symposium on Human-Agent Groups) 

Trusted Multi-Party Computation and Verifiable Simulations: A Scalable Blockchain Approach

Sep 22, 2018
Ravi Kiran Raman, Roman Vaculin, Michael Hind, Sekou L. Remy, Eleftheria K. Pissadaki, Nelson Kibichii Bore, Roozbeh Daneshvar, Biplav Srivastava, Kush R. Varshney

Large-scale computational experiments, often running over weeks and over large datasets, are used extensively in fields such as epidemiology, meteorology, computational biology, and healthcare to understand phenomena and design high-stakes policies affecting everyday health and economy. For instance, the OpenMalaria framework is a computationally intensive simulation used by various non-governmental and governmental agencies to understand malarial disease spread and the effectiveness of intervention strategies, and subsequently design healthcare policies. Given that such shared results form the basis of inferences drawn, technological solutions designed, and day-to-day policies drafted, it is essential that the computations are validated and trusted. In particular, in a multi-agent environment involving several independent computing agents, a notion of trust in results generated by peers is critical in facilitating transparency, accountability, and collaboration. Using a novel combination of distributed validation of atomic computation blocks and a blockchain-based immutable audit mechanism, this work proposes a universal framework for distributed trust in computations. In particular, we address the scalability problem by reducing the storage and communication costs using a lossy compression scheme. This framework guarantees not only verifiability of final results, but also the validity of local computations, and its cost-benefit tradeoffs are studied using a synthetic example of training a neural network.

* 16 pages, 8 figures 
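
A toy sketch of the audit ingredient: each atomic computation block's (possibly lossily compressed) result digest is chained to the previous entry, so any retroactive edit is detectable on re-verification. This stands in for the paper's blockchain mechanism; the record format is an assumption.

    # Hash-chained audit trail over atomic computation blocks (sketch).
    import hashlib
    import json

    def append_block(chain, block_id, result_digest):
        """Link a block's result digest to the previous audit entry."""
        prev = chain[-1]["hash"] if chain else "genesis"
        record = {"id": block_id, "digest": result_digest, "prev": prev}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        chain.append(record)
        return chain

    def verify(chain):
        """A peer re-derives every link; any edited entry breaks the chain."""
        prev = "genesis"
        for rec in chain:
            body = {k: rec[k] for k in ("id", "digest", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != digest:
                return False
            prev = rec["hash"]
        return True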
