Publications

For several papers (clearly marked in the CV), the names of authors are sorted alphabetically and the order does not reflect contribution. Such practices are very common in theoretical areas of computer science. Here is the 2004 statement by the AMS (American Mathematical Society) that explains the prevalence of this culture in mathematical areas.

Book Chapter

2021
Approximate Model Counting
Supratik Chakraborty, Kuldeep S. Meel, and Moshe Y. Vardi
Chapter 26 of Handbook of Satisfiability, 2nd Edition

Theses

2017
Constrained Counting and Sampling: Bridging the Gap between Theory and Practice
PhD Thesis, Rice University, September 2017
Winner of 2018 Ralph Budd Award for Best Engineering Thesis
Honorable mention, 2018 ACP Doctoral Dissertation Award
2014
Sampling Techniques for Boolean Satisfiability
Master's Thesis, Rice University, April 2014
Winner of the 2014 VCLA (Vienna Center for Logic and Algorithms) Outstanding Master's Thesis Award

Tutorials

2021
Logic-Enabled Verification and Explanation of ML Models
Co-presented with Alexey Ignatiev, Joao Marques-Silva, and Nina Narodytska
International Joint Conference on Artificial Intelligence (IJCAI 2020, held in Jan 2021)
2020
Rigorous Verification and Explanation of ML Models
Co-presented with Alexey Ignatiev, Joao Marques-Silva, and Nina Narodytska
AAAI Conference on Artificial Intelligence (AAAI 2020)
2018
Scaling Discrete Integration and Sampling: Foundations and Challenges
Co-presented with Supratik Chakraborty
International Joint Conference on Artificial Intelligence (IJCAI 2018)
2017
Discrete Sampling and Integration for the AI Practitioner
Co-presented with Supratik Chakraborty and Moshe Y. Vardi
AAAI Conference on Artificial Intelligence (AAAI 2017)
2016
Discrete Sampling and Integration in High Dimensional Spaces
Co-presented with Supratik Chakraborty and Moshe Y. Vardi
Conference on Uncertainty in Artificial Intelligence (UAI 2016)

Conference Papers (Refereed and Archived)

2021
[c66]
Engineering an Efficient Boolean Functional Synthesis Engine
Priyanka Golia, Friedrich Slivovsky, Subhajit Roy, and Kuldeep S. Meel
Proceedings of IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2021.
Given a Boolean specification between a set of inputs and outputs, the problem of Boolean functional synthesis is to synthesise each output as a function of the inputs such that the specification is met. Although the past few years have witnessed intense algorithmic development, accomplishing scalability remains the holy grail. The state-of-the-art approach combines machine learning and automated reasoning to synthesise Boolean functions efficiently. In this paper, we propose four algorithmic improvements for a data-driven framework for functional synthesis: using a dependency-driven multi-classifier to learn candidate functions, extracting uniquely defined functions by interpolation, variable retention, and using lexicographic MaxSAT to repair candidates. We integrate these improvements into the state-of-the-art framework, called Manthan. The resulting framework, Manthan2, shows significantly improved runtime performance compared to Manthan. In an extensive experimental evaluation on 609 benchmarks, Manthan2 is able to synthesise a Boolean function vector for 509 instances compared to 356 instances solved by Manthan, an increment of 153 instances over the state of the art. To put this into perspective, Manthan improved on the prior state of the art by only 76 instances.
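To make the problem statement concrete, here is a tiny illustrative example of our own (not from the paper): for the specification $F(x_1, x_2, y) = (y \leftrightarrow x_1 \oplus x_2)$, functional synthesis must produce $y = f(x_1, x_2) = x_1 \oplus x_2$. In general, whenever $\forall X \exists Y\, F(X, Y)$ holds, any $f$ satisfying $\forall X\, F(X, f(X))$ is a valid synthesis result.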
[c65]
Gaussian Elimination meets Maximum Satisfiability
Mate Soos and Kuldeep S. Meel
Proceedings of International Conference on Knowledge Representation and Reasoning (KR), 2021.
Given a set of constraints $F$ and a weight function $W$ over the assignments, the problem of MaxSAT is to compute a maximum weighted solution of $F$. MaxSAT is a fundamental problem with applications in numerous areas. The success of MaxSAT solvers has prompted researchers in the AI and formal methods communities to develop algorithms that can use a MaxSAT solver as an oracle. One such problem that stands to benefit from advances in MaxSAT solving is discrete integration. Recently, Ermon et al. achieved a significant breakthrough by reducing the problem of integration to polynomially many queries to an optimization oracle where $F$ is conjuncted with randomly chosen XOR constraints. Unlike approximate model counting, where hashing-based approaches have been able to achieve scalability as well as rigorous formal guarantees, practical evaluations of Ermon et al.'s approach, called WISH, often sacrifice theoretical guarantees, largely due to the lack of existing MaxSAT solvers with native XOR support. The primary contribution of this paper is a new MaxSAT solver, GaussMaxHS, with built-in XOR support. The architecture of GaussMaxHS is inspired by CryptoMiniSat, which has been the workhorse of hashing-based approximate model counting techniques. GaussMaxHS outperforms MaxHS over 9628 benchmarks arising from spin glass models and network reliability domains. In particular, with a timeout of 5000 seconds, MaxHS could solve only 5473 benchmarks while GaussMaxHS could solve 6120.
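As background on how native XOR support typically works, here is a minimal Python sketch (our illustration, not GaussMaxHS's actual implementation): Gaussian elimination over GF(2), where each XOR constraint is a row of a linear system and a conflict surfaces as a row reduced to 0 = 1.

```python
# A minimal sketch (our illustration) of Gaussian elimination over GF(2):
# each XOR constraint is a pair (mask, rhs), where bit j of mask means
# variable x_j appears in the XOR and rhs is the required parity.

def gauss_gf2(rows):
    """Maintain a row basis keyed by pivot bit; return None if 0 = 1 is derived."""
    basis = {}  # pivot bit -> (mask, rhs)
    for mask, rhs in rows:
        while mask:
            pivot = mask.bit_length() - 1      # highest set bit of the row
            if pivot not in basis:
                basis[pivot] = (mask, rhs)     # row is linearly independent
                break
            bmask, brhs = basis[pivot]
            mask ^= bmask                      # eliminate the pivot variable
            rhs ^= brhs
        else:
            if rhs == 1:                       # row reduced to 0 = 1
                return None                    # the XOR system is unsatisfiable
    return basis

# x1^x2 = 1, x2^x3 = 0, x1^x3 = 0: the first two imply x1^x3 = 1, a conflict.
print(gauss_gf2([(0b011, 1), (0b110, 0), (0b101, 0)]))  # None
```

In solvers with native XOR support, such as CryptoMiniSat, a variant of this elimination runs incrementally during search, propagating and detecting conflicts on the XOR rows rather than as a one-shot batch pass.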
[c64]
Designing Samplers is Easy: The Boon of Testers
Priyanka Golia, Mate Soos, Sourav Chakraborty, and Kuldeep S. Meel
Proceedings of Formal Methods in Computer-Aided Design (FMCAD), 2021.
Given a formula F, the problem of uniform sampling seeks to sample solutions of F uniformly at random. Uniform sampling is a fundamental problem with a wide variety of applications. The computational intractability of uniform sampling has led to the development of several samplers that are heuristic and not accompanied by theoretical analysis of their distributions. Recently, Chakraborty and Meel (2019) designed the first scalable sampling tester, Barbarik, based on a grey-box sampling technique, for testing whether the distribution from which a given sampler samples is close to or far from the uniform distribution. While the theoretical analysis of Barbarik provides only unconditional soundness guarantees, the empirical evaluation of Barbarik did show its success in determining that some off-the-shelf samplers were far from a uniform sampler. The availability of Barbarik has the potential to spur the development of samplers and testing techniques such that developers can design sampling methods that are accepted by Barbarik even though these samplers may not be amenable to a detailed mathematical analysis. In this paper, we present the realization of this promise. Based on the flexibility offered by CryptoMiniSat, we design a sampler, CMSGen, that hits the sweet spot between quality of distribution and runtime performance. In particular, CMSGen achieves significant runtime performance improvement over existing samplers. We conduct two case studies and demonstrate that the usage of CMSGen leads to significant runtime improvements in the context of combinatorial testing and functional synthesis. A salient strength of our work is the simplicity of CMSGen, which stands in contrast to the complicated algorithmic schemes developed in the past that fail to attain the desired quality of distributions with practical runtime performance.
[c63]
Engineering an Efficient PB-XOR Solver
Jiong Yang and Kuldeep S. Meel
Proceedings of International Conference on Constraint Programming (CP), 2021.
Model counting is a fundamental problem in computer science with a wide range of applications, such as quantitative property verification, probabilistic reasoning, and network reliability. Given a formula F, the problem of model counting is to compute the number of solutions of F. The formula F can be a CNF formula, a Pseudo-Boolean (PB) formula, or in any other form. The past few years have witnessed the rise of hashing-based approximate model counting over CNF formulas on top of CNF-XOR solving. For lack of PB-XOR solving techniques, the existing approach for PB formulas trivially encodes PB constraints into CNF and applies CNF counting techniques, failing to utilize the more powerful pseudo-Boolean reasoning. In this paper, we address PB-XOR solving and present the first approximate model counter over pseudo-Boolean constraints to leverage the power of PB reasoning, building on recent improvements in PB solving. We engineer an efficient counter, ApproxMC-PB, equipping the PB solver RoundingSat with Gauss-Jordan-elimination-based tactics to bridge the gap between PB and XOR constraints. Extensive experiments on quantitative verification of binarized neural network properties reveal that ApproxMC-PB significantly outperforms state-of-the-art counters on pseudo-Boolean benchmarks, and there is ample room for optimizing the interaction between PB and XOR constraints.
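For readers unfamiliar with the hashing-based counting paradigm this line of work builds on, the sketch below is our simplification: a brute-force bounded_count stands in for the SAT/PB oracle, the XOR density is the classical p = 1/2, and real counters such as ApproxMC add median amplification over repeated rounds and careful threshold selection. The core loop conjoins random XOR constraints until the residual solution count becomes small, then scales it back up.

```python
import random
from itertools import product

def bounded_count(formula, n, xors, thresh):
    """Count solutions of formula under the XOR constraints, stopping past thresh.
    A toy stand-in for the solver oracle used by real hashing-based counters."""
    count = 0
    for assignment in product([0, 1], repeat=n):
        if formula(assignment) and all(
            sum(assignment[v] for v in xor_vars) % 2 == parity
            for xor_vars, parity in xors
        ):
            count += 1
            if count > thresh:
                return count
    return count

def approx_count(formula, n, thresh=8):
    """Add random XORs until the residual count drops to at most thresh;
    the estimate is residual_count * 2^(number of XORs)."""
    for m in range(n + 1):
        # Each XOR picks every variable with probability 1/2 and a random parity.
        xors = [([v for v in range(n) if random.random() < 0.5],
                 random.randint(0, 1)) for _ in range(m)]
        c = bounded_count(formula, n, xors, thresh)
        if 0 < c <= thresh:
            return c * (2 ** m)
    return 0

# Toy formula over 6 variables: "at least one variable is true" (63 models).
print(approx_count(lambda a: any(a), 6))
```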
[c62]
Program Synthesis as Dependency Quantified Formula Modulo Theory
Priyanka Golia, Subhajit Roy, and Kuldeep S. Meel
Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2021.
Given a specification G(X, Y) over inputs X and outputs Y, defined over a background theory T, the problem of program synthesis is to design a program f such that Y = f(X) satisfies the specification G. Over the past decade, syntax-guided synthesis (SyGuS) has emerged as a dominant approach to program synthesis, wherein, in addition to the specification G, the end-user also specifies a grammar L to aid the underlying synthesis engine. This paper investigates the feasibility of synthesis techniques without grammar, a sub-class defined as T-constrained synthesis. We show that T-constrained synthesis can be reduced to DQF(T), i.e., to the problem of finding a witness of a dependency quantified formula modulo theory. When the underlying theory is bitvectors, the corresponding DQF problem can be further reduced to Dependency Quantified Boolean Formulas (DQBF). We rely on the progress in DQBF solving to design DQBF-based synthesizers that outperform domain-specific program synthesis techniques, thereby positioning DQBF as a core representation language for program synthesis. Our empirical analysis shows that T-constrained synthesis can achieve significantly better scalability than syntax-guided approaches. Furthermore, general-purpose DQBF solvers perform on par with domain-specific synthesis techniques.
[c61]
Partition Function Estimation: A Quantitative Study
Durgesh Agrawal, Yash Pote, and Kuldeep S. Meel
Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2021.
Probabilistic graphical models have emerged as a powerful modeling tool for several real-world scenarios where one needs to reason under uncertainty. A graphical model's partition function is a central quantity of interest, and its computation is key to several probabilistic reasoning tasks. Given the #P-hardness of computing the partition function, several techniques have been proposed over the years with varying guarantees on the quality of estimates and their runtime behavior. This paper presents a survey of 17 techniques and a rigorous empirical study of their behavior across an extensive set of benchmarks. Our empirical study yields a surprising observation: exact techniques are as efficient as the approximate ones, and therefore, we conclude with an optimistic view of opportunities for the design of approximate techniques with enhanced scalability. Motivated by the observed order-of-magnitude difference between the Virtual Best Solver and the best performing tool, we envision an exciting line of research focused on the development of portfolio solvers.
[c60]
Counting Minimal Unsatisfiable Subsets
Jaroslav Bendik and Kuldeep S. Meel
Proceedings of International Conference on Computer-Aided Verification (CAV), 2021.
Given an unsatisfiable Boolean formula F in CNF, an unsatisfiable subset of clauses U of F is called Minimal Unsatisfiable Subset (MUS) if every proper subset of U is satisfiable. Since MUSes serve as explanations for the unsatisfiability of F, MUSes find applications in a wide variety of domains. The availability of efficient SAT solvers has aided the development of scalable techniques for finding and enumerating MUSes in the past two decades. Building on the recent developments in the design of scalable model counting techniques for SAT, Bendik and Meel initiated the study of MUS counting techniques. They succeeded in designing the first approximate MUS counter, AMUSIC, that does not rely on exhaustive MUS enumeration. AMUSIC, however, suffers from two shortcomings: the lack of exact estimates and limited scalability due to its reliance on 3-QBF solvers. In this work, we address the two shortcomings of AMUSIC by designing the first exact MUS counter, CountMUST, that does not rely on exhaustive enumeration. CountMUST circumvents the need for 3-QBF solvers by reducing the problem of MUS counting to projected model counting. While projected model counting is #NP-hard, the past few years have witnessed the development of scalable projected model counters. An extensive empirical evaluation demonstrates that CountMUST successfully returns MUS count for 1500 instances while AMUSIC and enumeration-based techniques could only handle up to 833 instances.
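For intuition, a tiny worked example of our own (not from the paper): the formula $F = (x_1) \wedge (\neg x_1) \wedge (x_2) \wedge (\neg x_2)$ has exactly two MUSes, $\{(x_1), (\neg x_1)\}$ and $\{(x_2), (\neg x_2)\}$; a subset such as $\{(x_1), (\neg x_1), (x_2)\}$ is unsatisfiable but not minimal, since dropping $(x_2)$ preserves unsatisfiability. The MUS count of $F$ is therefore 2.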
[c59]
Leveraging GPUs for Effective Clause Sharing in Parallel SAT Solving
Nicolas Prevot, Mate Soos, and Kuldeep S. Meel
Proceedings of the International Conference on Theory and Applications of Satisfiability Testing (SAT), 2021.
The past two decades have witnessed an unprecedented improvement in the runtime performance of SAT solvers, owing to clever software engineering and creative design of data structures. Yet, most entries in the annual SAT competition retain the core architecture of MiniSat, which was designed from the perspective of single-core CPU architectures. On the other hand, since 2005, there has been a significant shift to heterogeneous architectures owing to the impending end of Dennard scaling. Consequently, it is no coincidence that recent breakthroughs in computer science have significantly utilized the opportunities offered by such heterogeneous architectures. The primary contribution of this work is a novel multi-threaded CDCL-based framework, called GpuShareSat, designed to take advantage of CPU+GPU architectures. The core underlying principle of our approach is to divide the tasks among the CPU and GPU so as to achieve the best of both worlds. We observe that efficient bit-vector-based operations allow a GPU to determine the usefulness of a learnt clause to different threads and accordingly notify each thread of the presence of relevant clauses from other threads. Checking all clauses against all assignments from different threads allows the GPU to exploit its potential for massive parallelism through a clever group-testing strategy and bitwise operations. Our setup efficiently distributes the work between the CPU and the GPU, each performing the task it is best at, to further the speed of parallel SAT solving. To demonstrate the practical efficiency of our framework, we augment the state-of-the-art multi-threaded solver glucose-syrup with GpuShareSat and perform a detailed analysis on benchmarks from the SAT 2020 competition. Our empirical analysis demonstrates that glucose-syrup augmented with GpuShareSat solves 22 more instances than glucose-syrup alone.
[c58]
On the Usefulness of Linear Modular Arithmetic in Constraint Programming
Gilles Pesant, Kuldeep S. Meel, and Mahshid Mohammadalitajrishi
Proceedings of International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR), 2021.
Linear modular constraints are a powerful class of constraints that arise naturally in cryptanalysis, checksums, hash functions, and the like. Given their importance, the past few years have witnessed the design of combinatorial solvers with native support for linear modular constraints, and the availability of such solvers has led to the emergence of new applications. While there exist global constraints in CP that consider congruence classes over domain values, linear modular arithmetic constraints have yet to appear in the global constraint catalogue despite their past investigation in the context of model counting for CSPs. In this work we seek to remedy the situation by advocating the integration of linear modular constraints in state-of-the-art CP solvers. Contrary to previous belief, we conclude from an empirical investigation that Gauss-Jordan Elimination based techniques can provide an efficient and scalable way to handle linear modular constraints. On the theoretical side, we remark on the pairwise independence offered by hash functions based on linear modular constraints, and then discuss the design of hashing-based model counters for CP, supported by empirical results showing the accuracy and computational savings that can be achieved. We further demonstrate the usefulness of native support for linear modular constraints with applications to checksums and model counting.
[c57]
Estimating the Size of Unions of Sets in Streaming Models
Kuldeep S. Meel ⓡ N.V. Vinodchandran ⓡ Sourav Chakraborty
Proceedings of ACM Symposium on Principles of Database Systems (PODS), 2021.
In this paper we study the problem of estimating the size of the union of sets $S_1, \dots, S_M$, where each set $S_i \subseteq \Omega$ (for some discrete universe $\Omega$) is implicitly presented and arrives in a streaming fashion. We define the notion of Delphic sets to capture the class of streaming problems where membership, sampling, and counting calls are efficient. In particular, we show that our notion of Delphic sets captures three well-known problems: Klee's measure, test coverage estimation, and model counting of DNF formulas. The Klee's measure problem corresponds to computing the volume of multi-dimensional axis-aligned rectangles, i.e., every $d$-dimensional axis-aligned rectangle can be written as $[a_1,b_1] \times [a_2,b_2] \times \ldots \times [a_d, b_d]$. The problem of test coverage estimation focuses on the computation of a coverage measure for a given testing array in the context of combinatorial testing, a fundamental technique in hardware and software testing. Finally, given a DNF formula $\varphi = T_1 \vee T_2 \vee \ldots \vee T_m$, the problem of model counting seeks to compute the number of satisfying assignments of $\varphi$. The primary contribution of our work is a simple, elegant, and efficient sampling-based algorithm for estimation of the union in the streaming setting. Our algorithm has worst-case space complexity and update time of $O\left(\log |\Omega| \cdot \frac{\log M + \log(1/\delta)}{\varepsilon^{2}}\right)$. Consequently, our algorithm is the first with linear dependence on $d$ for the Klee's measure problem in the streaming setting for $d > 1$, thereby settling the open problem of Woodruff and Tirthapura (PODS-12). Furthermore, a straightforward application of our algorithm yields an efficient algorithm for the coverage estimation problem in the streaming setting. We then investigate whether the space complexity for coverage estimation can be further improved, and in this context, we present another streaming algorithm that uses near-optimal $O(t\log n/\varepsilon^2)$ space but an update algorithm that is in $P^{NP}$, thereby showcasing an interesting time vs. space trade-off in the streaming setting. Finally, we demonstrate the generality of our Delphic sets by obtaining a streaming algorithm for model counting of DNF formulas. It is worth remarking that we view the simplicity of both the algorithm and its theoretical analysis as a key strength of our work, making it amenable to practical implementation and easy adoption.
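The Python sketch below conveys the flavor of the sampling-based estimator. It is our simplification: each $S_i$ arrives as an explicit set and is enumerated directly, whereas the Delphic-set model replaces that inner loop with membership, sampling, and counting oracle calls, and the threshold below is illustrative rather than the paper's exact setting.

```python
import random

def estimate_union(stream, threshold=1000):
    """Estimate |S_1 u ... u S_M| from a stream of sets (simplified sketch)."""
    p = 1.0          # current sampling rate
    sketch = set()   # sample of the union seen so far, kept at rate p
    for S in stream:
        sketch -= S                      # re-decide membership for elements of S
        for x in S:                      # (oracle calls in the Delphic model)
            if random.random() < p:
                sketch.add(x)
        while len(sketch) > threshold:   # overflow: halve the rate, subsample
            p /= 2
            sketch = {x for x in sketch if random.random() < 0.5}
    return len(sketch) / p

sets = [set(range(i, i + 500)) for i in range(0, 5000, 250)]
print(len(set().union(*sets)), round(estimate_union(sets)))
```

The invariant is that after each set is processed, every element of the union seen so far is present in the sketch with probability p, so |sketch|/p estimates the union size while the memory footprint stays bounded by the threshold.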
[c56]
Model Counting meets F_0 Estimation
A. Pavan ⓡ N.V. Vinodchandran ⓡ Arnab Bhattacharyya ⓡ Kuldeep S. Meel
Proceedings of ACM Symposium on Principles of Database Systems (PODS), 2021.
Constraint satisfaction problems (CSPs) and data stream models are two powerful abstractions that capture a wide variety of problems arising in different domains of computer science. Developments in the two communities have mostly occurred independently and with little interaction between them. In this work, we seek to investigate whether bridging the seeming communication gap between the two communities may pave the way to richer fundamental insights. To this end, we focus on two foundational problems: model counting for CSPs and computation of zeroth frequency moments ($F_0$) for data streams. Our investigations lead us to observe a striking similarity in the core techniques employed in the algorithmic frameworks that have evolved separately for model counting and $F_0$ computation. We design a recipe for translating algorithms developed for $F_0$ estimation into algorithms for model counting, resulting in new model counting algorithms. We then observe that algorithms in the context of distributed streaming can be transformed into distributed algorithms for model counting. We next turn our attention to viewing streaming from the lens of counting and show that framing $F_0$ estimation as a special case of #DNF counting allows us to obtain a general recipe for a rich class of streaming problems, which had been subjected to case-specific analysis in prior works. In particular, our view yields a state-of-the-art algorithm for multidimensional range-efficient $F_0$ estimation with a simpler analysis.
[c55]
Scalable Quantitative Verification For Deep Neural Networks
Teodora Baluta, Zheng Leong Chua, Kuldeep S. Meel, and Prateek Saxena
Proceedings of International Conference on Software Engineering (ICSE), 2021.
Verifying security properties of deep neural networks (DNNs) is becoming increasingly important. This paper introduces a new quantitative verification framework for DNNs that can decide, with user-specified confidence, whether a given logical property $\psi$ defined over the space of inputs of the given DNN holds for less than a user-specified threshold $\theta$. We present new algorithms that are scalable to large real-world models as well as proven to be sound. Our approach requires only black-box access to the models. Further, it certifies properties of both deterministic and non-deterministic DNNs. We implement our approach in a tool called PROVERO. We apply PROVERO to the problem of certifying adversarial robustness. In this context, PROVERO provides an attack-agnostic measure of robustness for a given DNN and a test input. First, we find that this metric has a strong statistical correlation with perturbation bounds reported by two of the most prominent white-box attack strategies today. Second, we show that PROVERO can quantitatively certify robustness with high confidence in cases where the state-of-the-art qualitative verification tool (ERAN) fails to produce conclusive results. Thus, quantitative verification scales easily to large DNNs.
[c54]
The Power of Literal Equivalence in Model Counting
Yong Lai, Kuldeep S. Meel, and Roland Yap
Proceedings of AAAI Conference on Artificial Intelligence (AAAI), 2021.
The past two decades have seen significant improvements in the scalability of practical model counters, which have been quite influential in many applications from artificial intelligence to formal verification. While most exact counters fall into two categories, search-based and compilation-based, Huang and Darwiche's remarkable observation ties these two categories together: the trace of a search-based exact model counter corresponds to a Decision-DNNF formula. Taking advantage of literal equivalences, this paper designs an efficient model counting technique whose trace is a generalization of Decision-DNNF formulas. We first propose a generalization of Decision-DNNF, called CCDD, to capture literal equivalences, then show that CCDD supports model counting in linear time, and finally design a model counter, called ExactMC, whose trace corresponds to CCDD. We perform an extensive experimental evaluation over a comprehensive set of benchmarks and conduct a performance comparison of ExactMC vis-a-vis the state-of-the-art counters c2d, miniC2D, D4, ADDMC, and Ganak. Our empirical evaluation demonstrates that ExactMC can solve 885 instances while the prior state of the art could solve only 843, a significant improvement of 42 instances.
[c53]
Symmetric Component Caching for Model Counting on Combinatorial Instances
Timothy van Bremen, Vincent Derkinderen, Shubham Sharma, Subhajit Roy, and Kuldeep S. Meel
Proceedings of AAAI Conference on Artificial Intelligence (AAAI), 2021.
Given a propositional formula $\psi$, the model counting problem, also referred to as #SAT, seeks to compute the number of satisfying assignments (or models) of $\psi$. Modern search-based model counting algorithms are built on conflict-driven clause learning, combined with the caching of certain subformulas (called components) encountered during the search process. Despite significant progress in these algorithms over the years, state-of-the-art model counters often struggle to handle large but structured instances that typically arise in combinatorial settings. Motivated by the observation that these counters do not exploit the inherent symmetries exhibited in such instances, we revisit the component caching architecture employed in current counters and introduce a novel caching scheme that focuses on identifying symmetric components. We first prove the soundness of our approach, and then integrate it into the state-of-the-art model counter GANAK. Our extensive experiments on hard combinatorial instances demonstrate that the resulting counter, SymGANAK, leads to improvements over GANAK both in terms of PAR-2 score and the number of instances solved.
[c52]
Counting Maximal Satisfiable Subsets
Jaroslav Bendik and Kuldeep S. Meel
Proceedings of AAAI Conference on Artificial Intelligence (AAAI), 2021.
Given an unsatisfiable set of constraints F, a maximal satisfiable subset (MSS) is a maximal subset of constraints C such that C is satisfiable. Over the past two decades, the steady improvement in the runtime performance of algorithms for finding an MSS has led to increased adoption of MSS-based techniques in a wide variety of domains. Motivated by this progress, the past decade has witnessed a surge of interest in the design of algorithmic techniques to enumerate all MSSes, which has subsequently led to the discovery of new applications utilizing MSS enumeration. The development of techniques for finding and enumerating MSSes mirrors a similar phenomenon of finding and enumerating SAT solutions in the early 2000s, which subsequently motivated the design of algorithmic techniques for model counting. In a similar spirit, we undertake a study of the feasibility of MSS counting techniques. In particular, the focus of our investigation is to answer whether one can design efficient MSS counting techniques that do not rely on explicit MSS enumeration. The primary contribution of this work is an affirmative answer to the above question. Our tool, CountMSS, uses a novel architecture of a wrapper W and a remainder R such that the desired MSS count can be expressed as |W| − |R|. CountMSS relies on advances in projected model counting to efficiently compute |W| and |R|. Our empirical evaluation demonstrates that CountMSS is able to scale to instances clearly beyond the reach of enumeration-based techniques.
[c51]
Predicting Forest Fire Using Remote Sensing Data And Machine Learning
Suwei Yang, Massimo Lupascu, and Kuldeep S. Meel
Proceedings of AAAI Conference on Artificial Intelligence (AAAI), 2021.
Over the last few decades, deforestation and climate change have caused an increasing number of forest fires. In Southeast Asia, Indonesia has been the country most affected by tropical peatland forest fires. These fires have a significant impact on the climate, resulting in extensive health, social, and economic issues. Existing forest fire prediction systems, such as the Canadian Forest Fire Danger Rating System, are based on handcrafted features and require the installation and maintenance of expensive instruments on the ground, which can be a challenge for developing countries such as Indonesia. We propose a novel, cost-effective, machine-learning-based approach that uses remote sensing data to predict forest fires in Indonesia. Our prediction model achieves more than 0.81 area under the receiver operating characteristic (ROC) curve, performing significantly better than the baseline approach, which never exceeds 0.70 area under the ROC curve on the same tasks. Our model's performance remained above 0.81 area under the ROC curve even when evaluated with reduced data. The results support our claim that machine-learning-based approaches can lead to reliable and cost-effective forest fire prediction systems.
[c50]
Justicia: A Stochastic SAT Approach to Formally Verify Fairness
Bishwamittra Ghosh, Debabrota Basu, and Kuldeep S. Meel
Proceedings of AAAI Conference on Artificial Intelligence (AAAI), 2021.
As a technology, ML is oblivious to societal good or bad, and thus the field of fair machine learning has stepped up to propose multiple mathematical definitions, algorithms, and systems to ensure different notions of fairness in ML applications. Given the multitude of propositions, it has become imperative to formally verify the fairness metrics satisfied by different algorithms on different datasets. In this paper, we propose a stochastic satisfiability (SSAT) framework, Justicia, that formally verifies different fairness measures of supervised learning algorithms with respect to the underlying data distribution. We instantiate Justicia on multiple classification and bias-mitigation algorithms and datasets to verify different fairness metrics, such as disparate impact, statistical parity, and equalized odds. Justicia is scalable and accurate, and operates on non-Boolean and compound sensitive attributes, unlike existing distribution-based verifiers such as FairSquare and VeriFair. Being distribution-based by design, Justicia is more robust than verifiers, such as AIF360, that operate on specific test samples. We also theoretically bound the finite-sample error of the verified fairness measure.
2020
[c49]
Efficient Distance Approximation for Structured High-Dimensional Distributions via Learning
Arnab Bhattacharyya, Sutanu Gayen, Kuldeep S. Meel, and N. V. Vinodchandran
Proceedings of Advances in Neural Information Processing Systems (NeurIPS), 2020.
We design efficient distance approximation algorithms for several classes of well-studied structured high-dimensional distributions. Specifically, we present algorithms for the following problems (where $d_{TV}$ is the total variation distance):
- Given sample access to two Bayesian networks P1 and P2 over known directed acyclic graphs G1 and G2 having n nodes and bounded in-degree, approximate $d_{TV}$(P1, P2) to within additive error $\varepsilon$ using poly(n, 1/$\varepsilon$) samples and time.
- Given sample access to two ferromagnetic Ising models P1 and P2 on n variables with bounded width, approximate $d_{TV}$(P1, P2) to within additive error $\varepsilon$ using poly(n, 1/$\varepsilon$) samples and time.
- Given sample access to two n-dimensional Gaussians P1 and P2, approximate $d_{TV}$(P1, P2) to within additive error $\varepsilon$ using poly(n, 1/$\varepsilon$) samples and time.
- Given access to observations from two causal models P and Q on n variables defined over known causal graphs, approximate $d_{TV}$(Pa, Qa) to within additive error $\varepsilon$ using poly(n, 1/$\varepsilon$) samples and time, where Pa and Qa are the interventional distributions obtained by the intervention do(A = a) on P and Q respectively for a particular variable A.
The distance approximation algorithms immediately imply new tolerant closeness testers for the corresponding classes of distributions. Prior to our work, only non-tolerant testers were known for both Bayes net distributions and Ising models, and no testers with quantitative guarantees were known for interventional distributions. To the best of our knowledge, efficient distance approximation algorithms for Gaussian distributions were not present in the literature. Our algorithms are designed using a conceptually simple but general framework that is applicable to a variety of scenarios.
[c48]
Taming Discrete Integration via the Boon of Dimensionality
Jeffrey M. Dudek, Dror Fried, and Kuldeep S. Meel
Proceedings of Advances in Neural Information Processing Systems (NeurIPS), 2020.
Discrete integration is a fundamental problem in computer science that concerns the computation of discrete sums over exponentially large sets. Despite intense interest from researchers for over three decades, the design of scalable techniques for computing estimates with rigorous guarantees for discrete integration remains the holy grail. The key contribution of this work addresses this scalability challenge via an efficient reduction of discrete integration to model counting. The proposed reduction is achieved via a significant increase in dimensionality that, contrary to conventional wisdom, leads to an instance of the relatively simpler problem of model counting. Building on the promising approach proposed by Chakraborty et al., our work overcomes the key weakness of their approach: a restriction to dyadic weights. We augment our proposed reduction, called DeWeight, with a state-of-the-art efficient approximate model counter and perform a detailed empirical analysis over benchmarks arising from neural network verification domains, an emerging application area of critical importance. DeWeight, to the best of our knowledge, is the first technique to compute estimates with provable guarantees for this class of benchmarks.
[c47]
On Testing of Samplers
Kuldeep S. Meel ⓡ Yash Pote ⓡ Sourav Chakraborty
Proceedings of Advances in Neural Information Processing Systems (NeurIPS), 2020.
Given a set of items F and a weight function W, the problem of sampling seeks to sample an item with probability proportional to its weight. Sampling is a fundamental problem in machine learning. The daunting computational complexity of sampling with formal guarantees leads designers to propose heuristic techniques for which no rigorous theoretical analysis exists to quantify the quality of the generated distributions. This poses a challenge in designing a testing methodology to check whether a sampler under test generates samples according to a given distribution. Only recently, Chakraborty and Meel (2019) designed the first scalable verifier, called Barbarik1, for samplers in the special case when the weight function W is constant, that is, when the sampler is supposed to sample uniformly from F. The techniques in Barbarik1, however, fail to handle general weight functions. The primary contribution of this paper is an affirmative answer to the above challenge: motivated by Barbarik1 but using different techniques and analysis, we design Barbarik2, an algorithm to test whether the distribution generated by a sampler is $\varepsilon$-close to or $\eta$-far from any target distribution. In contrast to black-box sampling techniques that require a number of samples proportional to $|F|$, Barbarik2 requires only $\mathcal{O}(tilt(W,\varphi)^2/(\eta(\eta-6\varepsilon)^3))$ samples, where the tilt is the maximum ratio of weights of two satisfying assignments. Barbarik2 can handle any arbitrary weight function. We present a prototype implementation of Barbarik2 and use it to test three state-of-the-art samplers.
[c46]
Baital: An Adaptive Weighted Sampling Approach for Improved t-wise Coverage
Eduard Baranov, Axel Legay, and Kuldeep S. Meel
Proceedings of ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE), 2020.
The rise of highly configurable complex software and its widespread usage requires the design of efficient testing methodologies. t-wise coverage is a leading metric to measure the quality of a testing suite and the underlying test generation engine. While uniform-sampling-based test generation is widely believed to be the state-of-the-art approach to achieve t-wise coverage in the presence of constraints on the set of configurations, uniform sampling fails to achieve high t-wise coverage in the presence of complex constraints. In this work, we propose a novel approach, Baital, based on adaptive weighted sampling using literal-weighted functions, to generate test sets with high t-wise coverage. We demonstrate that our approach leads to significantly higher t-wise coverage. The novel usage of literal-weighted sampling leaves open several interesting directions, empirical as well as theoretical, for future research.
[c45]
Phase Transition Behavior in Knowledge Compilation
Rahul Gupta, Subhajit Roy, and Kuldeep S. Meel
Proceedings of International Conference on Constraint Programming (CP), 2020.
The success of SAT solvers has led to the development of algorithmic paradigms for problems beyond NP, such as knowledge compilation, which seeks to compile propositional theories into succinct and tractable representations. The study of phase transition behaviour in SAT has been the subject of intense theoretical and empirical investigation, and has led to a deeper understanding and algorithmic improvements of modern SAT solvers. Motivated by prior studies of phase transitions in SAT, we seek to study the size and compile-time behaviour for random k-CNF formulas. The primary contribution of this paper is a rigorous empirical study and analysis of the size and runtime behavior of different knowledge compilation forms (and their corresponding compilation algorithms): d-DNNFs, SDDs, and OBDDs. We employ instances generated from the random k-CNF model with varying generation parameters to empirically reason about the expected and median behavior of size and compilation time for these languages. A significant novel contribution of our work is the identification of a new control parameter, called solution density, defined as the ratio of the log of the number of satisfying assignments to the number of variables.
[c44]
Sparse Hashing for Scalable Approximate Model Counting: Theory and Practice
Kuldeep S. Meel ⓡ S. Akshay
Proceedings of Logic in Computer Science (LICS), 2020.
Given a CNF formula F on n variables, the problem of model counting, also referred to as #SAT, is to compute the number of models or satisfying assignments of F. Model counting is a fundamental but hard problem in computer science with varied applications. Recent years have witnessed a surge of effort towards developing efficient algorithmic techniques that combine classical 2-universal hashing (from [Stockmeyer 1983]) with the remarkable progress in SAT solving over the past decade. These techniques augment the CNF formula F with random XOR constraints and invoke an NP oracle repeatedly on the resultant CNF-XOR formulas. In practice, the NP oracle calls are replaced by calls to a SAT solver, and it is observed that the runtime performance of modern SAT solvers (based on conflict-driven clause learning) on CNF-XOR formulas is adversely affected by the size of the XOR constraints. However, the standard construction of 2-universal hash functions chooses every variable with probability p = 1/2, leading to XOR constraints of size n/2 in expectation. Consequently, the main challenge is to design sparse hash functions, where variables are chosen with smaller probability and yield smaller XOR constraints, which can then replace 2-universal hash functions. In this paper, our goal is to address this challenge from both a theoretical and a practical perspective. First, we formalize a relaxation of universal hashing, called concentrated hashing, a notion implicit in prior works on the design of sparse hash functions. We then establish a novel connection between concentration measures of these hash functions and isoperimetric inequalities on Boolean hypercubes. This allows us to obtain tight bounds on the variance as well as the dispersion index, and to show that p = O(log m / m) suffices for the design of sparse hash functions from $2^n$ to $2^m$ belonging to the concentrated hash family. Finally, we use sparse hash functions belonging to this concentrated hash family to develop new approximate counting algorithms. A comprehensive experimental evaluation of our algorithm on 1896 benchmarks with a computational effort of over 20,000 hours demonstrates significant speedup compared to existing approaches. To the best of our knowledge, this work is the first to demonstrate a runtime improvement of approximate model counting algorithms through the usage of sparse hash functions, while still retaining strong theoretical guarantees (a la 2-universal hash functions).
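For a rough sense of the savings (illustrative numbers ours, ignoring the hidden constant in the bound): with $n = 10^4$ variables and $m = 100$ XOR constraints, dense 2-universal XORs contain $n/2 = 5000$ variables each in expectation, whereas sparse XORs with $p = O(\log m / m)$ contain on the order of $n \log m / m \approx 10^4 \cdot 4.6 / 100 \approx 460$ variables each, an order-of-magnitude reduction in XOR size.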
[c43]
Designing New Phase Selection Heuristics
Arijit Shaw and Kuldeep S. Meel
Proceedings of the International Conference on Theory and Applications of Satisfiability Testing (SAT), 2020.
CDCL-based SAT solvers have transformed the field of automated reasoning owing to their demonstrated efficiency at handling problems arising from diverse domains. The success of CDCL solvers is owed to the design of clever heuristics that enable the tight coupling of different components. One of the core components is phase selection, wherein the solver, during branching, decides the polarity of the branch to be explored for a given variable. Most state-of-the-art CDCL SAT solvers employ phase saving as a phase selection heuristic, which was proposed to address the potential inefficiencies arising from far-backtracking. In light of the emergence of chronological backtracking in CDCL solvers, we re-examine the efficiency of phase saving. Our empirical evaluation leads to a surprising conclusion: the usage of phase saving and random selection of polarity during chronological backtracking leads to indistinguishable runtime performance in terms of instances solved and PAR-2 score. We introduce the Decaying Polarity Score (DPS) to capture the trend of the polarities attained by a variable, and upon observing a lack of performance improvement due to DPS, we turn to a more sophisticated heuristic seeking to capture the activity of literals and the trend of polarities: Literal State Independent Decaying Sum (LSIDS). We find that the 2019 winning SAT solver, Maple_LCM_Dist_ChronoBTv3, augmented with LSIDS solves 6 more instances while achieving a reduction of over 125 seconds in PAR-2 score, a significant improvement in the context of the SAT competition.
[c42]
On the Sparsity of XORs in Approximate Model Counting
Durgesh Agrawal, Bhavishya, and Kuldeep S. Meel
Proceedings of the International Conference on Theory and Applications of Satisfiability Testing (SAT), 2020.
Given a Boolean formula F, the problem of model counting, also referred to as #SAT, is to compute the number of solutions of F. Hashing-based techniques for approximate counting have emerged as a dominant approach, promising both scalability and rigorous theoretical guarantees. The standard construction of strongly 2-universal hash functions employs dense XORs (i.e., involving half of the variables in expectation), which is widely known to degrade the runtime performance of state-of-the-art SAT solvers. Consequently, the past few years have witnessed intense activity in the design of sparse XORs as hash functions, with constructions proposed in the belief that they provide runtime performance improvements along with theoretical guarantees similar to those of dense XORs. The primary contribution of this paper is a rigorous theoretical and empirical analysis of the effect of the sparsity of XORs. Contrary to the prior belief that the analysis of sparse hash functions applies to all hashing-based techniques, we prove a negative result: the best-known bounds obtained for sparse XORs are still too weak to yield theoretical guarantees for a large class of hashing-based techniques, including the state-of-the-art approach ApproxMC3. We then turn to a rigorous empirical analysis of the performance benefits of sparse hash functions. To this end, we first design, to the best of our knowledge, the most efficient algorithm using sparse hash functions, called SparseCount2, which achieves up to two orders of magnitude performance improvement over its predecessor. Contradicting current beliefs, we observe that SparseCount2 still falls short of ApproxMC3 in runtime performance despite the usage of dense XORs in ApproxMC3. In conclusion, our work showcases that the question of whether short XORs can achieve scalability while providing strong theoretical guarantees is still wide open.
[c41]
Tinted, Detached, and Lazy CNF-XOR solving and its Applications to Counting and Sampling
Mate Soos, Stephan Gocht, and Kuldeep S. Meel
Proceedings of International Conference on Computer-Aided Verification (CAV), 2020.
Given a Boolean formula F, the problem of counting seeks to estimate the number of solutions of F, while the problem of uniform sampling seeks to sample solutions uniformly at random. Counting and uniform sampling are fundamental problems in computer science with a wide range of applications, ranging from constrained random simulation and probabilistic inference to network reliability and beyond. Despite intense theoretical and empirical investigations, the development of scalable techniques for sampling and counting without sacrificing theoretical guarantees remains the holy grail. The past few years have witnessed the rise of hashing-based approaches that use XOR-based hashing and employ SAT solvers to solve the resulting CNF formulas conjuncted with XOR constraints. Since over 99% of the runtime of hashing-based techniques is spent inside the SAT queries, improving CNF-XOR solvers has emerged as a key challenge. In this paper, we identify the key performance bottlenecks in the recently proposed BIRD architecture, and we focus on overcoming them by accelerating the XOR handling within the SAT solver and improving the solver integration through smarter use of (partial) solutions. We integrate the resulting architecture, BIRD2, with the state-of-the-art approximate model counter ApproxMC3 and the state-of-the-art almost-uniform model sampler UniGen2. Through an extensive evaluation over a large benchmark set of over 1896 instances, we observe that BIRD2 leads to consistent speedups for both counting and sampling; in particular, we solve 77 and 51 more instances for counting and sampling, respectively.
[c40]
Manthan: A Data-Driven Approach for Boolean Function Synthesis
Priyanka Golia, Subhajit Roy, and Kuldeep S. Meel
Proceedings of International Conference on Computer-Aided Verification (CAV), 2020.
Boolean functional synthesis is a fundamental problem in computer science with wide-ranging applications and has witnessed a surge of interest resulting in progressively improved techniques over the past decade. Despite intense algorithmic development, a large number of problems remain beyond the reach of the current state-of-the-art techniques. Motivated by the progress in machine learning, we propose Manthan, a novel data-driven approach to Boolean functional synthesis. Manthan views functional synthesis as a classification problem, relying on advances in constrained sampling for data generation, and on advances in automated reasoning for a novel proof-guided refinement and provable verification. On an extensive and rigorous evaluation over 609 benchmarks, we demonstrate that Manthan significantly improves upon the current state of the art, solving 356 benchmarks in comparison to 280, the most solved by any state-of-the-art technique; thereby, we demonstrate an increase of 76 benchmarks over the current state of the art. Furthermore, Manthan solves 60 benchmarks that none of the current state-of-the-art techniques could solve. The significant performance improvements, along with our detailed analysis, highlight several interesting avenues for future work at the intersection of machine learning, constrained sampling, and automated reasoning.
[c39]
Approximate Counting of Minimal Unsatisfiable Subsets
Jaroslav Bendik and Kuldeep S. Meel
Proceedings of International Conference on Computer-Aided Verification (CAV), 2020.
Given an unsatisfiable formula F in CNF, i.e., a set of clauses, the problem of Minimal Unsatisfiable Subset (MUS) seeks to identify a minimal subset of clauses $N \subseteq F$ such that N is unsatisfiable. The emerging viewpoint of MUSes as the root causes of unsatisfiability has led MUSes to find applications in a wide variety of diagnostic approaches. Recent advances in finding and enumerating MUSes have motivated researchers to discover applications that can benefit from rich information about the set of MUSes. One such extension is counting the number of MUSes, which has been shown to describe inconsistency metrics for general propositional knowledge bases. The current best approach for MUS counting is to employ a MUS enumeration algorithm, which often does not scale to cases with a reasonably large number of MUSes. Motivated by the success of hashing-based techniques in the context of model counting, we design the first approximate MUS counting procedure with $(\varepsilon,\delta)$ guarantees, called AMUSIC. Our approach avoids exhaustive MUS enumeration by combining the classical technique of universal hashing with advances in QBF solvers, along with a novel usage of unions and intersections of MUSes to achieve runtime efficiency. Our prototype implementation of AMUSIC is shown to scale to instances that were clearly beyond the reach of enumeration-based approaches.
[c38]
Classification Rules in Relaxed Logical Form
Bishwamittra Ghosh, Dmitry Malioutov, and Kuldeep S. Meel
Proceedings of European Conference on Artificial Intelligence(ECAI), 2020.
ML algorithms that produce rule-based predictions in Conjunctive Normal Form (CNF) or Disjunctive Normal Form (DNF) are arguably some of the most interpretable ones. Although CNF/DNF rules are considered interpretable in practice, propositional logic has other very interpretable representations which are more expressive. In this paper, we generalize CNF/DNF rules and introduce relaxed-CNF rules, motivated by the popular usage of checklists in the medical domain. We consider relaxed definitions of the standard OR/AND operators, which allow exceptions in the construction of a clause and also in the selection of clauses in a rule. While the combinatorial structure of relaxed-CNF rules offers exponential succinctness, naive learning techniques are computationally expensive. The primary contribution of this paper is a novel incremental mini-batch learning procedure, called CRR, that employs advances in combinatorial solvers and efficiently learns relaxed-CNF rules. Our experimental analysis demonstrates that CRR can generate relaxed-CNF rules which are more accurate than CNF rules and sparser than decision lists.
[c37]
Induction Models on N
A. Dileep, Kuldeep S. Meel, and Ammar F. Sabili
Proceedings of International Conference on Logic for Programming Artificial Intelligence and Reasoning (LPAR), 2020.
We generalize the definition of an Induction Model given by L. Henkin (1960). The main goal of the paper is to study reduction and equivalence between these Induction Models. We give a formal definition for these concepts and then prove a criterion which can be used to check when one Induction Model can be reduced to or is equivalent to another Induction Model. We also look at the base cases and generating functions which can give us an Induction Model. There are three cases which we look at depending on the structure of the generating functions (arbitrary, additive, multiplicative).
[c36]
A Study of Symmetry Breaking Predicates and Model Counting
Wenxi Wang, Muhammad Usman, Alyas Almaawi, Kaiyuan Wang, Kuldeep S. Meel, and Sarfraz Khurshid
Proceedings of Tools and Algorithms for the Construction and Analysis of Systems (TACAS), 2020.
Propositional model counting is a classic problem that has recently witnessed many technical advances and novel applications. While the basic model counting problem requires computing the number of all solutions to the given formula, in some important application scenarios, the desired count is not of all solutions, but instead, of all unique solutions up to isomorphism. In such a scenario, the user herself must try to either use the full count that the model counter returns to compute the count up to isomorphism, or ensure that the input formula to the model counter adequately captures the symmetry breaking predicates so it can directly report the count she desires. We study the use of CNF-level and domain-level symmetry breaking predicates in the context of the state-of-the-art in model counting, specifically the leading approximate model counter ApproxMC and the recently introduced exact model counter ProjMC. As benchmarks, we use a range of problems, including structurally complex specifications of software systems and constraint satisfaction problems. The results show that while it is sometimes feasible to compute the model counts up to isomorphism using the full counts that are computed by the model counters, doing so suffers from poor scalability. The addition of symmetry breaking predicates substantially assists model counters. Domain-specific predicates are particularly useful, and in many cases can provide full symmetry breaking to enable highly efficient model counting up to isomorphism. We hope our study motivates new research on designing model counters that directly account for symmetries to facilitate further applications of model counting.
[c35]
A MaxSAT-based Framework for Group Testing
Lorenzo Ciampiconi, Bishwamittra Ghosh, Jonathan Scarlett, and Kuldeep S. Meel
Proceedings of AAAI Conference on Artificial Intelligence (AAAI), 2020.
The success of MaxSAT (maximum satisfiability) solving in recent years has motivated researchers to apply MaxSAT solvers in diverse discrete combinatorial optimization problems. Group testing has been studied as a combinatorial optimization problem, where the goal is to find defective items among a set of items by performing sets of tests on items. In this paper, we propose a MaxSAT-based framework, called MGT, that solves group testing, in particular, the decoding phase of non-adaptive group testing. We extend this approach to the noisy variant of group testing, and propose a compact MaxSAT-based encoding that guarantees an optimal solution. Our extensive experimental results show that MGT can solve group testing instances of 10000 items with 3% defectivity, which no prior work can handle to the best of our knowledge. Furthermore, MGT has better accuracy than the LP-based approach. We also discover an interesting phase transition behavior in the runtime, which reveals the easy-hard-easy nature of group testing.
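As an illustration of the encoding idea (our noiseless simplification; MGT itself targets the noisy variant with a compact optimal encoding), group testing decoding can be phrased as a MaxSAT instance over one Boolean variable per item:

```python
# Sketch (ours, not MGT's encoding): decode non-adaptive group testing as
# MaxSAT. Variable i (1-based) means "item i is defective". Noiseless case:
# a negative test forces every item in it healthy (hard unit clauses); a
# positive test needs at least one defective item (hard clause); soft unit
# clauses prefer marking items healthy, yielding a minimal explanation.

def group_testing_wcnf(num_items, tests):
    """tests: list of (item_list, outcome), outcome True iff the test is positive.
    Returns (hard_clauses, soft_clauses) as DIMACS-style signed-integer lists."""
    hard = []
    for items, positive in tests:
        if positive:
            hard.append(list(items))            # at least one item is defective
        else:
            hard.extend([[-i] for i in items])  # every item in the test is healthy
    soft = [[-i] for i in range(1, num_items + 1)]  # minimize defective items
    return hard, soft

# Tiny example: items {1..4}, item 3 defective.
tests = [([1, 2], False), ([2, 3], True), ([3, 4], True)]
hard, soft = group_testing_wcnf(4, tests)
print(hard)  # [[-1], [-2], [2, 3], [3, 4]]
print(soft)  # [[-1], [-2], [-3], [-4]] -- feed both to any MaxSAT solver
```

Any off-the-shelf MaxSAT solver then returns the smallest set of defective items consistent with the test outcomes, here {3}.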
2019
[c34]
Embedding Symbolic Knowledge into Deep Networks
Yaqi Xie, Ziwei Xu, Mohan S. Kankanhalli, Kuldeep S. Meel, and Harold Soh
Proceedings of Advances in Neural Information Processing Systems (NeurIPS), 2019.
In this work, we aim to leverage prior symbolic knowledge to improve the performance of deep models. We propose a graph embedding network that projects propositional formulae (and assignments) onto a manifold via an augmented Graph Convolutional Network (GCN). To generate semantically faithful embeddings, we develop techniques to recognize node heterogeneity and a semantic regularization that incorporates structural constraints into the embedding. Experiments show that our approach improves the performance of models trained to perform entailment checking and visual relation prediction. Interestingly, we observe a connection between the tractability of the propositional theory representation and the ease of embedding. Future exploration of this connection may elucidate the relationship between knowledge compilation and vector representation learning.
[c33]
Quantitative Verification of Neural Networks And its Security Applications
Teodora Baluta, Shiqi Shen, Shweta Shinde, Kuldeep S. Meel, and Prateek Saxena
Proceedings of ACM Conference on Computer and Communications Security (CCS), 2019.
Neural networks are increasingly employed in safety-critical domains. This has prompted interest in verifying or certifying logically encoded properties of neural networks. Prior work has largely focused on checking existential properties, wherein the goal is to check whether there exists any input that violates a given property of interest. However, neural network training is a stochastic process, and many questions arising in their analysis require probabilistic and quantitative reasoning, i.e., estimating how many inputs satisfy a given property. To this end, our paper proposes a novel and principled framework for quantitative verification of logical properties specified over neural networks. Our framework is the first to provide PAC-style soundness guarantees, in that its quantitative estimates are within a controllable and bounded error from the true count. We instantiate our algorithmic framework by building a prototype tool called NPAQ that enables checking rich properties over binarized neural networks. We show how emerging security analyses can utilize our framework in three concrete applications: quantifying robustness to adversarial inputs, efficacy of trojan attacks, and fairness/bias of given neural networks.
[c32]
Dual Hashing-based Algorithms for Discrete Integration
Alexis de Colnet and Kuldeep S. Meel
Proceedings of International Conference on Constraint Programming (CP), 2019.
Given a Boolean formula F and a weight function ρ, the problem of discrete integration seeks to compute the weight of F, defined as the sum of the weights of its satisfying assignments. Discrete integration, also known as weighted model counting, is a fundamental problem in computer science with a wide variety of applications ranging from machine learning and statistics to physics and infrastructure reliability. Given the intractability of the exact variant, the problem of approximate weighted model counting has been subject to intense theoretical and practical investigation over the years. The primary contribution of this paper is to investigate the development of hashing-based algorithmic approaches for discrete integration. Our framework yields two algorithms that can be seen as dual to each other: WISH, which was already discovered by Ermon et al. [EGSS13c], and a new algorithm, SWITCH; their complexities differ only by a permutation of certain parameters. Indeed, we show that, for F defined over n variables, a weight function ρ that can be represented using p bits, and a confidence parameter δ, there is a function f and an NP oracle such that WISH makes O(f(n, p, δ)) calls to the oracle while SWITCH makes O(f(p, n, δ)) calls. The function f(x, y, δ) = x log(y) log(x/δ) is polynomial in x, y, and 1/δ. We first highlight the striking similarities in both the design process and the structure of the two algorithms, and then show that, despite this quasi-symmetry, the analysis yields time complexities dual to each other. Another contribution of this paper is the use of the 3-wise independence of XOR-based hash functions in the analysis of WISH and SWITCH; to the best of our knowledge, this is the first use of 3-wise independence to derive stronger concentration bounds.
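The XOR-based hash family at the heart of both algorithms is easy to sketch: a random binary matrix A and offset b define h(x) = Ax XOR b, partitioning {0,1}^n into 2^m cells. A minimal illustration (names and sizes are ours); each output coordinate is a random affine form over GF(2), which is what gives the family the independence properties the analysis exploits.

```python
# A minimal sketch of the random XOR-based hash family underlying
# hashing-based discrete integration: h(x) = Ax XOR b over GF(2).
import numpy as np

rng = np.random.default_rng(0)

def random_xor_hash(n, m):
    """Draw h: {0,1}^n -> {0,1}^m from the XOR family."""
    A = rng.integers(0, 2, size=(m, n))
    b = rng.integers(0, 2, size=m)
    return lambda x: (A @ x + b) % 2   # x is a 0/1 vector of length n

h = random_xor_hash(n=6, m=2)          # partitions {0,1}^6 into 4 cells
x = np.array([1, 0, 1, 1, 0, 0])
print(h(x))                            # the cell of x, e.g. [0 1]
```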
[c31]
Not All FPRASs are Equal: Demystifying FPRASs for DNF-Counting (Extended Abstract)
Kuldeep S. Meel, Aditya A. Shrotri, and Moshe Y. Vardi
Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2019.
Sister Conference Best Papers Track
The problem of counting the number of solutions of a DNF formula, also called #DNF, is a fundamental problem in artificial intelligence with applications in diverse domains ranging from network reliability to probabilistic databases. Owing to the intractability of the exact variant, efforts have focused on the design of approximate techniques for #DNF. Consequently, several Fully Polynomial Randomized Approximation Schemes (FPRASs) based on Monte Carlo techniques have been proposed. Recently, it was discovered that hashing-based techniques too lend themselves to FPRASs for #DNF. Despite significant improvements, the complexity of the hashing-based FPRAS is still worse than that of the best Monte Carlo FPRAS by polylog factors. Two questions were left unanswered in previous works: can the complexity of the hashing-based techniques be improved? How do the various approaches stack up against each other empirically? In this paper, we first propose a new search procedure for the hashing-based FPRAS that removes the polylog factors from its time complexity. We then present the first empirical study of the runtime behavior of different FPRASs for #DNF. The results of our study paint a nuanced picture: first, we observe that there is no single best algorithm that outperforms all others for all classes of formulas and input parameters; second, we observe that the algorithm with the worst time complexity solves the largest number of benchmarks.
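For intuition, here is a compact sketch of the classical Karp-Luby-style Monte Carlo estimator that the paper compares against; the cube representation and sample budget are illustrative choices of ours.

```python
# A compact sketch of the classical Karp-Luby Monte Carlo estimator for
# #DNF (one of the FPRASs compared in the paper).
import random

def count_dnf(cubes, n, samples=100_000):
    """cubes: list of dicts {var: bool}; estimate #models of their disjunction over n vars."""
    sizes = [2 ** (n - len(c)) for c in cubes]   # |solutions(C_i)| = 2^(n - width)
    U = sum(sizes)
    hits = 0
    for _ in range(samples):
        # Sample a (cube, assignment) pair uniformly from the disjoint union.
        i = random.choices(range(len(cubes)), weights=sizes)[0]
        sigma = {v: random.random() < 0.5 for v in range(1, n + 1)}
        sigma.update(cubes[i])                   # force the literals of cube i
        # Count sigma once: credit only its lowest-index satisfied cube.
        if min(j for j, c in enumerate(cubes)
               if all(sigma[v] == val for v, val in c.items())) == i:
            hits += 1
    return U * hits / samples

# (x1 AND x2) OR (NOT x1 AND x3) over 3 variables: exact count is 4.
print(count_dnf([{1: True, 2: True}, {1: False, 3: True}], n=3))
```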
[c30]
GANAK: A Scalable Probabilistic Exact Model Counter
Shubham Sharma, Subhajit Roy, Mate Soos, and Kuldeep S. Meel
Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2019.
Given a Boolean formula F, the problem of model counting, also referred to as #SAT, seeks to compute the number of solutions of F. Model counting is a fundamental problem with a wide variety of applications, ranging from planning and quantified information flow to probabilistic reasoning and the like. Modern #SAT solvers tend to be based on static decomposition, dynamic decomposition, or a hybrid of the two. Although dynamic decomposition-based #SAT solvers share much of their architecture with SAT solvers, their core design and heuristics have remained constant for over a decade. In this paper, we revisit the architecture of the state-of-the-art dynamic decomposition-based #SAT tool, sharpSAT, and demonstrate that introducing a new notion of probabilistic component caching, using universal hashing for exact model counting, and developing several new heuristics can lead to significant performance improvements over state-of-the-art model counters. In particular, we develop GANAK, a new scalable probabilistic exact model counter that outperforms the state-of-the-art exact and approximate model counters sharpSAT and ApproxMC3, respectively, both in terms of PAR-2 score and the number of instances solved. Furthermore, in our experiments, the model count returned by GANAK was equal to the exact model count for all the benchmarks. Finally, we observe that recently proposed preprocessing techniques for model counting benefit exact model counters while hurting the performance of approximate model counters.
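The decomposition identity that such counters exploit is simple to state: variable-disjoint components of a formula can be counted independently and the counts multiplied. Below is a brute-force sketch of just that identity, without the caching, hashing, and heuristics that make tools like sharpSAT and GANAK fast.

```python
# A sketch of the decomposition identity behind dynamic-decomposition
# counters such as sharpSAT and GANAK: variable-disjoint components of a
# formula can be counted independently and their counts multiplied.
# (Brute force per component; the real tools add caching, hashing, CDCL.)
from itertools import product

def components(clauses):
    comps, pending = [], list(clauses)
    while pending:
        comp = [pending.pop()]
        vs = {abs(l) for l in comp[0]}
        grew = True
        while grew:
            grew = False
            for cl in pending[:]:
                if vs & {abs(l) for l in cl}:       # shares a variable: same component
                    pending.remove(cl); comp.append(cl)
                    vs |= {abs(l) for l in cl}; grew = True
        comps.append((comp, sorted(vs)))
    return comps

def brute_count(clauses, vs):
    return sum(all(any((l > 0) == a[abs(l)] for l in cl) for cl in clauses)
               for a in ({v: b for v, b in zip(vs, bits)}
                         for bits in product([False, True], repeat=len(vs))))

def count(clauses, n_vars):
    total, used = 1, set()
    for comp, vs in components(clauses):
        total *= brute_count(comp, vs); used |= set(vs)
    return total * 2 ** (n_vars - len(used))        # variables in no clause are free

# (x1 or x2) and (x3 or x4): components counted separately, 3 * 3 = 9.
print(count([[1, 2], [3, 4]], n_vars=4))            # -> 9
```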
[c29]
Phase Transition Behavior of Cardinality and XOR Constraints
Yash Pote, Saurabh Joshi, and Kuldeep S. Meel
Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2019.
The runtime performance of modern SAT solvers is deeply connected to the phase-transition behavior of CNF formulas. While CNF solving has witnessed significant runtime improvement over the past two decades, the same does not hold for several other classes, such as the conjunction of cardinality and XOR constraints, denoted CARD-XOR formulas. Determining the satisfiability of CARD-XOR formulas is a fundamental problem with a wide variety of applications, ranging from discrete integration in artificial intelligence to maximum-likelihood decoding in coding theory. The runtime behavior of random CARD-XOR formulas is unexplored in prior work. In this paper, we present the first rigorous empirical study characterizing the runtime behavior of 1-CARD-XOR formulas. We show empirical evidence of a surprising phase transition that follows a non-linear tradeoff between CARD and XOR constraints.
[c28]
CrystalBall: Gazing in the Black Box of SAT Solving
Mate Soos, Raghav Kulkarni, and Kuldeep S. Meel
Proceedings of the International Conference on Theory and Applications of Satisfiability Testing (SAT), 2019.
Boolean satisfiability is a fundamental problem in computer science with a wide range of applications, including planning, configuration management, and design and verification of software/hardware systems. The annual SAT competition continues to witness impressive improvements in the performance of the winning SAT solvers, largely thanks to the development of new heuristics arising out of intensive collaborative research in the SAT community. Modern SAT solvers achieve scalability and robustness with complex heuristics that are challenging to understand and explain. Consequently, the development of new algorithmic insights has been largely restricted to expert intuition, and the evaluation of new insights has been restricted to performance measurement in terms of solver runtime or a proxy for it. In this context, one may ask: is it possible to develop a framework that provides white-box access to the execution of a SAT solver and that can aid both SAT solver developers and users in synthesizing algorithmic heuristics for modern SAT solvers? The primary focus of our project is precisely such a framework, which we call CrystalBall. More precisely, we propose to view modern conflict-driven clause learning (CDCL) solvers as a composition of classifiers and regressors for different tasks such as branching, clause memory management, and restarting. The primary objective of this paper is to introduce CrystalBall, a framework to peek inside SAT solvers, to the AI and SAT communities. The current version of CrystalBall focuses on deriving a classifier to keep or throw away a learned clause. In a departure from recent machine learning-based techniques, CrystalBall employs supervised learning and uses extensive, multi-gigabyte data extracted from runs of a single SAT solver to perform predictive analytics.
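To make the learning task concrete, here is a toy sketch of the clause keep/discard prediction as supervised learning with scikit-learn; the features, labels, and data here are hypothetical stand-ins for what CrystalBall actually extracts from solver traces.

```python
# A toy sketch of CrystalBall's "keep or discard a learnt clause" task as
# supervised learning. Features and data here are hypothetical; the real
# pipeline extracts multi-gigabyte traces from SAT-solver runs.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-clause features: [size, LBD (glue), recent activity].
X = [[3, 2, 0.9], [12, 9, 0.1], [5, 3, 0.7], [20, 15, 0.0], [4, 2, 0.8]]
y = [1, 0, 1, 0, 1]   # 1 = clause proved useful later in the run, 0 = not

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(clf.predict([[6, 4, 0.5]]))   # predict whether to keep a new clause
```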
[c27]
Assessing Heuristic Machine Learning Explanations with Model Counting
Nina Narodytska, Aditya Shrotri, Kuldeep S. Meel, Alexey Ignatiev, and Joao Marques-Silva
Proceedings of the International Conference on Theory and Applications of Satisfiability Testing (SAT), 2019.
Machine Learning (ML) models are widely used in decision-making procedures in finance, medicine, education, etc. In these areas, ML outcomes can directly affect humans, e.g., by deciding whether a person should get a loan or be released from prison. Therefore, we cannot blindly rely on black-box ML models and need to explain the decisions made by them. This has motivated the development of a variety of ML-explainer systems, concrete examples of which include LIME and its successor ANCHOR. Due to the heuristic nature of explanations produced by existing tools, it is necessary to validate them. In this work, we propose a SAT-based method to assess the quality of explanations produced by ANCHOR. We encode a trained ML model and an explanation for a given prediction as a propositional formula. Then, by using a state-of-the-art approximate model counter, we estimate the quality of the provided explanation as the number of solutions supporting it.
[c26]
WAPS: Weighted and Projected Sampling
Rahul Gupta, Shubham Sharma, Subhajit Roy, and Kuldeep S. Meel
Proceedings of Tools and Algorithms for the Construction and Analysis of Systems (TACAS), 2019.
Previous work on applying knowledge compilation to sampling has focused on uniform sampling over all the variables. Since constraints are often written in high-level languages such as Verilog, popular CNF encoding schemes such as Tseitin encoding introduce additional auxiliary variables. The resulting CNF formulas are not equivalent but equisatisfiable: for a formula G specified in a high-level language, we obtain a CNF formula F such that G(X) = ∃Y F(X, Y). This makes one wonder whether it is possible to extend knowledge compilation-based techniques to sample over a subset of variables. Furthermore, languages such as Verilog allow the specification of weights on user-defined constraints, so there is also a need to sample according to a posterior distribution. In this paper, we answer both questions in the affirmative: we propose WAPS, which samples over a user-defined subset of variables from the posterior distribution for a given prior distribution defined over product spaces.
[c25]
Bosphorus: Bridging ANF and CNF Solvers
Davin Choo, Mate Soos, Kian Ming A. Chai, and Kuldeep S. Meel
Proceedings of Design, Automation, and Test in Europe (DATE), 2019.
Algebraic Normal Form (ANF) and Conjunctive Normal Form (CNF) are commonly used to encode problems in Boolean algebra. ANFs are typically solved via Gröbner basis algorithms, often using more memory than is feasible, while CNFs are solved using SAT solvers, which cannot exploit the algebra of polynomials naturally. We propose a paradigm that bridges ANF and CNF solving techniques: the techniques are applied in an iterative manner to learn facts that augment the original problems. Experiments on over 1,100 benchmarks arising from four different application domains demonstrate that learnt facts can significantly improve runtime and enable more benchmarks to be solved.
[c24]
BIRD: Engineering an Efficient CNF-XOR SAT Solver and its Applications to Approximate Model Counting
Mate Soos and Kuldeep S. Meel
Proceedings of AAAI Conference on Artificial Intelligence (AAAI), 2019.
Given a Boolean formula F, the problem of model counting, also referred to as #SAT, is to compute the number of solutions of F. Model counting is a fundamental problem in artificial intelligence with a wide range of applications, including probabilistic reasoning, decision making under uncertainty, quantified information flow, and the like. Motivated by the success of SAT solvers, there has been a surge of interest in the design of hashing-based techniques for approximate model counting over the past decade. We profiled the state-of-the-art approximate model counter ApproxMC2 and observed that over 99.99% of its time is consumed by the underlying SAT solver, CryptoMiniSat. This observation motivated us to ask: can we design an efficient underlying CNF-XOR SAT solver that takes advantage of the structure of hashing-based algorithms, and would this lead to an efficient approximate model counter? The primary contribution of this paper is an affirmative answer to the above question. We present a novel architecture, called BIRD, to handle CNF-XOR formulas arising from hashing-based techniques. The resulting hashing-based approximate model counter, called ApproxMC3, employs the BIRD framework in its underlying SAT solver, CryptoMiniSat. To the best of our knowledge, we conducted the most comprehensive performance evaluation of counting algorithms, involving 1896 benchmarks with computational effort totaling 86,400 computational hours. Our experimental evaluation demonstrates significant runtime performance improvement of ApproxMC3 over ApproxMC2. In particular, we solve 648 more benchmarks than ApproxMC2, the previous state-of-the-art approximate model counter, and for all the formulas where neither tool timed out and the runtime exceeded 1 second, the mean speedup is 284.40 -- more than two orders of magnitude.
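The XOR reasoning that BIRD relies on inside CryptoMiniSat amounts to Gaussian elimination over GF(2). A standalone sketch of that step follows; the real solver interleaves it with CDCL rather than running it in isolation as here.

```python
# A minimal sketch of Gaussian elimination over GF(2), the kind of native
# XOR reasoning BIRD relies on inside CryptoMiniSat (which integrates it
# with CDCL rather than running it standalone as here).
import numpy as np

def gauss_gf2(A, b):
    """Check the XOR system A x = b (mod 2): returns 'UNSAT' or a row-echelon system."""
    M = np.hstack([A, b.reshape(-1, 1)]) % 2
    row = 0
    for col in range(A.shape[1]):
        pivot = next((r for r in range(row, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[row, pivot]] = M[[pivot, row]]                  # swap pivot into place
        for r in range(M.shape[0]):
            if r != row and M[r, col]:
                M[r] ^= M[row]                             # eliminate with XOR
        row += 1
    # A zero row with right-hand side 1 encodes the contradiction 0 = 1.
    if any(not M[r, :-1].any() and M[r, -1] for r in range(M.shape[0])):
        return "UNSAT"
    return M

# x1 ^ x2 = 1, x2 ^ x3 = 0, x1 ^ x3 = 0: inconsistent (lhs rows sum to 0, rhs to 1).
A = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
b = np.array([1, 0, 0])
print(gauss_gf2(A, b))   # -> UNSAT
```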
[c23]
On Testing of Uniform Samplers
Sourav Chakraborty and Kuldeep S. Meel
Proceedings of AAAI Conference on Artificial Intelligence (AAAI), 2019.
Recent years have seen an unprecedented adoption of artificial intelligence in a wide variety of applications, ranging from medical diagnosis, automobiles, and security to aircraft collision avoidance. Probabilistic reasoning is a key component of such modern artificial intelligence systems, and sampling techniques form the core of state-of-the-art probabilistic reasoning systems. In contrast to testing of deterministic programs, where one trace is sufficient to prove the existence of a bug, one sample is typically not sufficient to prove a sampler's non-conformity to the desired distribution. This makes one wonder whether it is possible to design a testing methodology to check whether a sampler under test generates samples close to a given distribution. The primary contribution of this paper is a positive answer to the above question: we design, to the best of our knowledge, the first algorithmic framework, Barbarik, to test whether the distribution generated by a sampler is ε-close to or η-far from the uniform distribution. In contrast to statistical techniques that require an exponential or sub-exponential number of samples for samplers whose support can be represented by n bits, Barbarik requires only O(1/(η-ε)^2) samples. We present a prototype implementation of Barbarik and use it to test three state-of-the-art uniform samplers over supports defined by combinatorial constraints. Barbarik is able to provide a certificate of uniformity for one sampler and demonstrate non-uniformity for the other two samplers.
[c22]
On the Hardness of Probabilistic Inference Relaxations
Supratik Chakraborty, Kuldeep S. Meel, and Moshe Y. Vardi
Proceedings of AAAI Conference on Artificial Intelligence (AAAI), 2019.
Probabilistic inference is increasingly being used in applications that compute with uncertain data. A promising approach to inference that has attracted recent attention exploits its inter-reducibility with model counting. Since probabilistic inference and model counting are #P-complete, various relaxations are used in practice, with the hope that these relaxations lend themselves to efficient computation while also providing rigorous approximation guarantees. In this paper, we show that contrary to commonly held belief, several relaxations used in the probabilistic inference literature do not really lead to computational efficiency in a complexity theoretic sense. Our arguments proceed by showing the corresponding relaxed notions of counting to be computationally hard. We argue that approximate counting (and hence, inference) with multiplicative tolerance and probabilistic guarantees of correctness is the only class of relaxations that provably simplifies the problem, given access to an NP-oracle. Finally, we show that for applications that compare probability estimates with a threshold, a new notion of relaxation with gaps between low and high thresholds can be used. This new relaxation allows efficient decision making in practice, given access to an NP-oracle, while also bounding the approximation error.
[c21]
IMLI: An Incremental Framework for MaxSAT-Based Learning of Interpretable Classification Rules
Bishwamittra Ghosh and Kuldeep S. Meel
Proceedings of AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2019.
The wide adoption of machine learning in critical domains such as medical diagnosis, law, and education has propelled the need for interpretable techniques, since end users must understand the reasoning behind decisions made by learning systems. The computational intractability of interpretable learning has led practitioners to design heuristic techniques, which fail to provide sound handles to trade off accuracy and interpretability. Motivated by the success of MaxSAT solvers over the past decade, a MaxSAT-based approach, called MLIC, was recently proposed that reduces the problem of learning interpretable rules expressed in Conjunctive Normal Form (CNF) to a MaxSAT query. While MLIC achieves accuracy similar to that of other state-of-the-art black-box classifiers while generating small interpretable CNF formulas, its runtime performance lags significantly and renders the approach unusable in practice. In this context, the authors raised the question: is it possible to achieve the best of both worlds, i.e., a sound framework for interpretable learning that can take advantage of MaxSAT solvers while scaling to real-world instances? In this paper, we take a step towards answering the above question in the affirmative. We propose an incremental MaxSAT-based framework that achieves scalable runtime performance via a partition-based training methodology. Extensive experiments on benchmarks from the UCI repository demonstrate that IMLI achieves up to three orders of magnitude runtime improvement without loss of accuracy or interpretability.
2018
[c20]
Knowledge Compilation meets Uniform Sampling
Shubham Sharma, Rahul Gupta, Subhajit Roy, and Kuldeep S. Meel
Proceedings of International Conference on Logic for Programming Artificial Intelligence and Reasoning (LPAR), 2018.
Uniform sampling has diverse applications in programming languages and software engineering, such as constrained-random verification (CRV), constrained fuzzing, and bug synthesis. The effectiveness of these applications depends on the uniformity of test stimuli generated from a given set of constraints. Despite significant progress over the past few years, the performance of state-of-the-art techniques still falls short of that of the heuristic methods employed in industry, which sacrifice either uniformity or scalability when generating stimuli. In this paper, we propose a new approach to uniform generation that builds on recent progress in knowledge compilation. The primary contribution of this paper is marrying knowledge compilation with uniform sampling: our algorithm, KUS, employs state-of-the-art knowledge compilers to first compile constraints into d-DNNF form and then generates samples by making two passes over the compiled representation. We show that KUS significantly outperforms the existing state-of-the-art algorithms SPUR and UniGen2 by up to three orders of magnitude in runtime, achieving geometric-mean speedups of 1.7x and 8.3x over SPUR and UniGen2, respectively. KUS also achieves a lower PAR-2 score: around 0.82x that of SPUR and 0.38x that of UniGen2. Furthermore, KUS achieves speedups of up to three orders of magnitude for incremental sampling. The distribution generated by KUS is statistically indistinguishable from that generated by an ideal uniform sampler. Moreover, KUS is almost oblivious to the number of samples requested.
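The two passes are easy to sketch on a tiny hand-built d-DNNF, assuming the input is smooth, decomposable, and deterministic; real KUS consumes the output of a knowledge compiler such as c2d or D4, and caches counts rather than recomputing them as this sketch does.

```python
# The essence of KUS's two passes, on a tiny hand-built d-DNNF.
# Assumes a smooth, decomposable, deterministic input; real KUS consumes
# the output of a knowledge compiler such as c2d or D4.
import random

# Node forms: ('lit', var, True/False), ('and', [children]), ('or', [children]).
def count(node):                               # pass 1: bottom-up model counts
    kind = node[0]
    if kind == 'lit':
        return 1
    counts = [count(c) for c in node[1]]
    if kind == 'and':
        out = 1
        for c in counts:
            out *= c                           # decomposable AND: counts multiply
        return out
    return sum(counts)                         # deterministic OR: disjoint branches add

def sample(node, sigma):                       # pass 2: top-down proportional choice
    kind = node[0]
    if kind == 'lit':
        sigma[node[1]] = node[2]
    elif kind == 'and':
        for c in node[1]:
            sample(c, sigma)
    else:
        child = random.choices(node[1], weights=[count(c) for c in node[1]])[0]
        sample(child, sigma)

# d-DNNF for (x1 AND x2) OR (NOT x1 AND (x2 OR NOT x2)) -- 3 models.
f = ('or', [('and', [('lit', 1, True), ('lit', 2, True)]),
            ('and', [('lit', 1, False),
                     ('or', [('lit', 2, True), ('lit', 2, False)])])])
sigma = {}
sample(f, sigma)
print(count(f), sigma)                         # 3 models; sigma drawn uniformly
```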
[c19]
MLIC: A MaxSAT-Based framework for learning interpretable classification rules
Dmitry Malioutov and Kuldeep S. Meel
Proceedings of International Conference on Constraint Programming (CP), 2018.
The wide adoption of machine learning approaches in industry, government, medicine, and science has renewed interest in interpretable machine learning: many decisions are too important to be delegated to black-box techniques such as deep neural networks or kernel SVMs. Historically, problems of learning interpretable classifiers, including classification rules and decision trees, have been approached by greedy heuristic methods, as essentially all the exact optimization formulations are NP-hard. Our primary contribution is a MaxSAT-based framework, called MLIC, which allows principled search for interpretable classification rules expressible in propositional logic. Our approach benefits from the revolutionary advances in the constraint satisfaction community for solving large-scale instances of such problems. In experimental evaluations over a collection of benchmarks arising from practical scenarios, we demonstrate its effectiveness: the formulation can solve large classification problems with tens or hundreds of thousands of examples and thousands of features, and provides a tunable balance of accuracy vs. interpretability. Furthermore, we show that in many problems interpretability can be obtained at only a minor cost in accuracy. The primary objective of the paper is to show that recent advances in the MaxSAT literature make it realistic to find optimal (or very high-quality near-optimal) solutions to large-scale classification problems. The key goal of the paper is to excite researchers in both interpretable classification and the CP community to take it further, propose richer formulations, and develop bespoke solvers attuned to the problem of interpretable ML.
[c18]
Not All FPRASs are Equal: Demystifying FPRASs for DNF-Counting
Kuldeep S. Meel, Aditya A. Shrotri, and Moshe Y. Vardi
Proceedings of International Conference on Constraint Programming (CP), 2018.
Invited to Constraints journal
The problem of counting the number of solutions of a DNF formula, also called #DNF, is a fundamental problem in artificial intelligence with applications in diverse domains ranging from network reliability to probabilistic databases. Owing to the intractability of the exact variant, efforts have focused on the design of approximate techniques for #DNF. Consequently, several Fully Polynomial Randomized Approximation Schemes (FPRASs) based on Monte Carlo techniques have been proposed. Recently, it was discovered that hashing-based techniques too lend themselves to FPRASs for #DNF. Despite significant improvements, the complexity of the hashing-based FPRAS is still worse than that of the best Monte Carlo FPRAS by polylog factors. Two questions were left unanswered in previous works: can the complexity of the hashing-based techniques be improved? How do the various approaches stack up against each other empirically? In this paper, we first propose a new search procedure for the hashing-based FPRAS that removes the polylog factors from its time complexity. We then present the first empirical study of the runtime behavior of different FPRASs for #DNF. The results of our study paint a nuanced picture: first, we observe that there is no single best algorithm that outperforms all others for all classes of formulas and input parameters; second, we observe that the algorithm with the worst time complexity solves the largest number of benchmarks.
[c17]
Scalable Approximation of Quantitative Information Flow in Programs
Fabrizio Biondi, Michael Enescu, Annelie Heuser, Axel Legay, Kuldeep S. Meel, and Jean Quilbeuf
Proceedings of International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI), 2018.
Quantitative information flow measurement techniques have been proven to be successful in detecting leakage of confidential information from programs. Modern approaches are based on formal methods, relying on program analysis to produce a SAT formula representing the program's behavior, and on model counting to measure the possible information flow. However, while program analysis scales to large codebases like the OpenSSL project, the formulas produced are too complex for analysis with precise model counting. In this paper we use the approximate model counter ApproxMC2 to quantify information flow. We show that ApproxMC2 is able to provide a large performance increase for a very small loss of precision, allowing the analysis of SAT formulas produced from complex code. We call the resulting technique ApproxFlow and test it on a large set of benchmarks against the state of the art. Finally, we show that ApproxFlow can evaluate the leakage incurred by the Heartbleed OpenSSL bug, in contrast to the state of the art.
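For context, the standard reduction this line of work builds on (usually credited to Smith) measures the min-entropy leakage of a deterministic program with a uniformly distributed secret as the logarithm of the number of feasible outputs, which is exactly the quantity a model counter estimates:

```latex
% For a deterministic program P with uniformly distributed secret input s,
% min-entropy leakage reduces to counting the feasible outputs:
\mathcal{L}(P) \;=\; \log_2 \bigl|\{\, o : \exists s.\; P(s) = o \,\}\bigr|
```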
2017
[c16]
On Hashing-Based Approaches to Approximate DNF-Counting
Kuldeep S. Meel, Aditya A. Shrotri, and Moshe Y. Vardi
Proceedings of IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS), 2017.
Propositional model counting is a fundamental problem in artificial intelligence with a wide variety of applications, such as probabilistic inference, decision making under uncertainty, and probabilistic databases. Consequently, the problem is of theoretical as well as practical interest. When the constraints are expressed as DNF formulas, Monte Carlo-based techniques have been shown to provide a fully polynomial randomized approximation scheme (FPRAS). For CNF constraints, hashing-based approximation techniques have been demonstrated to be highly successful. Furthermore, it was shown that hashing-based techniques also yield an FPRAS for DNF counting without the use of Monte Carlo sampling. Our analysis, however, shows that the proposed hashing-based approach to DNF counting provides poor time complexity compared to the Monte Carlo-based DNF counting techniques. Given the success of hashing-based techniques for CNF constraints, it is natural to ask: can hashing-based techniques provide an efficient FPRAS for DNF counting? In this paper, we provide a positive answer to this question. To this end, we introduce two novel algorithmic techniques, Symbolic Hashing and Stochastic Cell Counting, along with a new hash family of Row-Echelon hash functions. These innovations allow us to design a hashing-based FPRAS for DNF counting with complexity similar to that of prior works. Furthermore, we expect these techniques to have potential applications beyond DNF counting.
[c15]
The Hard Problems Are Almost Everywhere For Random CNF-XOR Formulas
Jeffrey Dudek, Kuldeep S. Meel, and Moshe Y. Vardi
Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2017.
Recent universal-hashing-based approaches to sampling and counting crucially depend on the runtime performance of SAT solvers on formulas expressed as the conjunction of both CNF constraints and variable-width XOR constraints (known as CNF-XOR formulas). In this paper, we present the first study of the runtime behavior of SAT solvers equipped with XOR-reasoning techniques on random CNF-XOR formulas. We empirically demonstrate that a state-of-the-art SAT solver scales exponentially on random CNF-XOR formulas across a wide range of XOR-clause densities, peaking around the empirical phase-transition location. On the theoretical front, we prove that the solution space of a random CNF-XOR formula 'shatters' at all nonzero XOR-clause densities into well-separated components, similar to the behavior seen in random CNF formulas known to be difficult for many SAT-solving algorithms.
[c14]
Counting-Based Reliability Estimation for Power-Transmission Grids
Leonardo Duenas-Osorio, Kuldeep S. Meel, Roger Paredes, and Moshe Y. Vardi
Proceedings of AAAI Conference on Artificial Intelligence (AAAI), 2017.
Modern society is increasingly reliant on the functionality of infrastructure facilities and utility services. Consequently, there has been a surge of interest in the problem of quantifying system reliability, which is known to be #P-complete. Reliability also contributes to the resilience of systems, helping them bounce back after contingencies. Despite diverse progress, most techniques to estimate system reliability and resilience remain computationally expensive. In this paper, we investigate how recent advances in hashing-based approaches to counting can be exploited to improve computational techniques for system reliability. The primary contribution of this paper is a novel framework, RelNet, that provides provably approximately correct (PAC) estimates for arbitrary networks. We then apply RelNet to ten real-world power transmission grids across different cities in the U.S. and are able to obtain, to the best of our knowledge, the first theoretically sound a priori estimates of reliability between several pairs of nodes of interest. Such estimates will help manage uncertainty and support rational decision making for community resilience.
2016
[c13]
Algorithmic Improvements in Approximate Counting for Probabilistic Inference: From Linear to Logarithmic SAT Calls
Supratik Chakraborty, Kuldeep S. Meel, and Moshe Y. Vardi
Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2016.
Probabilistic inference via model counting has emerged as a scalable technique with strong formal guarantees, thanks to recent advances in hashing-based approximate counting. State-of-the-art hashing-based counting algorithms use an NP oracle, such that the number of oracle invocations grows linearly in the number of variables n in the input constraint. We present a new approach to hashing-based approximate model counting in which the number of oracle invocations grows logarithmically in n, while still providing strong theoretical guarantees. Our experiments show that the new approach outperforms state-of-the-art techniques for approximate counting by 1-2 orders of magnitude in running time.
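The heart of the improvement is replacing a linear scan over the number m of XOR constraints with a binary search. A sketch with an abstract oracle follows; note that real cell counts are random rather than monotone in m, so the actual algorithm must work harder than this deterministic toy.

```python
# The core idea of the paper: find the right number m of XOR constraints
# with O(log n) oracle calls instead of a linear scan. `cell_is_small` is
# an abstract stand-in for the NP-oracle query "does the formula with m
# random XORs have at most `thresh` solutions?" (deterministic and
# monotone here, unlike the randomized reality the paper handles).
def find_m(cell_is_small, n):
    lo, hi = 0, n                       # invariant: not small at lo, small at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cell_is_small(mid):
            hi = mid
        else:
            lo = mid
    return hi                           # smallest m whose cell is small enough

# Toy oracle: pretend the count halves per XOR from 2^20, with threshold 64.
print(find_m(lambda m: 2 ** (20 - m) <= 64, n=20))   # -> 14
```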
[c12]
Combining the k-CNF and XOR Phase-Transitions
Jeffrey Dudek, Kuldeep S. Meel, and Moshe Y. Vardi
Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2016.
The runtime performance of modern SAT solvers on random k-CNF formulas is deeply connected with the `phase-transition' phenomenon seen empirically in the satisfiability of random k-CNF formulas. Recent universal hashing-based approaches to sampling and counting crucially depend on the runtime performance of SAT solvers on formulas expressed as the conjunction of both k-CNF and XOR constraints (known as k-CNF-XOR formulas), but the behavior of random k-CNF-XOR formulas is unexplored in prior work. In this paper, we present the first study of the satisfiability of random k-CNF-XOR formulas. We show empirical evidence of a surprising phase-transition that follows a linear trade-off between k-CNF and XOR constraints. Furthermore, we prove that a phase-transition for k-CNF-XOR formulas exists for k = 2 and (when the number of k-CNF constraints is small) for k > 2.
[c11]
Approximate Probabilistic Inference via Word-Level Counting
Supratik Chakraborty, Kuldeep S. Meel, Rakesh Mistry, and Moshe Y. Vardi
Proceedings of AAAI Conference on Artificial Intelligence (AAAI), 2016.
Hashing-based model counting has emerged as a promising approach for large-scale probabilistic inference on graphical models. A key component of these techniques is the use of xor-based 2-universal hash functions that operate over Boolean domains. Many counting problems arising in probabilistic inference are, however, naturally encoded over finite discrete domains. Techniques based on bit-level (or Boolean) hash functions require these problems to be propositionalized, making it impossible to leverage the remarkable progress made in SMT (Satisfiability Modulo Theory) solvers that can reason directly over words (or bit-vectors). In this work, we present the first approximate model counter that uses word-level hashing functions, and can directly leverage the power of sophisticated SMT solvers. Empirical evaluation over an extensive suite of benchmarks demonstrates the promise of the approach.
[c10]
Automatic Data Layout Generation and Kernel Mapping for CPU+GPU Architectures
Deepak Majeti, Kuldeep S. Meel, Raj Barik, and Vivek Sarkar
Proceedings of International Conference on Compiler Construction (CC), 2016.
The ubiquity of hybrid CPU+GPU architectures has led to renewed interest in automatic data layout generation, owing to the fact that data layouts have a large impact on performance and that different data layouts yield the best performance on CPUs vs. GPUs. Unfortunately, current programming models still fail to provide an effective solution to the problem of automatic data layout generation for CPU+GPU processors. Specifically, the interaction among whole-program data layout optimizations, data movement optimizations, and mapping of kernels across heterogeneous cores poses a major challenge to current programming systems. In this paper, we introduce a novel two-level hierarchical formulation of the data layout and kernel mapping problem for modern heterogeneous architectures. The top-level formulation targets data layouts and kernel mapping for the entire program, for which we provide a polynomial-time solution using a graph-based shortest-path algorithm that uses the data layouts for the code regions (sections) for a given processor computed in the bottom-level formulation. The bottom-level formulation deals with the data layout problem for a parallel code region on a given processor, which is NP-hard, and we provide a greedy algorithm that uses an affinity graph to obtain approximate solutions. We have implemented this data layout transformation in the new Heterogeneous Habanero-C (H2C) parallel programming framework and propose performance models to characterize the data layout impact on both the CPU and GPU. Our data layout framework shows significant performance improvements of up to 2.9x (geometric mean 1.5x) on a multicore CPU+GPU compared to manually specified layouts for a set of parallel programs running on a heterogeneous platform consisting of an Intel Xeon CPU and an NVIDIA GPU. Further, our framework also shows performance improvements of up to 2.7x (geometric mean 1.6x) on just the multicore CPU, demonstrating the applicability of our approach to both heterogeneous and homogeneous hardware platforms.
[c9]
Design and Verification of Distributed Phasers
Karthik Murthy, Sri Raj Paul, Kuldeep S. Meel, Tiago Cogumbreiro, and John Mellor-Crummey
Proceedings of International European Conference on Parallel and Distributed Computing (Euro-Par), 2016.
A phaser is an expressive synchronization construct that unifies collective and point-to-point coordination with dynamic registration of parallel tasks. Each task can participate in a phaser as a signaler, a waiter, or both. The participants in a phaser may change over time as tasks are added and deleted. In this paper, we present a highly concurrent and scalable design of phasers for a distributed-memory environment. Our design for a distributed phaser employs a pair of skip lists augmented with the ability to collect and propagate synchronization signals. To enable a high degree of concurrency, addition and deletion of participant tasks are performed in two phases: a "fast single-link-modify" step followed by multiple hand-over-hand "lazy multi-link-modify" steps. Verifying highly concurrent protocols is difficult. We analyze our design for a distributed phaser using the SPIN model checker. A straightforward approach to model checking a distributed phaser operation requires an infeasibly large state space. To address this issue, we employ a novel "message-based" model checking scheme that enables non-approximate, complete model checking of our phaser design. We guarantee the semantic properties of phaser operations by ensuring that a set of linear temporal logic formulae is valid during model checking. We also present a complexity analysis of the cost of synchronization and structural operations.
2015
[c8]
On Computing Minimal Independent Support and Its Applications to Sampling and Counting
Alexander Ivrii, Sharad Malik, Kuldeep S. Meel, and Moshe Y. Vardi
Proceedings of International Conference on Constraint Programming (CP), 2015.
Best Student Paper Award
Constrained sampling and counting are two fundamental problems arising in domains ranging from artificial intelligence and security to hardware and software testing. Recent approaches to approximate solutions for these problems rely on employing SAT solvers and universal hash functions that are typically encoded as XOR constraints of length n/2 for an input formula with n variables. As the runtime performance of SAT solvers heavily depends on the length of XOR constraints, recent research efforts have focused on reducing the length of XOR constraints. Consequently, a notion of Independent Support was proposed, and it was shown that constructing XORs over an independent support (if known) can lead to a significant reduction in the length of XOR constraints without losing the theoretical guarantees of sampling and counting algorithms. In this paper, we present the first algorithmic procedure (and a corresponding tool, called MIS) to determine minimal independent support for a given CNF formula by employing a reduction to group minimal unsatisfiable subsets (GMUS). By utilizing minimal independent supports computed by MIS, we provide new tighter bounds on the length of XOR constraints for constrained counting and sampling. Furthermore, the universal hash functions constructed from independent supports computed by MIS provide two to three orders of magnitude performance improvement in state-of-the-art constrained sampling and counting tools, while still retaining theoretical guarantees.
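What "independent support" means can be checked by brute force on a toy formula (the formula and variable names below are ours for illustration); MIS itself avoids this enumeration by reducing the problem to group-MUS extraction.

```python
# What "independent support" means, checked by brute force on a toy
# formula (MIS itself reduces the problem to group-MUS extraction).
from itertools import product

def models(f, n):
    return [a for a in product([False, True], repeat=n) if f(*a)]

def is_independent_support(f, n, S):
    """S is an independent support iff models agreeing on S agree everywhere."""
    seen = {}
    for m in models(f, n):
        key = tuple(m[i] for i in S)
        if seen.setdefault(key, m) != m:
            return False
    return True

# x3 <-> (x1 XOR x2): {x1, x2} determines x3, so {0, 1} is an independent support.
f = lambda x1, x2, x3: x3 == (x1 != x2)
print(is_independent_support(f, 3, [0, 1]))   # -> True
print(is_independent_support(f, 3, [0]))      # -> False
```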
[c7]
From Weighted to Unweighted Model Counting
Supratik Chakraborty, Dror Fried, Kuldeep S. Meel, and Moshe Y. Vardi
Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2015.
The recent surge of interest in reasoning about probabilistic graphical models has led to the development of various techniques for probabilistic reasoning. Of these, techniques based on weighted model counting are particularly interesting since they can potentially leverage recent advances in unweighted model counting and in propositional satisfiability solving. In this paper, we present a new approach to weighted model counting via reduction to unweighted model counting. Our reduction, which is polynomial-time and preserves the normal form (CNF/DNF) of the input formula, allows us to exploit advances in unweighted model counting to solve weighted model counting instances. Experiments with weighted model counters built using our reduction indicate that these counters perform much better than a state-of-the-art weighted model counter.
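The counting identity behind such a reduction can be verified numerically: replacing a variable of weight k/2^m by "v holds iff m fresh bits encode an integer below k" scales the weighted count by exactly 2^m. A brute-force sketch, with a formula and weights of our own choosing (the paper's chain formulas achieve the same effect with small CNF):

```python
# Brute-force check of the identity behind weighted-to-unweighted reduction:
# replacing a variable v of weight k/2^m by "v <-> (m fresh bits encode an
# integer < k)" turns weighted counts into ordinary counts scaled by 2^m.
from itertools import product

k, m = 3, 3                      # w(v) = 3/8, w(not v) = 5/8
F = lambda v, x: v or x          # toy formula over v and one unweighted var x

# Weighted model count: each model weighs w(v) or 1 - w(v).
wmc = sum((k / 2**m if v else 1 - k / 2**m)
          for v, x in product([False, True], repeat=2) if F(v, x))

# Unweighted count of F' = F AND (v <-> int(b) < k) over v, x, b1..bm.
mc = sum(F(v, x) and (v == (int("".join(map(str, b)), 2) < k))
         for v, x, *b in product([0, 1], repeat=2 + m))

print(wmc, mc / 2**m)            # both 1.375: the identity holds
```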
[c6]
On Parallel Scalable Uniform SAT Witness Generation
Supratik Chakraborty, Daniel J. Fremont, Kuldeep S. Meel, Sanjit A. Seshia, and Moshe Y. Vardi
Proceedings of Tools and Algorithms for the Construction and Analysis of Systems (TACAS), 2015.
Constrained-random verification (CRV) is widely used in industry for validating hardware designs. The effectiveness of CRV depends on the uniformity of test stimuli generated from a given set of constraints. Most existing techniques sacrifice either uniformity or scalability when generating stimuli. While recent work based on random hash functions has shown that it is possible to generate almost uniform stimuli from constraints with 100,000+ variables, the performance still falls short of today's industrial requirements. In this paper, we focus on pushing the performance frontier of uniform stimulus generation further. We present a random hashing-based, easily parallelizable algorithm, UniGen2, for sampling solutions of propositional constraints. UniGen2 provides strong and relevant theoretical guarantees in the context of CRV, while also offering significantly improved performance compared to existing almost-uniform generators. Experiments on a diverse set of benchmarks show that UniGen2 achieves an average speedup of about 20X over a state-of-the-art sampling algorithm, even when running on a single core. Moreover, experiments with multiple cores show that UniGen2 achieves a near-linear speedup in the number of cores, thereby boosting performance even further.
2014
[c5]
ADHA: Automatic Data layout framework for Heterogeneous Architectures
Deepak Majeti, Kuldeep S. Meel, Raj Barik, and Vivek Sarkar
Proceedings of Parallel Architecture and Compilation Techniques (PACT), 2014.
Data layouts play a crucial role in determining the performance of a given application running on a given architecture. Existing parallel programming frameworks for both multicore and heterogeneous systems leave the onus of selecting a data layout to the programmer. Therefore, shifting the burden of data layout selection to optimizing compilers can greatly enhance programmer productivity and application performance. In this work, we introduce ADHA: a two-level hierarchical formulation of the data layout problem for modern heterogeneous architectures. We have created a reference implementation of ADHA in the Heterogeneous Habanero-C (H2C) parallel programming system. ADHA shows significant performance benefits of up to 6.92x compared to manually specified layouts for two benchmark programs running on a CPU+GPU heterogeneous platform.
[c4]
Distribution-Aware Sampling and Weighted Model Counting for SAT
Supratik Chakraborty, Daniel J. Fremont, Kuldeep S. Meel, Sanjit A. Seshia, and Moshe Y. Vardi
Proceedings of AAAI Conference on Artificial Intelligence (AAAI), 2014.
Given a CNF formula and a weight for each assignment of values to variables, two natural problems are weighted model counting and distribution-aware sampling of satisfying assignments. Both problems have a wide variety of important applications. Due to the inherent complexity of the exact versions of the problems, interest has focused on solving them approximately. Prior work in this area scaled only to small problems in practice, or failed to provide strong theoretical guarantees, or employed a computationally expensive maximum a posteriori probability (MAP) oracle that assumes prior knowledge of a factored representation of the weight distribution. We present a novel approach that works with a black-box oracle for weights of assignments and requires only an NP oracle (in practice, a SAT solver) to solve both the counting and sampling problems. Our approach works under mild assumptions on the distribution of weights of satisfying assignments, provides strong theoretical guarantees, and scales to problems involving several thousand variables. We also show that the assumptions can be significantly relaxed while improving computational efficiency if a factored representation of the weights is known.
[c3]
Balancing Scalability and Uniformity in SAT-Witness Generator
Supratik Chakraborty, Kuldeep S. Meel, and Moshe Y. Vardi
Proceedings of Design Automation Conference (DAC), 2014.
Constrained-random simulation is the predominant approach used in the industry for functional verification of complex digital designs. The effectiveness of this approach depends on two key factors: the quality of constraints used to generate test vectors, and the randomness of solutions generated from a given set of constraints. In this paper, we focus on the second problem, and present an algorithm that significantly improves the state-of-the-art of (almost-)uniform generation of solutions of large Boolean constraints. Our algorithm provides strong theoretical guarantees on the uniformity of generated solutions and scales to problems involving hundreds of thousands of variables.
2013
[c2]
A Scalable Approximate Model Counter
Supratik Chakraborty, Kuldeep S. Meel, and Moshe Y. Vardi
Proceedings of International Conference on Constraint Programming (CP), 2013.
Propositional model counting (#SAT), i.e., counting the number of satisfying assignments of a propositional formula, is a problem of significant theoretical and practical interest. Due to the inherent complexity of the problem, approximate model counting, which counts the number of satisfying assignments to within given tolerance and confidence level, was proposed as a practical alternative to exact model counting. Yet, approximate model counting has been studied essentially only theoretically. The only reported implementation of approximate model counting, due to Karp and Luby, worked only for DNF formulas. A few existing tools for CNF formulas are bounding model counters; they can handle realistic problem sizes, but fall short of providing counts within given tolerance and confidence, and, thus, are not approximate model counters. We present here a novel algorithm, as well as a reference implementation, that is the first scalable approximate model counter for CNF formulas. The algorithm works by issuing a polynomial number of calls to a SAT solver. Our tool, ApproxMC, scales to formulas with tens of thousands of variables. Careful experimental comparisons show that ApproxMC reports, with high confidence, bounds that are close to the exact count, and also succeeds in reporting bounds with small tolerance and high confidence in cases that are too large for computing exact model counts.
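The core hashing loop is short enough to sketch with a brute-force enumerator standing in for the SAT solver: add random XOR constraints until the surviving cell is small, then scale its size back up. The real tool repeats this loop and takes a median to obtain its tolerance and confidence guarantees; the threshold and toy formula below are ours.

```python
# The core hashing loop of ApproxMC, with a brute-force enumerator standing
# in for the SAT solver (the real tool uses CryptoMiniSat, repeats this
# loop, and returns a median to get tolerance/confidence guarantees).
import random
from itertools import product

def approx_count(sat, n, thresh=8, seed=0):
    rng = random.Random(seed)
    sols = [s for s in product([0, 1], repeat=n) if sat(s)]   # stand-in oracle
    for m in range(n + 1):
        # m random XOR constraints: parity of a random subset, plus a random bit.
        xors = [(rng.choices([0, 1], k=n), rng.choice([0, 1])) for _ in range(m)]
        cell = [s for s in sols
                if all(sum(a * x for a, x in zip(row, s)) % 2 == b
                       for row, b in xors)]
        if len(cell) <= thresh:
            return len(cell) * 2 ** m       # scale the small cell back up
    return len(sols)

# Toy formula with 2^7 = 128 solutions: first bit must be 1, over 8 variables.
print(approx_count(lambda s: s[0] == 1, n=8))
```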
[c1]
A Scalable and Nearly Uniform Generator of SAT Witnesses
Supratik Chakraborty, Kuldeep S. Meel, and Moshe Y. Vardi
Proceedings of International Conference on Computer-Aided Verification (CAV), 2013.
Functional verification constitutes one of the most challenging tasks in the development of modern hardware systems, and simulation-based verification techniques dominate the functional verification landscape. A dominant paradigm in simulation-based verification is directed random testing, where a model of the system is simulated with a set of random test stimuli that are uniformly or near-uniformly distributed over the space of all stimuli satisfying a given set of constraints. Uniform or near-uniform generation of solutions for large constraint sets is therefore a problem of theoretical and practical interest. For Boolean constraints, prior work offered heuristic approaches with no guarantee of performance, and theoretical approaches with proven guarantees, but poor performance in practice. We offer here a new approach with theoretical performance guarantees and demonstrate its practical utility on large constraint sets.

Journal Papers (Refereed and Archived)

2019
[j3]
Principled network reliability approximation: A counting-based approach
Roger Paredes, Leonardo Duenas-Osorio, Kuldeep S. Meel, and Moshe Y. Vardi
Journal of Reliability Engineering and System Safety (RESS), 2019.
As engineered systems expand, become more interdependent, and operate in real time, reliability assessment is indispensable to support investment and decision making. However, network reliability problems are known to be #P-complete, a computational complexity class largely believed to be intractable. The computational intractability of network reliability motivates our quest for reliable approximations. Based on their theoretical foundations, available methods can be grouped as follows: (i) exact or bounds, (ii) guarantee-less sampling, and (iii) probably approximately correct (PAC). Group (i) is well regarded due to its useful byproducts, but it does not scale in practice. Group (ii) scales well and verifies desirable properties, such as bounded relative error, but it lacks error guarantees. Group (iii) is of great interest when precision and scalability are required, as it harbors computationally feasible approximation schemes with PAC guarantees. We give a comprehensive review of classical methods before introducing modern techniques and our developments. We introduce K-RelNet, an extended counting-based estimation method that delivers PAC guarantees for the K-terminal reliability problem. We then test the methods' performance on various benchmark systems. We highlight the algorithms' range of applicability and provide a foundation for future resilience engineering, which increasingly necessitates methods for uncertainty quantification in complex systems.
[j2]
Not All FPRASs are Equal: Demystifying FPRASs for DNF-Counting
Kuldeep S. Meel, Aditya A. Shrotri, and Moshe Y. Vardi
Constraints, 2019.
The problem of counting the number of solutions of a DNF formula, also called #DNF, is a fundamental problem in artificial intelligence with applications in diverse domains ranging from network reliability to probabilistic databases. Owing to the intractability of the exact variant, efforts have focused on the design of approximate techniques for #DNF. Consequently, several Fully Polynomial Randomized Approximation Schemes (FPRASs) based on Monte Carlo techniques have been proposed. Recently, it was discovered that hashing-based techniques too lend themselves to FPRASs for #DNF. Despite significant improvements, the complexity of the hashing-based FPRAS is still worse than that of the best Monte Carlo FPRAS by polylog factors. Two questions were left unanswered in previous works: can the complexity of the hashing-based techniques be improved? How do the various approaches stack up against each other empirically? In this paper, we first propose a new search procedure for the hashing-based FPRAS that removes the polylog factors from its time complexity. We then present the first empirical study of the runtime behavior of different FPRASs for #DNF. The results of our study paint a nuanced picture: first, we observe that there is no single best algorithm that outperforms all others for all classes of formulas and input parameters; second, we observe that the algorithm with the worst time complexity solves the largest number of benchmarks.
2016
[j1]
On Computing Minimal Independent Support and Its Applications to Sampling and Counting
Alexander Ivrii, Sharad Malik, Kuldeep S. Meel, and Moshe Y. Vardi
Constraints 21(1), 2016.
Constrained sampling and counting are two fundamental problems arising in domains ranging from artificial intelligence and security to hardware and software testing. Recent approaches to approximate solutions for these problems rely on employing SAT solvers and universal hash functions that are typically encoded as XOR constraints of length n/2 for an input formula with n variables. As the runtime performance of SAT solvers heavily depends on the length of XOR constraints, recent research efforts have focused on reducing the length of XOR constraints. Consequently, a notion of Independent Support was proposed, and it was shown that constructing XORs over an independent support (if known) can lead to a significant reduction in the length of XOR constraints without losing the theoretical guarantees of sampling and counting algorithms. In this paper, we present the first algorithmic procedure (and a corresponding tool, called MIS) to determine minimal independent support for a given CNF formula by employing a reduction to group minimal unsatisfiable subsets (GMUS). By utilizing minimal independent supports computed by MIS, we provide new tighter bounds on the length of XOR constraints for constrained counting and sampling. Furthermore, the universal hash functions constructed from independent supports computed by MIS provide two to three orders of magnitude performance improvement in state-of-the-art constrained sampling and counting tools, while still retaining theoretical guarantees.

Workshop Papers (Refereed and Archived)

2016
[w1]
Constrained Sampling and Counting: Universal Hashing meets SAT Solving
Kuldeep S. Meel, Moshe Y. Vardi, Supratik Chakraborty, Daniel J. Fremont, Sanjit A. Seshia, Dror Fried, Alexander Ivrii, and Sharad Malik
Proceedings of Workshop on Beyond NP (BNP), 2016.
Constrained sampling and counting are two fundamental problems in artificial intelligence with a diverse range of applications, spanning probabilistic reasoning, planning, and constrained-random verification. While the theory of these problems was thoroughly investigated in the 1980s, prior work either did not scale to industrial-size instances or gave up correctness guarantees to achieve scalability. Recently, we proposed a novel approach that combines universal hashing and SAT solving and scales to formulas with hundreds of thousands of variables without giving up correctness guarantees. This paper provides an overview of the key ingredients of the approach and discusses challenges that need to be overcome to handle larger real-world instances.