2018 
Scaling Discrete Integration and Sampling: Foundations and Challenges
Co-presented with Supratik Chakraborty. International Joint Conference on Artificial Intelligence (IJCAI 2018) 
2017 
Discrete Sampling and Integration for the AI Practitioner
Co-presented with Supratik Chakraborty and Moshe Y. Vardi. AAAI Conference on Artificial Intelligence (AAAI 2017) 
2016 
Discrete Sampling and Integration in High Dimensional Spaces
Co-presented with Supratik Chakraborty and Moshe Y. Vardi. Conference on Uncertainty in Artificial Intelligence (UAI 2016) 
2017 
Constrained Counting and Sampling: Bridging the Gap between Theory and Practice
PhD Thesis, Rice University, September 2017. Winner of the 2018 Ralph Budd Award for Best Engineering Thesis. Honorable mention, 2018 ACP Doctoral Dissertation Award. 
2014 
Sampling Techniques for Boolean Satisfiability
Master's Thesis, Rice University, April 2014. Winner of the 2014 VCLA (Vienna Center for Logic and Algorithms) Outstanding Master's Thesis Award. 
2018  
[c23] 
Knowledge Compilation meets Uniform Sampling
Proceedings of International Conference on Logic for Programming Artificial Intelligence and Reasoning (LPAR), 2018.
Uniform sampling has found diverse applications in programming languages and software engineering, such as constrained-random verification (CRV), constrained fuzzing, and bug synthesis. The effectiveness of these applications depends on the uniformity of test stimuli generated from a given set of constraints. Despite significant progress over the past few years, the performance of state-of-the-art techniques still falls short of that of the heuristic methods employed in industry, which sacrifice either uniformity or scalability when generating stimuli. In this paper, we propose a new approach to uniform generation that builds on recent progress in knowledge compilation. The primary contribution of this paper is marrying knowledge compilation with uniform sampling: our algorithm, KUS, employs state-of-the-art knowledge compilers to first compile constraints into d-DNNF form, and then generates samples by making two passes over the compiled representation. We show that KUS is able to significantly outperform the existing state-of-the-art algorithms, SPUR and UniGen2, by up to 3 orders of magnitude in terms of runtime, while achieving geometric speedups of 1.7x and 8.3x over SPUR and UniGen2 respectively. KUS also achieves a lower PAR-2 score: around 0.82x that of SPUR and 0.38x that of UniGen2. Furthermore, KUS achieves speedups of up to 3 orders of magnitude for incremental sampling. The distribution generated by KUS is statistically indistinguishable from that generated by an ideal uniform sampler. Moreover, KUS is almost oblivious to the number of samples requested.
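The two-pass scheme over a compiled representation can be sketched as follows. This is an illustrative reconstruction, not the KUS implementation: it assumes a smooth d-DNNF (children of each OR node mention the same variables) and uses a hypothetical `Node` class of my own.

```python
import random

# Hypothetical node type for a smooth d-DNNF: 'lit' leaves hold a signed
# variable index; 'and' nodes are decomposable; 'or' nodes are deterministic
# (their children have disjoint model sets over the same variables).
class Node:
    def __init__(self, kind, children=None, lit=None):
        self.kind, self.children, self.lit = kind, children or [], lit

def annotate_counts(node):
    """First pass (bottom-up): store each node's model count."""
    if node.kind == 'lit':
        node.models = 1
    elif node.kind == 'and':
        node.models = 1
        for c in node.children:
            node.models *= annotate_counts(c)
    else:  # 'or': determinism means the children's counts simply add up
        node.models = sum(annotate_counts(c) for c in node.children)
    return node.models

def draw(node, assignment):
    """Second pass (top-down): extend `assignment` to a uniform model.
    Assumes the formula is satisfiable (node.models > 0)."""
    if node.kind == 'lit':
        assignment[abs(node.lit)] = node.lit > 0
    elif node.kind == 'and':
        for c in node.children:
            draw(c, assignment)
    else:  # pick a branch with probability proportional to its model count
        r = random.randrange(node.models)
        for c in node.children:
            if r < c.models:
                draw(c, assignment)
                return
            r -= c.models
```

Because the counts are computed once, each additional sample costs only one cheap top-down pass, which is consistent with the inexpensive incremental sampling the abstract reports.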

[c22] 
MLIC: A MaxSAT-Based Framework for Learning Interpretable Classification Rules
Proceedings of International Conference on Constraint Programming (CP), 2018.
The wide adoption of machine learning approaches in industry, government, medicine, and science has renewed interest in interpretable machine learning: many decisions are too important to be delegated to black-box techniques such as deep neural networks or kernel SVMs. Historically, problems of learning interpretable classifiers, including classification rules or decision trees, have been approached by greedy heuristic methods, as essentially all the exact optimization formulations are NP-hard. Our primary contribution is a MaxSAT-based framework, called MLIC, which allows principled search for interpretable classification rules expressible in propositional logic. Our approach benefits from the revolutionary advances in the constraint satisfaction community for solving large-scale instances of such problems. In experimental evaluations over a collection of benchmarks arising from practical scenarios, we demonstrate its effectiveness: we show that the formulation can solve large classification problems with tens or hundreds of thousands of examples and thousands of features, and provides a tunable balance of accuracy vs. interpretability. Furthermore, we show that in many problems interpretability can be obtained at only a minor cost in accuracy. The primary objective of the paper is to show that recent advances in the MaxSAT literature make it realistic to find optimal (or very high-quality near-optimal) solutions to large-scale classification problems. The key goal of the paper is to excite researchers in both interpretable classification and the CP community to take it further and propose richer formulations, and to develop bespoke solvers attuned to the problem of interpretable ML.

[c21] 
Not All FPRASs are Equal: Demystifying FPRASs for DNF-Counting
Proceedings of International Conference on Constraint Programming (CP), 2018. Invited to Constraints journal
The problem of counting the number of solutions of a DNF formula, also called #DNF, is a fundamental problem in artificial intelligence with applications in diverse domains ranging from network reliability to probabilistic databases. Owing to the intractability of the exact variant, efforts have focused on the design of approximate techniques for #DNF. Consequently, several Fully Polynomial Randomized Approximation Schemes (FPRASs) based on Monte Carlo techniques have been proposed. Recently, it was discovered that hashing-based techniques also lend themselves to FPRASs for #DNF. Despite significant improvements, the complexity of the hashing-based FPRAS is still worse than that of the best Monte Carlo FPRAS by polylog factors. Two questions were left unanswered in previous works: Can the complexity of the hashing-based techniques be improved? How do the various approaches stack up against each other empirically? In this paper, we first propose a new search procedure for the hashing-based FPRAS that removes the polylog factors from its time complexity. We then present the first empirical study of the runtime behavior of different FPRASs for #DNF. Our study produces a nuanced picture. First, we observe that there is no single best algorithm that outperforms all others for all classes of formulas and input parameters. Second, we observe that the algorithm with the worst time complexity solves the largest number of benchmarks.

[c20] 
Scalable Approximation of Quantitative Information Flow in Programs
Proceedings of International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI), 2018.
Quantitative information flow measurement techniques have proven successful in detecting leakage of confidential information from programs. Modern approaches are based on formal methods, relying on program analysis to produce a SAT formula representing the program's behavior, and on model counting to measure the possible information flow. However, while program analysis scales to large codebases like the OpenSSL project, the formulas produced are too complex for analysis with precise model counting. In this paper we use the approximate model counter ApproxMC2 to quantify information flow. We show that ApproxMC2 is able to provide a large performance increase for a very small loss of precision, allowing the analysis of SAT formulas produced from complex code. We call the resulting technique ApproxFlow and test it on a large set of benchmarks against the state of the art. Finally, we show that ApproxFlow can evaluate the leakage incurred by the Heartbleed OpenSSL bug, in contrast to the state of the art.

2017  
[c19] 
On Hashing-Based Approaches to Approximate DNF-Counting
Proceedings of IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS), 2017.
Propositional model counting is a fundamental problem in artificial intelligence with a wide variety of applications, such as probabilistic inference, decision making under uncertainty, and probabilistic databases. Consequently, the problem is of theoretical as well as practical interest. When the constraints are expressed as DNF formulas, Monte Carlo-based techniques have been shown to provide a fully polynomial randomized approximation scheme (FPRAS). For CNF constraints, hashing-based approximation techniques have been demonstrated to be highly successful. Furthermore, it was shown that hashing-based techniques also yield an FPRAS for DNF counting without the use of Monte Carlo sampling. Our analysis, however, shows that the proposed hashing-based approach to DNF counting provides poor time complexity compared to the Monte Carlo-based DNF counting techniques. Given the success of hashing-based techniques for CNF constraints, it is natural to ask: can hashing-based techniques provide an efficient FPRAS for DNF counting? In this paper, we provide a positive answer to this question. To this end, we introduce two novel algorithmic techniques, Symbolic Hashing and Stochastic Cell Counting, along with a new hash family of Row-Echelon hash functions. These innovations allow us to design a hashing-based FPRAS for DNF counting with complexity comparable to that of prior works. Furthermore, we expect these techniques to have potential applications beyond DNF counting.

[c18] 
The Hard Problems Are Almost Everywhere For Random CNFXOR Formulas
Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2017.
Recent universal-hashing based approaches to sampling and counting crucially depend on the runtime performance of SAT solvers on formulas expressed as the conjunction of both CNF constraints and variable-width XOR constraints (known as CNF-XOR formulas). In this paper, we present the first study of the runtime behavior of SAT solvers equipped with XOR-reasoning techniques on random CNF-XOR formulas. We empirically demonstrate that a state-of-the-art SAT solver scales exponentially on random CNF-XOR formulas across a wide range of XOR-clause densities, peaking around the empirical phase-transition location. On the theoretical front, we prove that the solution space of a random CNF-XOR formula 'shatters' at all nonzero XOR-clause densities into well-separated components, similar to the behavior seen in random CNF formulas known to be difficult for many SAT-solving algorithms.

[c17] 
Counting-Based Reliability Estimation for Power-Transmission Grids
Proceedings of AAAI Conference on Artificial Intelligence (AAAI), 2017.
Modern society is increasingly reliant on the functionality of infrastructure facilities and utility services. Consequently, there has been a surge of interest in the problem of quantification of system reliability, which is known to be #P-complete. Reliability also contributes to the resilience of systems, helping them to effectively bounce back after contingencies. Despite diverse progress, most techniques to estimate system reliability and resilience remain computationally expensive. In this paper, we investigate how recent advances in hashing-based approaches to counting can be exploited to improve computational techniques for system reliability. The primary contribution of this paper is a novel framework, RelNet, that provides probably approximately correct (PAC) estimates for arbitrary networks. We then apply RelNet to ten real-world power-transmission grids across different cities in the U.S. and are able to obtain, to the best of our knowledge, the first theoretically sound a priori estimates of reliability between several pairs of nodes of interest. Such estimates will help manage uncertainty and support rational decision making for community resilience.

2016  
[c16] 
Algorithmic Improvements in Approximate Counting for Probabilistic Inference: From Linear to Logarithmic SAT Calls
Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2016.
Probabilistic inference via model counting has emerged as a scalable technique with strong formal guarantees, thanks to recent advances in hashing-based approximate counting. State-of-the-art hashing-based counting algorithms use an NP oracle such that the number of oracle invocations grows linearly in the number of variables n in the input constraint. We present a new approach to hashing-based approximate model counting in which the number of oracle invocations grows logarithmically in n, while still providing strong theoretical guarantees. Our experiments show that the new approach outperforms state-of-the-art techniques for approximate counting by 1-2 orders of magnitude in running time.

[c15] 
Combining the k-CNF and XOR Phase-Transitions
Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2016.
The runtime performance of modern SAT solvers on random k-CNF formulas is deeply connected with the 'phase-transition' phenomenon seen empirically in the satisfiability of random k-CNF formulas. Recent universal hashing-based approaches to sampling and counting crucially depend on the runtime performance of SAT solvers on formulas expressed as the conjunction of both k-CNF and XOR constraints (known as k-CNF-XOR formulas), but the behavior of random k-CNF-XOR formulas is unexplored in prior work. In this paper, we present the first study of the satisfiability of random k-CNF-XOR formulas. We show empirical evidence of a surprising phase-transition that follows a linear trade-off between k-CNF and XOR constraints. Furthermore, we prove that a phase-transition for k-CNF-XOR formulas exists for k = 2 and (when the number of k-CNF constraints is small) for k > 2.

[c14] 
Approximate Probabilistic Inference via Word-Level Counting
Proceedings of AAAI Conference on Artificial Intelligence (AAAI), 2016.
Hashing-based model counting has emerged as a promising approach for large-scale probabilistic inference on graphical models. A key component of these techniques is the use of XOR-based 2-universal hash functions that operate over Boolean domains. Many counting problems arising in probabilistic inference are, however, naturally encoded over finite discrete domains. Techniques based on bit-level (or Boolean) hash functions require these problems to be propositionalized, making it impossible to leverage the remarkable progress made in SMT (Satisfiability Modulo Theories) solvers that can reason directly over words (or bit-vectors). In this work, we present the first approximate model counter that uses word-level hashing functions and can directly leverage the power of sophisticated SMT solvers. Empirical evaluation over an extensive suite of benchmarks demonstrates the promise of the approach.

[c13] 
Automatic Data Layout Generation and Kernel Mapping for CPU+GPU Architectures
Proceedings of International Conference on Compiler Construction (CC), 2016.
The ubiquity of hybrid CPU+GPU architectures has led to renewed interest in automatic data layout generation, owing to the fact that data layouts have a large impact on performance and that different data layouts yield the best performance on CPUs vs. GPUs. Unfortunately, current programming models still fail to provide an effective solution to the problem of automatic data layout generation for CPU+GPU processors. Specifically, the interaction among whole-program data layout optimizations, data movement optimizations, and the mapping of kernels across heterogeneous cores poses a major challenge to current programming systems. In this paper, we introduce a novel two-level hierarchical formulation of the data layout and kernel mapping problem for modern heterogeneous architectures. The top-level formulation targets data layouts and kernel mapping for the entire program, for which we provide a polynomial-time solution using a graph-based shortest-path algorithm that uses the data layouts for the code regions (sections) for a given processor computed in the bottom-level formulation. The bottom-level formulation deals with the data layout problem for a parallel code region on a given processor, which is NP-hard, and we provide a greedy algorithm that uses an affinity graph to obtain approximate solutions. We have implemented this data layout transformation in the new Heterogeneous Habanero-C (H2C) parallel programming framework and propose performance models to characterize the data layout impact on both the CPU and GPU. Our data layout framework shows significant performance improvements of up to 2.9x (geometric mean 1.5x) on a multicore CPU+GPU compared to manually specified layouts, for a set of parallel programs running on a heterogeneous platform consisting of an Intel Xeon CPU and an NVIDIA GPU. Further, our framework also shows performance improvements of up to 2.7x (geometric mean 1.6x) on just the multicore CPU, demonstrating the applicability of our approach to both heterogeneous and homogeneous hardware platforms.

[c12] 
Design and Verification of Distributed Phasers
Proceedings of International European Conference on Parallel and Distributed Computing (EuroPar), 2016.
A phaser is an expressive synchronization construct that unifies collective and point-to-point coordination with dynamic registration of parallel tasks. Each task can participate in a phaser as a signaler, a waiter, or both. The participants in a phaser may change over time as tasks are added and deleted. In this paper, we present a highly concurrent and scalable design of phasers for a distributed-memory environment. Our design for a distributed phaser employs a pair of skip lists augmented with the ability to collect and propagate synchronization signals. To enable a high degree of concurrency, addition and deletion of participant tasks are performed in two phases: a "fast single-link-modify" step followed by multiple hand-over-hand "lazy multi-link-modify" steps. Verifying highly concurrent protocols is difficult. We analyze our design for a distributed phaser using the SPIN model checker. A straightforward approach to model checking a distributed phaser operation requires an infeasibly large state space. To address this issue, we employ a novel "message-based" model checking scheme that enables non-approximate, complete model checking of our phaser design. We guarantee the semantic properties of phaser operations by ensuring that a set of linear temporal logic formulae holds during model checking. We also present a complexity analysis of the cost of synchronization and structural operations.

2015  
[c11] 
On Computing Minimal Independent Support and Its Applications to Sampling and Counting
Proceedings of International Conference on Constraint Programming (CP), 2015. Best Student Paper Award
Constrained sampling and counting are two fundamental problems arising in domains ranging from artificial intelligence and security to hardware and software testing. Recent approaches to approximate solutions for these problems rely on employing SAT solvers and universal hash functions that are typically encoded as XOR constraints of length n/2 for an input formula with n variables. As the runtime performance of SAT solvers heavily depends on the length of XOR constraints, recent research effort has focused on reducing the length of XOR constraints. Consequently, a notion of Independent Support was proposed, and it was shown that constructing XORs over an independent support (if known) can lead to a significant reduction in the length of XOR constraints without losing the theoretical guarantees of sampling and counting algorithms. In this paper, we present the first algorithmic procedure (and a corresponding tool, called MIS) to determine minimal independent support for a given CNF formula by employing a reduction to group minimal unsatisfiable subsets (GMUS). By utilizing minimal independent supports computed by MIS, we provide new tighter bounds on the length of XOR constraints for constrained counting and sampling. Furthermore, the universal hash functions constructed from independent supports computed by MIS provide two to three orders of magnitude performance improvement in state-of-the-art constrained sampling and counting tools, while still retaining the theoretical guarantees.
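As a toy illustration of why a smaller support shortens the hash constraints (my own sketch, not the MIS tool): a random XOR from the standard hash family includes each variable independently with probability 1/2, so its expected length is half the size of the set it is drawn over.

```python
import random

def random_xor(support, rng=random):
    """Sample one XOR (parity) constraint over `support`: each variable is
    included with probability 1/2, plus a random right-hand-side parity bit.
    Drawing over an independent support instead of all n variables cuts the
    expected XOR length from n/2 to |support|/2."""
    chosen = [v for v in support if rng.random() < 0.5]
    rhs = rng.random() < 0.5
    return chosen, rhs
```

For example, with a formula over 1000 variables but an independent support of 50, the XORs handed to the SAT solver have expected length 25 rather than 500, which is the performance lever the abstract describes.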

[c10] 
From Weighted to Unweighted Model Counting
Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2015.
The recent surge of interest in reasoning about probabilistic graphical models has led to the development of various techniques for probabilistic reasoning. Of these, techniques based on weighted model counting are particularly interesting since they can potentially leverage recent advances in unweighted model counting and in propositional satisfiability solving. In this paper, we present a new approach to weighted model counting via reduction to unweighted model counting. Our reduction, which is polynomial-time and preserves the normal form (CNF/DNF) of the input formula, allows us to exploit advances in unweighted model counting to solve weighted model counting instances. Experiments with weighted model counters built using our reduction indicate that these counters perform much better than a state-of-the-art weighted model counter.

[c9] 
On Parallel Scalable Uniform SAT Witness Generation
Proceedings of Tools and Algorithms for the Construction and Analysis of Systems (TACAS), 2015.
Constrained-random verification (CRV) is widely used in industry for validating hardware designs. The effectiveness of CRV depends on the uniformity of test stimuli generated from a given set of constraints. Most existing techniques sacrifice either uniformity or scalability when generating stimuli. While recent work based on random hash functions has shown that it is possible to generate almost-uniform stimuli from constraints with 100,000+ variables, the performance still falls short of today's industrial requirements. In this paper, we focus on pushing the performance frontier of uniform stimulus generation further. We present a random hashing-based, easily parallelizable algorithm, UniGen2, for sampling solutions of propositional constraints. UniGen2 provides strong and relevant theoretical guarantees in the context of CRV, while also offering significantly improved performance compared to existing almost-uniform generators. Experiments on a diverse set of benchmarks show that UniGen2 achieves an average speedup of about 20x over a state-of-the-art sampling algorithm, even when running on a single core. Moreover, experiments with multiple cores show that UniGen2 achieves a near-linear speedup in the number of cores, thereby boosting performance even further.

2014  
[c8] 
ADHA: Automatic Data Layout Framework for Heterogeneous Architectures
Proceedings of Parallel Architecture and Compilation Techniques (PACT), 2014.
Data layouts play a crucial role in determining the performance of a given application running on a given architecture. Existing parallel programming frameworks for both multicore and heterogeneous systems leave the onus of selecting a data layout to the programmer. Shifting the burden of data layout selection to optimizing compilers can therefore greatly enhance programmer productivity and application performance. In this work, we introduce ADHA: a two-level hierarchical formulation of the data layout problem for modern heterogeneous architectures. We have created a reference implementation of ADHA in the Heterogeneous Habanero-C (H2C) parallel programming system. ADHA shows significant performance benefits of up to 6.92x compared to manually specified layouts for two benchmark programs running on a CPU+GPU heterogeneous platform.

[c7] 
Distribution-Aware Sampling and Weighted Model Counting for SAT
Proceedings of AAAI Conference on Artificial Intelligence (AAAI), 2014.
Given a CNF formula and a weight for each assignment of values to variables, two natural problems are weighted model counting and distribution-aware sampling of satisfying assignments. Both problems have a wide variety of important applications. Due to the inherent complexity of the exact versions of the problems, interest has focused on solving them approximately. Prior work in this area scaled only to small problems in practice, or failed to provide strong theoretical guarantees, or employed a computationally expensive maximum a posteriori probability (MAP) oracle that assumes prior knowledge of a factored representation of the weight distribution. We present a novel approach that works with a black-box oracle for weights of assignments and requires only an NP oracle (in practice, a SAT solver) to solve both the counting and sampling problems. Our approach works under mild assumptions on the distribution of weights of satisfying assignments, provides strong theoretical guarantees, and scales to problems involving several thousand variables. We also show that the assumptions can be significantly relaxed while improving computational efficiency if a factored representation of the weights is known.

[c6] 
Balancing Scalability and Uniformity in SAT Witness Generator
Proceedings of Design Automation Conference (DAC), 2014.
Constrained-random simulation is the predominant approach used in industry for functional verification of complex digital designs. The effectiveness of this approach depends on two key factors: the quality of constraints used to generate test vectors, and the randomness of solutions generated from a given set of constraints. In this paper, we focus on the second problem and present an algorithm that significantly improves the state of the art of (almost-)uniform generation of solutions of large Boolean constraints. Our algorithm provides strong theoretical guarantees on the uniformity of generated solutions and scales to problems involving hundreds of thousands of variables.

2013  
[c5] 
A Scalable Approximate Model Counter
Proceedings of International Conference on Constraint Programming (CP), 2013.
Propositional model counting (#SAT), i.e., counting the number of satisfying assignments of a propositional formula, is a problem of significant theoretical and practical interest. Due to the inherent complexity of the problem, approximate model counting, which counts the number of satisfying assignments to within a given tolerance and confidence level, was proposed as a practical alternative to exact model counting. Yet, approximate model counting has been studied essentially only theoretically. The only reported implementation of approximate model counting, due to Karp and Luby, worked only for DNF formulas. A few existing tools for CNF formulas are bounding model counters; they can handle realistic problem sizes, but fall short of providing counts within a given tolerance and confidence, and thus are not approximate model counters. We present here a novel algorithm, as well as a reference implementation, that is the first scalable approximate model counter for CNF formulas. The algorithm works by issuing a polynomial number of calls to a SAT solver. Our tool, ApproxMC, scales to formulas with tens of thousands of variables. Careful experimental comparisons show that ApproxMC reports, with high confidence, bounds that are close to the exact count, and also succeeds in reporting bounds with small tolerance and high confidence in cases that are too large for computing exact model counts.
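The core hashing idea behind such counters can be sketched on toy inputs. Brute-force enumeration stands in for the SAT-solver calls here, and the function names and fixed `pivot` are my own illustrative choices, not ApproxMC's actual parameters or confidence machinery.

```python
import itertools
import random

def cell_solutions(f, n, xors, limit):
    """Enumerate assignments satisfying f and every XOR constraint,
    stopping once more than `limit` are found (a stand-in for repeated
    SAT-solver calls; only feasible for tiny n)."""
    out = []
    for a in itertools.product([0, 1], repeat=n):
        if f(a) and all(sum(bit for bit, m in zip(a, mask) if m) % 2 == rhs
                        for mask, rhs in xors):
            out.append(a)
            if len(out) > limit:
                break
    return out

def approx_count(f, n, pivot=4, rng=random):
    """Add m random XOR (parity) constraints, which partition the solution
    space into roughly 2^m cells; once the surviving cell holds at most
    `pivot` solutions, scale its size back up by 2^m."""
    for m in range(n + 1):
        # each XOR picks a random subset of variables and a random parity
        xors = [([rng.random() < 0.5 for _ in range(n)], rng.random() < 0.5)
                for _ in range(m)]
        cell = cell_solutions(f, n, xors, pivot)
        if len(cell) <= pivot:
            return len(cell) * (2 ** m)
```

For a formula small enough that m = 0 already gives a cell of at most `pivot` solutions, the estimate is exact; the interesting regime is large solution spaces, where the random XORs do the work of cutting the count down to something a SAT solver can enumerate.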

[c4] 
A Scalable and Nearly Uniform Generator of SAT Witnesses
Proceedings of International Conference on Computer-Aided Verification (CAV), 2013.
Functional verification constitutes one of the most challenging tasks in the development of modern hardware systems, and simulation-based verification techniques dominate the functional verification landscape. A dominant paradigm in simulation-based verification is directed random testing, where a model of the system is simulated with a set of random test stimuli that are uniformly or near-uniformly distributed over the space of all stimuli satisfying a given set of constraints. Uniform or near-uniform generation of solutions for large constraint sets is therefore a problem of theoretical and practical interest. For Boolean constraints, prior work offered heuristic approaches with no guarantee of performance, and theoretical approaches with proven guarantees but poor performance in practice. We offer here a new approach with theoretical performance guarantees and demonstrate its practical utility on large constraint sets.

2018  
[j3] 
Network Reliability Estimation in Theory and Practice
Submitted to Reliability Engineering and System Safety, 2018.
As engineered systems expand, become more interdependent, and operate in real time, reliability assessment is indispensable to support investment and decision making. However, network reliability problems are known to be #P-complete, a computational complexity class largely believed to be intractable. The computational intractability of network reliability motivates our quest for reliable approximations. Based on their theoretical foundations, available methods can be grouped as follows: (i) exact or bounds, (ii) guarantee-less sampling, and (iii) probably approximately correct (PAC). Group (i) is well regarded due to its useful byproducts, but it does not scale in practice. Group (ii) scales well and verifies desirable properties, such as bounded relative error, but it lacks error guarantees. Group (iii) is of great interest when precision and scalability are required, as it harbors computationally feasible approximation schemes with PAC guarantees. We give a comprehensive review of classical methods before introducing modern techniques and our developments. We introduce K-RelNet, an extended counting-based estimation method that delivers PAC guarantees for the K-terminal reliability problem. We then test the methods' performance using various benchmark systems. We highlight the range of applicability of the algorithms and provide the foundation for future resilience engineering, as it increasingly necessitates methods for uncertainty quantification in complex systems.

2016  
[j2] 
On Computing Minimal Independent Support and Its Applications to Sampling and Counting
Constraints 21(1), 2016.
Constrained sampling and counting are two fundamental problems arising in domains ranging from artificial intelligence and security to hardware and software testing. Recent approaches to approximate solutions for these problems rely on employing SAT solvers and universal hash functions that are typically encoded as XOR constraints of length n/2 for an input formula with n variables. As the runtime performance of SAT solvers heavily depends on the length of XOR constraints, recent research effort has focused on reducing the length of XOR constraints. Consequently, a notion of Independent Support was proposed, and it was shown that constructing XORs over an independent support (if known) can lead to a significant reduction in the length of XOR constraints without losing the theoretical guarantees of sampling and counting algorithms. In this paper, we present the first algorithmic procedure (and a corresponding tool, called MIS) to determine minimal independent support for a given CNF formula by employing a reduction to group minimal unsatisfiable subsets (GMUS). By utilizing minimal independent supports computed by MIS, we provide new tighter bounds on the length of XOR constraints for constrained counting and sampling. Furthermore, the universal hash functions constructed from independent supports computed by MIS provide two to three orders of magnitude performance improvement in state-of-the-art constrained sampling and counting tools, while still retaining the theoretical guarantees.

2016  
[w1] 
Constrained Sampling and Counting: Universal Hashing meets SAT Solving
Proceedings of Workshop on Beyond NP (BNP), 2016.
Constrained sampling and counting are two fundamental problems in artificial intelligence with a diverse range of applications, ranging from probabilistic reasoning and planning to constrained-random verification. While the theory of these problems was thoroughly investigated in the 1980s, prior work either did not scale to industrial-size instances or gave up correctness guarantees to achieve scalability. Recently, we proposed a novel approach that combines universal hashing and SAT solving and scales to formulas with hundreds of thousands of variables without giving up correctness guarantees. This paper provides an overview of the key ingredients of the approach and discusses challenges that need to be overcome to handle larger real-world instances.
