Intent Languages: CSP Formulation

On classifying tractable CSPs

The dichotomy theorem

In my conversation with @degregat, we discussed classifying CSPs by complexity. In his post, he points to the paper:

While it was a good paper when it was released in 2006, the field has advanced considerably since then. In 2017, the full dichotomy theorem for finite-domain CSPs was proven in

I don’t recommend reading either of those papers; I’ll give a better follow-up reference in a bit. I will summarize the result, though. It’s about CSP languages, which we can describe as a kind of algebra. Every CSP language contains a finite set of n-ary relations and is closed under relational composition, that is, under conjunction of relations that may share variables. Such an algebra is called a “relational clone”.

Each CSP language has a corresponding algebra of “polymorphisms”. A polymorphism of a CSP is a function that preserves all of its relations. In the simplest case, a unary function f preserves a binary relation R if R(a, b) implies R(f(a), f(b)). More generally, an m-ary function is applied coordinatewise to m tuples of a relation, and it preserves the relation if the result is always another tuple of the relation. If f satisfies this for every relation in the CSP, it’s a polymorphism.
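To make the coordinatewise reading concrete, here’s a toy check (in Python; the relation and the names are mine) that the ternary majority operation preserves a 2-SAT-style relation:

```python
from itertools import product

# The 2-SAT-style relation "x OR y" over the Boolean domain {0, 1}:
R = {(0, 1), (1, 0), (1, 1)}

def majority(a, b, c):
    """Ternary majority: the value occurring at least twice."""
    return int(a + b + c >= 2)

def preserves(f, arity, rel):
    """f preserves rel if applying it coordinatewise to any choice of
    arity-many tuples of rel always lands back in rel."""
    for rows in product(rel, repeat=arity):
        if tuple(f(*column) for column in zip(*rows)) not in rel:
            return False
    return True

print(preserves(majority, 3, R))  # True: majority is a polymorphism of R
```

Majority similarly preserves the other 2-clause relations, which, from this algebraic viewpoint, is one way of seeing why 2-SAT is tractable.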

This has a connection to programming language theory through Reynolds’s theory of relational parametricity. There, polymorphism (in the programming sense) is defined through a function preserving all relations. This is the same concept, but in a higher-order, infinite-domain setting, while we’re dealing with a first-order, finite domain. At first order, the only parametrically polymorphic functions are the projection functions (including identity as a special case); as such, projections are always polymorphisms for any CSP. Additionally, with some thought, we can conclude that if f and g preserve a relation, then so does their composition. Beyond this, I don’t think anyone’s studied this connection with any seriousness.

This gets us an algebra of functions that includes all the projection functions and is closed under composition. This is called a “functional clone”. Incidentally, such a thing is usually just called a “clone”, and such function algebras are the domain of clone theory (see, e.g., “Function algebras on finite sets: A basic course on many-valued logic and clone theory”). Relational clones don’t seem widely studied.

Modern approaches to CSP tractability focus on the structure of this polymorphism algebra. The low-resolution version of the dichotomy theorem states that

Any finite-domain CSP is either NP-hard or all queries are P-time solvable

A higher-resolution version of this theorem would state

Any finite-domain CSP whose polymorphism algebra is essentially a clone of projections (in other words, whose polymorphism algebra is trivial) is NP-hard; otherwise, all queries are P-time solvable

Note that I’m hiding some technical details by using the word “essentially”; something can be “essentially just a bunch of projections” without literally being that. See the unification paper cited below for details.

This has some significant consequences: it shows there are no finite-domain CSPs strictly between P and NP-hard, so there are no quasipolynomial-time CSPs, etc. It also means tractability can be framed entirely in terms of the algebra of polymorphisms. All the dichotomy results in the original survey article are special cases of this.

For a survey of the general approach, see

These, unfortunately, do not go into the proof of the dichotomy theorem. The most cogent presentation is:

The original proofs used different, hard-to-grok methods; the goal of that paper is to unify them in a single framework built around the notion of a minimal Taylor algebra. A Taylor algebra A is an algebra such that no quotient of a subalgebra of A is a two-element algebra whose every operation is a projection. It’s the fully technical way of saying the clone is nontrivial.

Beyond CSP

There have been many directions of work trying to apply the so-called “Algebraic Approach” to CSPs to other domains.

The most active is promise CSPs. Here a problem comes with two CSP templates, a strict one and a relaxed one: we are promised the input is satisfiable under the strict template and only asked to solve it under the relaxed one (the standard example: given a graph promised to be 3-colorable, find a 6-coloring). This doesn’t seem so relevant to our goals, but it should be looked at in more detail to be sure.

A different domain that’s probably more relevant is quantified CSPs. These allow universal, and more generally alternating, quantifiers as part of a query. This can be seen as a sort of function-synthesis problem through Skolemization: ∀x∃y. R(x, y), for example, holds exactly when ∃f∀x. R(x, f(x)) does. It’s known that fully quantified Boolean satisfiability (QBF) is PSPACE-complete. There was a trichotomy conjecture asserting that every finite-domain QCSP problem is either PSPACE-complete, NP-complete, or P-time tractable, but this has since been disproven. See

Full classification of QCSPs is still an open problem. Some large classes are classified, see for example

There is also some work on infinite-domain CSPs. This is vague on its own, so additional restrictions are imposed; common ones are numerical domains, ω-categoricity, or being equipped with an oligomorphic permutation group.

Another relevant area is parameterized complexity of CSPs. This seems understudied, but the idea is to loosen the notion of polymorphism to that of a partial polymorphism, which preserves some but not all relations of the CSP. The details allow one to classify individual queries by difficulty, so that a worst-case intractable CSP can still have a large collection of tractable queries. See:

This area, as a whole, seems underdeveloped.

Representing CSPs

CSPs themselves are often described in highly linguistic terms, mentioning statements, variables, etc. This is not necessarily the best fit for formalization, since dealing with nominal details like variable naming, when unnecessary, distracts from the point. There are two main approaches to formulating CSPs in a neat way.

  1. Graph homomorphism: Given a finite-domain CSP, we can describe a query through a labeled hypergraph. For example,
    R(X, Y), S(Z, X, W), Q(Y, Z, W)
    can be turned into a graph with four (unlabeled) nodes, one each for X, Y, Z, and W, along with directed hyperedges: one over X and Y (labeled “R”), one over Z, X, and W (labeled “S”), and one over Y, Z, and W (labeled “Q”). We also have the domain, which we can imagine as a graph whose nodes are all the terms of the finite domain, connected by directed hyperedges labeled by the atomic relations. A solution to the query is then an edge-label-preserving homomorphism from the query graph to the model graph (see the sketch below). This may be unwieldy if done directly, but some version of this idea may be fine if we don’t directly encode the model graph in its entirety. It also shows that the CSP problem is equivalent to the (colored-edge hyper)graph homomorphism problem.

  2. Point-free relation algebra: This would essentially involve defining a small set of point-free combinators capable of expressing any query. There are point-free approaches to relation algebra developed for the logic-programming context; see, for example:

Option 1 may be better for creating a small circuit, while option 2 would probably be better for a programmer to use.
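To make option 1 concrete, here’s a brute-force sketch (in Python; the model and domain are invented, and exhaustive search is only sensible for tiny instances) of a query as labeled hyperedges and a solution as a homomorphism into the model:

```python
from itertools import product

# Query: R(X, Y), S(Z, X, W), Q(Y, Z, W), as labeled hyperedges over variables.
query = [("R", ("X", "Y")), ("S", ("Z", "X", "W")), ("Q", ("Y", "Z", "W"))]
variables = ["X", "Y", "Z", "W"]

# A toy model graph: the domain's elements, plus the tuples each atomic
# relation holds of (its labeled hyperedges).
domain = [0, 1, 2]
model = {
    "R": {(0, 1), (1, 2)},
    "S": {(2, 0, 1), (0, 1, 2)},
    "Q": {(1, 2, 1), (2, 0, 1)},
}

def solve(query, variables, domain, model):
    """Search for an edge-label-preserving homomorphism: an assignment of
    domain elements to variables sending every query hyperedge into the
    model relation with the same label."""
    for values in product(domain, repeat=len(variables)):
        h = dict(zip(variables, values))
        if all(tuple(h[v] for v in args) in model[label] for label, args in query):
            return h
    return None

print(solve(query, variables, domain, model))  # {'X': 0, 'Y': 1, 'Z': 2, 'W': 1}
```

A real implementation would search far more cleverly than enumerating all assignments, but the representation itself is this simple.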

Is CSP right in the first place?

It seems to me that, most of the time, we won’t be able to satisfy all intents. As a result, a plain CSP algorithm would just return a negative result. What we actually want is to satisfy the maximum number of constraints.

This is the MaxCSP problem.

Robust satisfiability

Robust satisfiability attempts to characterize problems where, if an instance has an almost-satisfying assignment, then an almost-satisfying assignment can always be found efficiently. Note that this doesn’t cover MaxCSP in its entirety: a MaxCSP problem could still be tractable in practice even if it isn’t robustly satisfiable. That would only mean some instances are easily approximable while, in the worst case, we can’t do better than guessing; it doesn’t say we can’t efficiently maximize satisfied clauses on average, or efficiently with respect to some parameter other than problem size.

The most important paper is

This proves that any bounded width CSP is robustly satisfiable. Bounded width, in this case, means that the CSP cannot encode linear equations over abelian groups.

Further, according to

This paper

proves that any efficiently robustly satisfiable CSP must have bounded width, though I can’t actually find this in the paper. Instead, the paper states something more specific: that linear equations over an abelian group G, where each equation contains precisely 3 variables, cannot be efficiently robustly satisfied. The first paper interprets that result more broadly.

Assuming all that’s correct, bounded width exactly characterizes the CSPs whose assignments can be efficiently and robustly maximized, and one cannot do better.

TODO: Other research? MaxSAT?

Important Note: Checking vs solving

I was given the task of formulating a logic for a tractable CSP in Vamp-IR or Juvix. It's worth noting that this isn't really necessary. Verification happens after a solution is found, and it's always doable efficiently, regardless of how difficult finding the solution was in the first place. A good example to keep in mind is Diophantine equations (systems of polynomials with *integer* solutions). That domain is known to be undecidable, certainly harder than any NP-hard problem, but if you have a specific solution, checking it is as simple as plugging it in and evaluating the system. What this means is that the language of intents does not need to match the language the verifier uses; the latter could be a simpler NP-complete language into which we embed the more complicated, restricted language.
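As a minimal illustration (the system and the claimed solution here are invented for the example):

```python
# System: x^2 + y^2 = z^2 and x + y + z = 12, each written as "polynomial = 0".
equations = [
    lambda x, y, z: x**2 + y**2 - z**2,
    lambda x, y, z: x + y + z - 12,
]

def check(solution):
    """A claimed solution is valid iff every polynomial evaluates to zero."""
    return all(eq(*solution) == 0 for eq in equations)

print(check((3, 4, 5)))  # True: 9 + 16 == 25 and 3 + 4 + 5 == 12
```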

Descriptive Complexity

It does seem to me that the discussion thus far has been backward. We’ve looked at CSPs that are tractable, but why not ask, instead, for a logic which is simply complete for all tractable CSPs?

The immediate question is whether something like this is possible. This problem is “Gurevich’s Problem”, and is, in its most general form, still open.

It’s also been called “the most important problem of Finite Model Theory” and “the central open problem in descriptive complexity theory”.

However, there are many positive results.

In a totally ordered domain, two main logics are known to capture exactly the poly-time queries:

  1. First-order logic extended with a least-fixed-point operator for defining new relations captures exactly the P-time computable queries.

THEOREM 1. Let L be a query language consisting of relational calculus plus the least fixed point operator. Suppose that L contains a relation symbol for a total ordering relation on the domain (e.g., lexicographic ordering). Then the queries expressible in L are exactly the queries computable in polynomial time.

  2. It was proved that “the restriction of second-order logic to formulae whose first order part is a universal Horn formula” captures all P-time queries over ordered structures.
  • “The expressive power of second-order Horn logic” by Erich Grädel

This covers Booleans as a special case. If we are willing to translate everything into a Boolean domain, then these logics capture all the poly-time queries expressible after said translation.

Also, note that least-fixed-point logic is essentially Datalog; more precisely, Datalog corresponds to its existential positive fragment.
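As a concrete illustration, transitive closure is the prototypical least-fixed-point/Datalog query that plain first-order logic cannot express; computing it is just iterating a rule to a fixed point (a sketch in Python, with the corresponding Datalog rules in the comments):

```python
# Datalog rules:  tc(x, y) :- edge(x, y).
#                 tc(x, z) :- tc(x, y), edge(y, z).
edges = {(1, 2), (2, 3), (3, 4)}

def transitive_closure(edges):
    """Iterate the rules until nothing new is derived; on a finite domain
    this reaches the least fixed point in polynomially many rounds."""
    tc = set(edges)
    while True:
        derived = {(x, z) for (x, y) in tc for (y2, z) in edges if y == y2}
        if derived <= tc:
            return tc
        tc |= derived

print(sorted(transitive_closure(edges)))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```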

An example of a poly-time query not expressible without such an ordering is the parity of an unordered set (determining whether the number of elements in a set is even or odd).

The limitation is that arbitrary graphs carry no intrinsic ordering. If a graph is canonicalized, the canonicalization induces an ordering; however, the existence of a polynomial-time graph canonization algorithm is still open, though it is known to be doable in quasi-polynomial time. See:

The goal is to create a query language whose expressive power is independent of how the model graph is laid out in memory. The only major candidates are a logic called “Choiceless Polynomial Time” (CPT) and extensions of it. Specifically, the variant with the ability to count is the minimal candidate that has not yet been ruled out.

CPT with counting is essentially the weakest such logic: all of its queries are in P-time, and there are no known P-time queries that are not expressible in it.

There is much ongoing research trying to separate this logic from P. There are also attempts to minimally extend CPT so that its canonization invariance is more explicit. The most promising is

OSL

This reminded me of a project that I previously worked on called OSL (GitHub: Polytopoi/osl), a spec language for zero-knowledge proofs.

It’s a kind of finite-domain second-order logic specifically designed for defining predicates for zk applications. I’ve been asked about it before, specifically in comparison to Vamp-IR. It really isn’t like Vamp-IR, but it might be more relevant here.

Category Theory

There is some work on formulating ideas within this domain in a generic, categorical framework. This descends from Abramsky’s work on Arboreal categories and the pebbling comonad.

See also:

Solution Methods

My default suggestion would be to use an out-of-the-box solver. Converting to a domain supported by Z3 and using that is not a bad choice. One could instead use an ASP system like Clingo, and there are, of course, other solvers. Using MaxSAT, or an integer-programming framework such as Gurobi, would imply different encodings.
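For a sense of what the Z3 route looks like, here’s a minimal sketch using Z3’s Python bindings (the constraints are illustrative):

```python
# pip install z3-solver
from z3 import Ints, Solver, Distinct, And, sat

x, y, z = Ints("x y z")
s = Solver()
s.add(And(1 <= x, x <= 3), And(1 <= y, y <= 3), And(1 <= z, z <= 3))
s.add(Distinct(x, y, z))  # an all-different constraint
s.add(x + y == z)         # an arithmetic relation between variables

if s.check() == sat:
    print(s.model())      # e.g. [z = 3, y = 2, x = 1]
else:
    print("unsatisfiable")
```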

TODO: Get a list of reasonable options; Dedicated SAT solvers, ASP systems, Integer Programming frameworks, Other constraint solvers?

@degregat wondered about the overhead of encoding a CSP into a SAT solver. There are simple naive encodings with only linear overhead; much of the effort spent on encodings instead deals with concerns like propagation efficiency, which is both a difficult problem in general and can inflate the size of encodings. What we’re doing seems similar to the kinds of programs an ASP system handles; it may be worth looking into how a system like Clingo translates programs into its SAT solver.
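For reference, here’s what the naive “direct” (one-hot) encoding of a single CSP variable looks like (a Python sketch; the helper and its naming are mine). Each variable with a d-value domain becomes d Booleans with “at least one” and “at most one” clauses, and each forbidden tuple of a constraint then becomes one blocking clause over the corresponding literals:

```python
def one_hot(var_index, domain_size):
    """Clauses (DIMACS-style signed integers) saying CSP variable
    var_index takes exactly one of its domain_size values."""
    lits = [var_index * domain_size + v + 1 for v in range(domain_size)]
    clauses = [lits[:]]  # at least one value
    # At most one value. The pairwise encoding is quadratic in the domain
    # size, but the domain is a fixed constant, so the overhead per CSP
    # variable is constant and the whole encoding stays linear.
    for i in range(domain_size):
        for j in range(i + 1, domain_size):
            clauses.append([-lits[i], -lits[j]])
    return clauses

print(one_hot(0, 3))  # [[1, 2, 3], [-1, -2], [-1, -3], [-2, -3]]
```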

Problem Compression

It’s possible to transform a problem into a smaller problem from which we can extract a solution to the original problem. This is probably worth looking into:

Encodings

TODO: Both SAT encodings and ILP encodings

ODESAT

@cwgoes mentioned a SAT solver I wrote and wondered about its applicability: odesat (GitHub: AHartNtkn/odesat), a SAT solver that simulates ODEs to solve SAT problems.

In it, a CNF formula is translated into an ODE where every Boolean variable is mapped to a continuous variable, and every clause is granted two auxiliary variables, one representing a kind of short-term memory and the other a kind of long-term memory. The precise way the derivatives of these variables are defined guarantees that the vector field over the Boolean variables is point-dissipative; that is, any volume of states will always shrink over time. This prevents things like cycles or chaos: all trajectories eventually end up in a stable equilibrium well, and these wells correspond exactly to the solutions of the original SAT problem.
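For intuition, here’s a toy, Euler-integrated sketch of this style of dynamics (in Python). It follows a simplified scheme with a single memory variable per clause; odesat’s actual equations differ and carry the two memory variables described above:

```python
import math
import random

# (x1 OR ~x2) AND (~x1 OR x2) AND (x1 OR x2), in DIMACS-style signed literals:
cnf = [[1, -2], [-1, 2], [1, 2]]

def sign(l):
    return 1.0 if l > 0 else -1.0

def solve(cnf, n, dt=0.05, steps=100_000, seed=0):
    rng = random.Random(seed)
    s = [rng.uniform(-1.0, 1.0) for _ in range(n)]  # continuous "spin" per variable
    a = [1.0] * len(cnf)                            # memory weight per clause
    for _ in range(steps):
        # Stop as soon as rounding the spins satisfies the formula.
        assign = [si > 0 for si in s]
        if all(any((l > 0) == assign[abs(l) - 1] for l in c) for c in cnf):
            return assign
        ds = [0.0] * n
        for m, clause in enumerate(cnf):
            # Each factor is 0 when its literal is fully satisfied and 1 when
            # fully violated; K is the clause's overall violation in [0, 1].
            factors = [(1.0 - sign(l) * s[abs(l) - 1]) / 2.0 for l in clause]
            K = math.prod(factors)
            for i, l in enumerate(clause):
                K_i = math.prod(f for j, f in enumerate(factors) if j != i)
                ds[abs(l) - 1] += 2.0 * a[m] * sign(l) * K_i * K
            a[m] += dt * a[m] * K  # memory grows while clause m stays violated
        s = [max(-1.0, min(1.0, si + dt * di)) for si, di in zip(s, ds)]
    return None

print(solve(cnf, 2))  # e.g. [True, True]
```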

This idea comes from the paper:

There is also an integer-programming version of the idea:

According to the creators of the framework, this approach tends to scale more robustly, irrespective of problem structure, than traditional SAT solvers. I can’t verify this, but I can verify that it’s at least effective as a technique. Conceptually, it seems like a continuous, fully parallel variant of WalkSAT, though less naive than the obvious translation.

It can solve hard SAT problems, but it tends to be slower than traditional methods, if only due to the overhead of simulating a dynamical system. The current version of odesat doesn’t implement parallelism. I wanted to make a GPU version using ArrayFire, but I couldn’t get it installed and gave up; such a version would be necessary for maximum efficiency. As it is, it’s likely 100x-1000x slower than it would be on a GPU.

This approach may actually be well suited to MaxSAT, as it continuously tries to increase the number of satisfied clauses and will keep trying to do better for as long as it runs. My understanding is that it’s guaranteed to find an actual maximizing assignment eventually; however, unsatisfiable instances have no stable equilibria, so the dynamics would instead settle into a recurrent cycle passing through near-maximal assignments.