## We Say Branches And You Say Choices

* You like tomato and I like tomahto *

Oded Green, Marat Dukhan, and Richard Vuduc are researchers at Georgia Institute of Technology—my home institution. They recently presented a paper at the Federated Conference titled, “Branch-Avoiding Graph Algorithms.”

Today Ken and I would like to discuss their interesting paper, and connect it to quite deep work that arises in computational logic.

As a co-inventor of the Federated-type meeting—see this for the story—I have, curiously, only gone to about half of them. One of the goals of these meetings is to get diverse researchers to talk to each other. One of the obstacles is that the language is often different: researchers often call the same abstract concept by different names. One of my favorite examples was between “breadth-first search” and “garbage collection”—see the story here.

Another deeper reason is that they may be studying concepts that are related but not identical. We will study such an example today. It connects logicians with computer architects.

One group calls the algorithmic concept *choicelessness* and the other calls it *reduced branching*. This reminds us of the old song “Let’s Call the Whole Thing Off” by George and Ira Gershwin:

The song is most famous for its “You like to-may-toes and I like to-mah-toes” and other verses comparing their different regional dialects.

Theorists have been studying various restrictions on polynomial-time algorithms for years. What is interesting—cool—is that recently some of these ideas have become important in practical algorithms. Our goal here is to explain the main ideas.

## Labels and Choices

A binary string x is naturally an ordered structure. We could present x via a number n standing for the index set {1, …, n} and a list of pairs (i, b) such that bit i of x is b. We could give that list in any order, but we generally can’t permute the labels i—doing so would change the string. We avoid fussing with this and just give the bits of the string in the canonical order.

When the input is a *graph* G, however, there are many possible orders in which the vertices can be labeled and the edges presented. Usually we give a number n standing for the vertex set {1, …, n} and a list of edges (i, j), in some selected order. The point is that properties of the graph do not depend on the labeling. We would also like the output of algorithms not to depend on the order edges are presented. The relevant question is,

To what extent do the output and execution pattern of an algorithm A depend on the labeling used for the graph and the order of presenting edges?

Ideally there would be no dependence. However, consider a simple sequential breadth-first search (BFS) algorithm that maintains a list Q of not-yet-expanded members of the set R of nodes reached from the start vertex s. Whenever Q is nonempty it must choose a next u in Q to expand. The choice is arbitrary unless determined by the labeling (u can be the least member of Q) or by the presentation order of neighbors of some previous vertex that were enqueued. An ordering might be “lucky” for some graphs if it minimizes the number of times a neighbor is generated that already belongs to R, and foreknowledge of |R| and of G being connected can help the algorithm know when to stop.
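To make the choice points concrete, here is a minimal sketch of such a BFS (our own illustration, not code from any of the papers discussed); the queue pop and the neighbor loop are exactly where the ordering sneaks in:

```python
from collections import deque

def bfs(n, adj, s):
    """Sequential BFS from s, where adj maps each vertex to a neighbor list."""
    reached = {s}          # the set R of nodes reached so far
    frontier = deque([s])  # the list Q of reached-but-unexpanded nodes
    order = []
    while frontier:
        u = frontier.popleft()    # choice: which member of Q to expand next
        order.append(u)
        for v in adj[u]:          # choice: the presentation order of neighbors
            if v not in reached:  # a data-dependent branch on every edge
                reached.add(v)
                frontier.append(v)
    return order
```

Swapping `popleft` for `pop`, or shuffling each `adj[u]`, gives a different expansion order on the same graph, which is precisely the arbitrariness at issue.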

An example of giving different output—one that strikes us as a harder example—is the O(2^k · m)-time algorithm for finding a vertex cover of size k. It cycles through bit sequences w in {0,1}^k. For each one, it takes the edges (u, v) in order, and if neither u nor v belongs to the growing cover C, adds u to C if the next bit of w is 0, adding v to C otherwise. Not just the edge presentation but also the ordering used to distinguish u from v affects the final output C—for reasons apart from the ordering of w.
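A sketch of that search (the function name and bookkeeping are ours, offered as an illustration rather than a definitive rendering):

```python
from itertools import product

def vertex_cover_k(edges, k):
    """Search for a vertex cover of size at most k by trying every bit
    string w of length k: bit 0 picks the first endpoint u, bit 1 picks v."""
    for w in product([0, 1], repeat=k):
        cover, bits = set(), iter(w)
        feasible = True
        for u, v in edges:
            if u in cover or v in cover:
                continue           # edge already covered
            try:
                b = next(bits)
            except StopIteration:  # more than k picks needed: this w fails
                feasible = False
                break
            cover.add(v if b else u)
        if feasible and len(cover) <= k:
            return cover
    return None
```

On the path with edges `[(0, 1), (1, 2), (2, 3)]` and k = 2 this returns `{0, 2}`, but flipping the first edge to `(1, 0)` returns `{1, 2}`: the (u, v) orientation alone changes the output, just as the text says.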

Can we reduce or eliminate the conditional branching that depends on the orderings? That is the question we see as common ground between the two concepts.

## Choicelessness

The search for ways to eliminate the dependence on the linear ordering led logicians, years ago, to the notion of PTime logics. In particular they asked if it is possible to create a model of polynomial time that does not depend on the ordering of the input. It was conjectured that such a model does not exist.

For example the work of Andreas Blass, Yuri Gurevich, and Saharon Shelah studies an algorithmic model and accompanying logic called CPT for “Choiceless Polynomial Time.” We’ll elide details of their underlying “Abstract State Machine” (ASM) model which comes from earlier papers by Gurevich, but simply note that the idea is to replace arbitrary choices that usual algorithms make with parallel execution. The key restriction is that the algorithms must observe a polynomial limit on the number of code objects that can be executing at any one time. Here is how this plays out in their illustration of BFS:

The algorithm successively generates the “levels” L_d of nodes at distances d = 1, 2, 3, … from s. In case v is a neighbor of more than one u in the current level, their ASM avoids having a separate branch for each edge and so avoids an exponential explosion. There are no more code objects than vertices active at any time, so the polynomial restriction is observed.
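In the same spirit, a level-synchronous BFS can be written with no per-vertex choice at all: each round is one bulk set operation over the current level. This is only our sketch of the idea, not the ASM program itself:

```python
def bfs_levels(adj, s):
    """Generate the levels L_0, L_1, ... of vertices at each distance from s.
    No individual vertex is ever chosen ahead of another; a vertex reachable
    via several edges lands in exactly one level, with no branch per edge."""
    levels = [{s}]
    seen = {s}
    while levels[-1]:
        nxt = {v for u in levels[-1] for v in adj[u]} - seen
        seen |= nxt
        levels.append(nxt)
    return levels[:-1]  # drop the trailing empty level
```

The only "parallelism" here is the set comprehension, but it captures the point: the choices have been replaced by a bounded bulk operation per round.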

Not all polynomial-time decidable properties of graphs are so readily amenable to such algorithms, and indeed they prove that having a perfect matching is not expressible in their accompanying CPT logic even when augmented with knowledge of n. At the time they pointed out that:

The resulting logic expresses all properties expressible in any other PTime logic in the literature.

A different extension by Anuj Dawar employing linear algebra is not known to be comparable with CPT, according to slides from a talk by Wied Pakusa on his joint paper with Faried Abu Zaid, Erich Grädel, and Martin Grohe. That paper extends the following result by Shelah from bipartite graphs to more general structures with two “color classes”:

**Theorem 1** *Bipartite matching is decidable in CPT+Counting.*

There are many classes of graphs such that restricted to graphs in that class, the power of full PTime logic with order is no greater than CPT over that class. These include graphs of bounded treewidth and graphs with excluded minors, along lines of work by Grohe that we covered some time back. For other classes the power of CPT is open.

## Branches

We sense a possible and surprising connection between this beautiful foundational work of Blass, Gurevich, and Shelah and an extremely important practical program.

When a modern computer makes choices while computing, it generally loses some performance. The reason is that modern computers predict choices: they call it branch prediction. If the computer correctly predicts the branch taken, the choice made, then the computation runs faster. If it mispredicts, the computation runs slower.

Naturally much work has gone into making branch predictions correctly. In their recent paper, Green, Dukhan, and Vuduc study a radical way to improve prediction: make fewer branches. Trivially, if fewer choices are made, there are fewer branches to predict, so the predictor should be right more often. They consider two classic graph problems: connected components and breadth-first search (BFS). Their main result is that by rewriting the algorithms to make fewer choices they can improve performance by as much as 30%–50%. This suggests that one should seek graph algorithms and implementations that avoid as many branches as possible.

They add:

As a proof-of-concept, we devise such implementations for both the classic top-down algorithm for BFS and the Shiloach-Vishkin algorithm for connected components. We evaluate these implementations on current x86 and ARM-based processors to show the efficacy of the approach. Our results suggest how both compiler writers and architects might exploit this insight to improve graph processing systems more broadly and create better systems for such problems.

Here are the old-and-new algorithms for BFS in their paper:
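We cannot reproduce the figure here, but the flavor of such a rewrite can be sketched (in Python rather than their C, and not their exact code): replace a data-dependent `if` in the per-edge distance update with a 0/1 flag blended arithmetically, which a compiler can lower to a conditional move instead of a jump.

```python
def relax_branchy(dist, u, v):
    # conventional update: one hard-to-predict branch per edge
    if dist[u] + 1 < dist[v]:
        dist[v] = dist[u] + 1

def relax_branchless(dist, u, v):
    # branch-avoiding flavor: compute the comparison as a 0/1 value
    # and blend the old and new distances arithmetically
    b = int(dist[u] + 1 < dist[v])
    dist[v] = b * (dist[u] + 1) + (1 - b) * dist[v]
```

Both functions compute the same result; the second executes the same instructions regardless of the data, which is the property the branch predictor rewards.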

Our speculative question is, can the latter algorithm be inferred from the CPT formalism of the logicians? How much “jiggery-pokery” of the previous illustration from the logicians’ paper would it take, or are we talking “applesauce”? More broadly, what more can be done to draw connections between “Theory A” and “Theory B” as discussed in comments to Moshe Vardi’s post here?

## Open Problems

Are choices and branches really connected as we claim? Can we replace the search for algorithms that make no choices with a search for algorithms that merely reduce choices, branches? In a sense, can we make choices a resource like time and space: not trying to drive it to zero, as choice-free algorithms do, but just reducing the number of choices? This may only lead to modest gains in performance, but in today’s world, where processor speeds are not growing as they once did, perhaps this is a very interesting question.

This is post number 128 under our joint handle “Pip” by one WordPress count, though other counts say 127 or 129. As we wrote in that linked post: Sometimes we will use Pip to ask questions in a childlike manner, mindful that others may have reached definite answers already. We have thoroughly enjoyed the partnership and look forward to reaching the next power of 2.

## Fathering Randomized Fault Tolerance

* Plus visiting Michael Rabin and talking about Gödel’s Theorems *

Michael Ben-Or and Michael Rabin have won the 2015 Dijkstra Prize for Distributed Computing. The citation says,

In [two] seminal papers, published in close succession in 1983, Michael Ben-Or and Michael O. Rabin started the field of fault-tolerant randomized distributed algorithms.

Today Ken and I wish to congratulate both Michaels for the well deserved recognition for their brilliant work.

Read more…

## Security Via Surrender

* A new approach to protecting data and identity *


David Sanger and Julie Davis are reporters for the paper of record—the New York Times. Their recent article starts:

WASHINGTON—The Obama administration on Thursday announced what appeared to be one of the largest breaches of federal employees’ data, involving at least four million current and former government workers in an intrusion that officials said apparently originated in China.

The compromised data was held by the Office of Personnel Management, which handles government security clearances and federal employee records. The breach was first detected in April, the office said, but it appears to have begun at least late last year.

The target appeared to be Social Security numbers and other “personal identifying information,” but it was unclear whether the attack was related to commercial gain or espionage. …

Today Ken and I want to suggest a new approach to data breaches like this.

Read more…

## Minor Insights Are Useful

* Some examples of small insights that help *


Julia Chuzhoy and Chandra Chekuri are experts on approximation algorithms: both upper and lower bounds. Each is also interested in graph theory as it applies to algorithms.

Today Ken and I wish to talk about their recent papers on structural theorems for graphs.

Read more…

## Puzzling Evidence

* Exponential hardness connects broadly to quadratic time *


Arturs Backurs and Piotr Indyk are student and advisor. The latter, Piotr, is one of the leading algorithms and complexity theorists in the world—what an honor it must be for Arturs to work with him as his advisor.

Today Ken and I want to talk about their paper on edit distance and an aspect that we find puzzling.

Read more…

## John and Alicia Nash, 1928,1933–2015

* Our condolences *

Awesome Stories source

John Nash and his wife Alicia were killed in a taxi accident on the New Jersey Turnpike Saturday afternoon. They were on their way back from Norway where he and Louis Nirenberg had just accepted the 2015 Abel Prize. Besides meeting the king of Norway, Nash had also expressed a desire to meet world chess champion Magnus Carlsen during remarks at Princeton’s celebration of his Abel Prize in March, and that was also granted this past week.

Today Dick and I join many expressing shock and offering condolences.

Read more…

## The Shapes of Computations

* Or rather, what can the shapes of proofs tell us about them? *

April CACM source

Juris Hartmanis did much to lay the landscape of computational complexity beginning in the 1960s. His seminal paper with Richard Stearns, “On the Computational Complexity of Algorithms,” was published 50 years ago this month, as observed by Lance Fortnow in his blog with Bill Gasarch. It is a great achievement to open a new world, but all the more mysterious that after 50 years so much of its landscape remains unknown.

Today we ask what might determine the unseen topography and how much some recent large-data discoveries may help to map it.

Read more…