Human-centered redistricting automation in the age of AI

Redistricting (the constitutionally mandated, decennial redrawing of electoral district boundaries) can distort representative democracy. An adept map drawer can elicit a wide range of election outcomes just by regrouping voters (see the figure). When there are thousands of precincts, the number of possible partitions is astronomical, giving rise to enormous potential for manipulation. Recent technological advances have enabled new computational redistricting algorithms, deployable on supercomputers, that can explore trillions of possible electoral maps without human intervention. This leaves us to wonder if Supreme Court Justice Elena Kagan was prescient when she lamented, "[t]he 2010 redistricting cycle produced some of the worst partisan gerrymanders on record. The technology will only get better, so the 2020 cycle will only get worse" (Gill v. Whitford). Given the irresistible urge of biased politicians to use computers to draw gerrymanders and the capability of computers to autonomously produce maps, perhaps we should just let the machines take over. The North Carolina Senate recently moved in this direction when it used a state lottery machine to choose from among 1,000 computer-drawn maps. However, improving the process, and more importantly the outcomes, results not from developing technology but from our ability to understand its potential and to manage its (mis)use.

It has taken many years to develop the computing hardware, derive the theoretical basis, and implement the algorithms that automate map creation (both generating enormous numbers of maps and uniformly sampling them) (1–4). Yet these innovations have been easy compared with the very difficult problem of ensuring fair political representation for a richly diverse society. Redistricting is a complex sociopolitical issue for which the role of science and the advances in computing are nonobvious. Accordingly, we must not allow a fascination with technological methods to obscure a fundamental truth: The most important decisions in devising an electoral map are grounded in philosophical or political judgments about which the technology is irrelevant. It is nonsensical to completely transform a debate over philosophical values into a mathematical exercise.

As technology advances, computers can digest progressively larger quantities of data per unit of time. Yet more computation is not equivalent to more fairness. More computation fuels an increased capacity for identifying patterns within data. But more computation has no relationship with the moral and ethical standards of an evolving and developing society. Neither computation nor even an equitable process guarantees a fair outcome.

The way forward is for people to work collaboratively with machines to produce results not otherwise possible. To do this, we must capitalize on the strengths and minimize the weaknesses of both artificial intelligence (AI) and human intelligence. Ensuring representational fairness requires metacognition that integrates creative and benevolent compromises. Humans have the advantage over machines in metacognition. Machines have the advantage in producing large numbers of rote computations. Although machines produce information, humans must infuse values to make judgments about how this information should be used (5).

Figure: Markedly different outcomes can emerge when six Republicans and six Democrats in these 12 geographic units are grouped into four districts. A 50-50 party split can be turned into a 3:1 advantage for either party. When redistricting a state with thousands of precincts, the potential for political manipulation is enormous.
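To make the figure's arithmetic concrete, here is a minimal Python sketch (ours, not the authors'; it ignores geography and contiguity, which any lawful map must respect) that enumerates every grouping of 12 voters, six per party, into four districts of three and tallies the resulting seat splits:

```python
from itertools import combinations
from collections import Counter

# Toy electorate: 6 Republicans and 6 Democrats across 12 units.
voters = ["R"] * 6 + ["D"] * 6

def partitions_into_triples(units):
    # Yield every way to split `units` into unordered groups of 3.
    if not units:
        yield []
        return
    first, rest = units[0], units[1:]
    for pair in combinations(range(len(rest)), 2):
        group = [first, rest[pair[0]], rest[pair[1]]]
        remaining = [u for i, u in enumerate(rest) if i not in pair]
        for tail in partitions_into_triples(remaining):
            yield [group] + tail

# Tally how many of the 15,400 groupings yield each seat outcome.
outcomes = Counter()
for plan in partitions_into_triples(list(range(12))):
    r_seats = sum(1 for district in plan
                  if sum(voters[i] == "R" for i in district) >= 2)
    outcomes[r_seats] += 1

for r_seats, count in sorted(outcomes.items()):
    print(f"{r_seats} R seats / {4 - r_seats} D seats: {count} plans")
```

Even in this toy setting, the same 50-50 electorate yields anything from a 3:1 Republican advantage to a 3:1 Democratic advantage depending solely on how voters are grouped.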

Accordingly, machines can be tasked with the menial aspects of cognition: the meticulous exploration of the astronomical number of ways in which a state can be partitioned. This helps us classify and understand the range of possibilities and the interplay of competing interests. Machines enhance and inform intelligent decision-making by helping us navigate the unfathomably large and complex informational landscape. Left to their own devices, humans have shown themselves to be unable to resist the temptation to chart biased paths through that terrain.

The ideal redistricting process begins with humans articulating the initial criteria for the construction of a fair electoral map (e.g., population equality, compactness measures, constraints on breaking political subdivisions, and representation thresholds). Here, the concerns of many different communities of interest should be solicited and considered. Note that this starting point already requires critical human interaction and considerable deliberation. Determining what data to use, and how, is not automatable (e.g., citizen voting-age population versus voting-age population, relevant past elections, and how to forecast future vote choices). Partisan measures (e.g., mean-median difference, competitiveness, likely seat outcome, and efficiency gap) as well as vote prediction models, which are often contentious in court, should be transparently specified.
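Two of the partisan measures named above are simple enough to sketch in code. The following Python fragment is illustrative only; the function names, inputs, and sign conventions are our assumptions, and any real analysis would first have to settle the contested data questions just described:

```python
from statistics import mean, median

def mean_median_difference(dem_shares):
    # Mean minus median of one party's district-level two-party vote shares;
    # a nonzero value is one symptom of an asymmetric plan.
    return mean(dem_shares) - median(dem_shares)

def efficiency_gap(district_votes):
    # Efficiency gap from (dem_votes, rep_votes) pairs, one per district.
    # Wasted votes: all of the loser's votes, plus the winner's votes beyond
    # the 50% needed to win. Ties are awarded to Republicans for simplicity.
    # Positive values mean Democrats wasted more votes under this convention.
    wasted_dem = wasted_rep = total = 0
    for dem, rep in district_votes:
        total += dem + rep
        threshold = (dem + rep) / 2
        if dem > rep:
            wasted_dem += dem - threshold
            wasted_rep += rep
        else:
            wasted_rep += rep - threshold
            wasted_dem += dem
    return (wasted_dem - wasted_rep) / total

# Made-up vote counts for four districts, purely for illustration.
votes = [(55, 45), (61, 39), (42, 58), (44, 56)]
shares = [d / (d + r) for d, r in votes]
print(mean_median_difference(shares), efficiency_gap(votes))
```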

Once we have settled on the inputs to the algorithm, the computational analysis produces a large sample of redistricting plans that satisfy these principles. Trade-offs usually arise (e.g., adhering to compactness rules might require splitting jagged cities). Humans must make value-laden judgments about these trade-offs, often through contentious debate.
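The sampling step itself can be caricatured in a few lines. The sketch below uses naive random assignment with rejection, purely as a stand-in for the far more sophisticated ensemble samplers cited earlier; every name, threshold, and the toy data are our own illustrative assumptions:

```python
import random

def within_population_tolerance(plan, populations, tol=0.05):
    # Accept plans whose district populations all deviate from the
    # ideal (total population / number of districts) by at most `tol`.
    totals = {}
    for unit, district in plan.items():
        totals[district] = totals.get(district, 0) + populations[unit]
    ideal = sum(populations.values()) / len(totals)
    return all(abs(t - ideal) / ideal <= tol for t in totals.values())

def sample_plans(random_plan, constraints, n_keep, max_tries=100_000):
    # Draw candidate plans and keep those satisfying every constraint.
    kept = []
    for _ in range(max_tries):
        plan = random_plan()
        if all(check(plan) for check in constraints):
            kept.append(plan)
            if len(kept) == n_keep:
                break
    return kept

random.seed(1)
units = list(range(12))
populations = {u: random.randint(80, 120) for u in units}

def toy_random_plan():
    # Randomly assign 12 units to 4 districts of 3 (contiguity ignored).
    shuffled = units[:]
    random.shuffle(shuffled)
    return {u: i // 3 for i, u in enumerate(shuffled)}

plans = sample_plans(
    toy_random_plan,
    [lambda p: within_population_tolerance(p, populations, tol=0.10)],
    n_keep=50,
)
print(f"kept {len(plans)} plans that meet the population constraint")
```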

The process would then iterate. After some contemplation, we may decide, perhaps, on two, not three, majority-minority districts so that a particular town is kept together. These refined goals could then be specified for another computational analysis round with further deliberation to follow. Sometimes a Pareto improvement principle applies, with the algorithm assigned to ascertain whether, for example, city splits or minority representation can be maintained or improved even as one raises the overall level of compliance with other factors such as compactness. In such a process, computers assist by clarifying the feasibility of various trade-offs, but they do not supplant the human value judgments that are necessary for adjusting these plans to make them humanly rational. Neglecting the essential human role is to substitute machine irrationality for human bias.
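That Pareto check is easy to state precisely. A hypothetical sketch (the score names and better/worse conventions are ours) that filters an ensemble for plans holding the protected criteria steady while strictly improving another:

```python
def pareto_improvements(current, ensemble, protect, improve):
    # Return ensemble plans that are no worse than `current` on every
    # protected score and strictly better on the score to improve.
    # `protect` maps score name -> True if higher is better, False if lower.
    # Plans are dicts of score name -> value; `improve` is higher-is-better.
    def no_worse(plan, name):
        if protect[name]:
            return plan[name] >= current[name]
        return plan[name] <= current[name]

    return [plan for plan in ensemble
            if all(no_worse(plan, name) for name in protect)
            and plan[improve] > current[improve]]

# Illustrative scores: keep minority districts and city splits no worse,
# look for strictly better compactness.
current = {"minority_districts": 2, "city_splits": 3, "compactness": 0.31}
ensemble = [
    {"minority_districts": 2, "city_splits": 3, "compactness": 0.38},
    {"minority_districts": 1, "city_splits": 2, "compactness": 0.45},
]
print(pareto_improvements(
    current, ensemble,
    protect={"minority_districts": True, "city_splits": False},
    improve="compactness",
))  # only the first candidate qualifies
```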

Automation in redistricting is not a substitute for human intelligence and effort; its role is to augment human capabilities by regulating nefarious intent with increased transparency, and by bolstering productivity by efficiently parsing and synthesizing data to improve the informational basis for human decision-making. Redistricting automation does not replace human labor; it improves it. The critical goal for AI in governance is to design successful processes for human-machine collaboration. This process must inhibit the ill effects from sole reliance on humans as well as overreliance on machines. Human-machine collaboration is key, and transparency is essential.

The most promising institutional route in the near term for adopting this human-machine line-drawing process is through independent redistricting commissions (IRCs) that replace politicians with a balanced set of partisan citizen commissioners. IRCs are a relatively new concept and exist in only some states. They have varied designs. In eight states, a commission has primary responsibility for drawing the congressional plan. In six, they are only advisory to the legislature. In two states, they have no role unless the legislature fails to enact a plan. IRCs also vary in the number of commissioners, partisan affiliation, how the pool of applicants is created, and who selects the final members.

The lack of a blueprint for an IRC allows each to set its own rules, paving the way for new approaches. Although no best practices have yet emerged for these new institutions, we can glean some lessons from past efforts about how to integrate technology into a partisan-balanced deliberation process. For example, Mexico's process integrated algorithms but struggled with transparency, and the North Carolina Senate relied heavily on a randomness component. Both offer lessons and help us refine our understanding of how to keep bias from creeping into the process.

Once these structural decisions are made, we must still contend with the fact that devising electoral maps is an intricate process, and IRCs generally lack the expertise that politicians and their staffs have cultivated from decades of experience. In addition, as the bitter partisanship of the 2011 Arizona citizen commission demonstrated, without a method to assess the fairness of proposals, IRCs can easily deadlock or devolve into lengthy litigation battles (6). New technological tools can aid IRCs in fulfilling their mandate by compensating for this experience deficiency as well as providing a way to benchmark fairness conceptualizations.

To maintain public confidence in their processes, IRCs would need to specify the criteria that guide the computational algorithm and implement the iterative process in a transparent manner. Open deliberation is crucial. For instance, once the range of maps is known to produce, say, a seven-to-eight likely split in Democrat-to-Republican seats 35% of the time, an eight-to-seven likely Democrat-to-Republican split 40% of the time, and something outside these two choices 25% of the time, how does an IRC choose between these partisan splits? Do they favor a split that produces more compact districts? How do they weigh the interests of racial minorities versus partisan considerations?
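Mechanically, producing such a summary from an ensemble is trivial; the deliberation it forces is not. A small illustrative sketch (the seat counts and weights are invented to mirror the example above):

```python
import random
from collections import Counter

def seat_split_distribution(dem_seat_counts, total_seats):
    # Percentage of sampled plans producing each D-to-R seat split.
    counts = Counter(dem_seat_counts)
    n = len(dem_seat_counts)
    return {f"{d}-{total_seats - d} (D-R)": round(100 * c / n, 1)
            for d, c in sorted(counts.items())}

random.seed(0)
# Predicted Democratic seats for 1,000 sampled 15-seat plans (made up).
sims = random.choices([6, 7, 8, 9], weights=[10, 35, 40, 15], k=1000)
print(seat_split_distribution(sims, total_seats=15))
```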

Regardless of what technology may be developed, in many states the majority party of the state legislature assumes the primary role in creating a redistricting plan and, with rare exceptions, enjoys wide latitude in constructing district lines. There is neither a requirement nor an incentive for these self-interested actors to consent to a new process or to relinquish any of their constitutionally granted control over redistricting.

All the same, technological innovation can still have benefits by ameliorating informational imbalance. Consider redistricting Ohio's 16 congressional seats. A computational analysis might reveal that, given some set of prearranged criteria (e.g., equal population across districts, compact shapes, a minority district, and keeping particular communities of interest together), the number of Republican congressional seats usually ends up being 9 out of 16, and almost never more than 11. Although the politicians could still then introduce a map with 12 Republican seats, they would now have to weigh the potential public backlash from presenting electoral districts that are believed, a priori, to be overtly and excessively partisan. In this way, the information that is made more broadly known through technological innovation induces a new pressure point on the system whereby reform might occur.
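A hypothetical sketch of that benchmarking logic (the ensemble here is fabricated to mirror the Ohio illustration, and the 1% flag threshold is our assumption, not a legal standard):

```python
import random

def outlier_share(proposed_rep_seats, ensemble_rep_seats, threshold=0.01):
    # Fraction of sampled plans with at least as many Republican seats as
    # the proposal; flag the proposal when that fraction falls below
    # `threshold`.
    at_or_above = sum(1 for s in ensemble_rep_seats
                      if s >= proposed_rep_seats)
    share = at_or_above / len(ensemble_rep_seats)
    return share, share < threshold

random.seed(2)
# Invented Republican seat counts across 10,000 criteria-satisfying plans:
# usually 9 of 16, almost never more than 11.
ensemble = random.choices([8, 9, 10, 11], weights=[15, 55, 25, 5], k=10_000)

share, flagged = outlier_share(12, ensemble)
print(f"{share:.2%} of plans reach 12+ Republican seats; outlier: {flagged}")
```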

Although politicians might not welcome the changes that technology brings, they cannot prevent the ushering in of a new informational era. States are constitutionally granted the right to enact maps as they wish, but their processes in the emerging digital age are more easily monitored and assessed. Whereas before, politicians exploited an information advantage, scientific advances can decrease this disparity and subject the process to increased scrutiny.

Although science has the potential to loosen the grip that partisanship has held over the redistricting process, we must ensure that the science behind redistricting does not, itself, become partisanship's latest victim. Scientific research is never easy, but it is especially vulnerable in redistricting where the technical details are intricate and the outcomes are overtly political.

We must be wary of consecrating research aimed at promoting a particular outcome or believing that a scientist's credentials absolve partisan tendencies. In redistricting, it may seem obvious to some that the majority party has abused its power, but validating research that supports that conclusion because of a bias toward such a preconceived outcome would not improve societal governance. Instead, use of faulty scientific tests as a basis for invalidating electoral maps allows bad actors to later overturn good maps with the same faulty tests, ultimately destroying our ability to legally distinguish good from bad. Validating maps using partisan preferences under the guise of science is more dangerous than partisanship itself.

The courts must also contend with the inconvenient fact that although their judgments may rely on scientific research, scientific progress is necessarily and excruciatingly slow. This highlights a fundamental incompatibility between the precedential nature of the law and the unrelenting need for high-quality science to take time to ponder, digest, and deliberate. Because of the precedential nature of legal decision-making, enshrining underdeveloped ideas has harmful path-dependent effects. Hence, peer review by the relevant scientific community, although far from perfect, is clearly necessary. For redistricting, technical scientific communities as well as the social scientific and legal communities are all relevant and central, with none taking over the role of another.

The relationship of technology with the goals of democracy must not be underappreciated (or overappreciated). Technological progress can never be stopped, but we must carefully manage its impact so that it leads to improved societal outcomes. The indispensable ingredient for success will be how humans design and oversee the processes we use for managing technological innovation.

Acknowledgments: W.K.T.C. has been an expert witness for A. Philip Randolph Institute v. Householder, Agre et al. v. Wolf et al., and The League of Women Voters of Pennsylvania et al. v. The Commonwealth of Pennsylvania et al.
