October 18, 2019
3:00pm – 5:00pm
Precision and Uncertainty in a World of Data
The Departments of Anthropology and the History of Medicine and the Center for Medical Humanities and Social Medicine will be launching a two-year Sawyer Seminar on the topic of Precision and Uncertainty in a World of Data in the 2019-2020 academic year. This semester (Spring 2019), we are holding a series of reading group meetings to connect with faculty and students across the different divisions at Johns Hopkins University and beyond, in an effort to structure the seminar’s two-year course.
We envision the Sawyer Seminar prompting conversations around what kinds of ethical and social issues are new about our Big Data moment, what has carried over from the past, and what kinds of approaches might help us extend our understanding of this moment’s specificity. On this page, we will be posting notes on our reading group meetings. If you have attended one of our meetings and would like to contribute to this page with your notes, please let us know! If you would like to attend our events, get on our mailing list to stay informed, or get in touch with us with any questions or comments, please send an email to Canay Özden-Schilling (ozden@jhu.edu).
Naveeda Khan
Veena Das
Jeremy Greene
Canay Özden-Schilling
2/13/2019: Automating Inequality: A discussion of Virginia Eubanks’s book
3/6/2019: Privacy and Data: A Discussion with Anita Allen (UPenn)
4/10/2019: Big Data and Resource Allocation: A Discussion with Sanmay Das (Washington University)
5/6/2019: Algorithms and Accountability: A Discussion with Juliet Floyd (Boston University) and Matthew Jones (Columbia University)
4/10/2019 | Big Data and Resource Allocation: A Discussion with Sanmay Das | “Data Sciences and Society” Reading Group
For this meeting, we read two articles by Sanmay Das, both of which tackled the issue of how to improve the efficiency of the allocation of homelessness interventions. Dr. Das is interested in the application of optimization to social issues, from matching markets to social networks to finance. These two articles focusing on homelessness, a topic we discussed earlier in the semester after reading Automating Inequality, gave us a welcome opportunity to see what was inside the algorithms trusted to govern social life, so often black-boxed in social analyses.
At the start, Dr. Das offered a brief overview of the two papers but concentrated on describing the case study of homelessness services in a metropolitan area. In addition to describing technical issues, such as the way data were collated, the complexity of the model developed, and the features of the algorithms for alternative allocations of scarce resources, his summary included a fair amount of discussion of the issues relating to ethics and fairness and of the central role of what he called human interpretability. Canay Özden-Schilling provided a detailed comment on the papers, followed by a lively discussion in which Das fielded various questions.
Canay’s comments:
What we see when we open the black box of resource allocation algorithms is a matching system. In this paper, Das and his collaborators (from here on only “Das” for simplicity) work with data acquired from the homelessness services of a major metropolitan area. The algorithm currently in effect matches the specific interventions that alleviate homelessness (in this case five interventions ranging from simple prevention methods to the most comprehensive “permanent housing” tool) with a heterogeneous population with diverse needs. But is the system accomplishing what it has set out to do? Is it, in fact, reducing homelessness to the extent possible? Das measures this by asking a counterfactual question: would the outcome have been better if different households were matched with different interventions? What the paper does is a reshuffling of cards—simulating different matching scenarios and evaluating the aggregated numbers of the homeless in each hypothetical case. The proxy for continued homelessness is re-entry into the homeless system within two years. The proxy for how a household would respond to a different intervention is how other households with similar characteristics have historically responded to that particular intervention. When Das runs a simulation with these proxies and a better optimization method, re-entry into homelessness does indeed drop from the actual 43% to 37%.
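For readers who want to see what such a simulation might look like in code, here is a minimal sketch. It is not the authors' code, and it makes several assumptions not in the papers: households are represented by numeric feature vectors; a household's response to an intervention it never received is proxied by the average observed two-year re-entry of its k most similar households that did receive it; and each intervention's capacity is held fixed at its historical count while households are re-matched (via scipy's assignment solver) to minimize expected re-entries.

```python
# A hedged sketch of the counterfactual reallocation exercise described above,
# not a reconstruction of Das's implementation.
import numpy as np
from scipy.optimize import linear_sum_assignment

def counterfactual_reentry(X, received, reentered, n_interventions, k=25):
    """Estimate each household's probability of 2-year re-entry under each
    intervention, using the observed outcomes of its k most similar households
    that actually received that intervention (assumes every intervention has
    at least k historical recipients)."""
    n = len(X)
    probs = np.zeros((n, n_interventions))
    for j in range(n_interventions):
        idx = np.where(received == j)[0]               # households that actually got j
        for i in range(n):
            d = np.linalg.norm(X[idx] - X[i], axis=1)  # similarity of household i to them
            nearest = idx[np.argsort(d)[:k]]
            probs[i, j] = reentered[nearest].mean()    # historical response as the proxy
    return probs

def reallocate(probs, capacities):
    """Re-match households to interventions to minimize total expected re-entries,
    keeping each intervention's capacity fixed (one column per available slot)."""
    slots, cost_cols = [], []
    for j, cap in enumerate(capacities):
        slots += [j] * cap
        cost_cols.append(np.tile(probs[:, [j]], (1, cap)))
    cost = np.hstack(cost_cols)
    rows, cols = linear_sum_assignment(cost)
    new_assignment = np.array(slots)[cols]
    return new_assignment, cost[rows, cols].sum()
```

Comparing the expected re-entry total of such a simulated matching against the observed rate is, roughly, the kind of counterfactual exercise behind the drop from 43% to 37% reported in the paper.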
Put simply, the current system is offering too much assistance to someone who needs only a little and offering too little to someone who needs more. There is, Das argues, a huge number of households that could be helped by tweaking the algorithm for better matches. He adds, “The right approach is then to specify appropriate optimization goals, arrived at through the social processes of policy-making, which could be based on both efficiency and equity considerations.” The juxtaposition of efficiency and equity strikes me in this formulation. In our discussion on Automating Inequality, we dwelled on how some algorithmic tools conflate past marginalization with future high-risk status—hence perpetuating a feedback loop where marginalized people get more marginalized. That is to say, we talked extensively about bias, but perhaps not enough about efficiency, even though the two seem to be closely linked. Elsewhere in the paper, Das relays a striking finding—that when the inefficiencies are fixed in the allocation system, those who are helped more seem to be “those who stand out as being more in need.” Then I have to wonder: is inefficiency a form of bias? Is bias a form of inefficiency? How and where does bias occur separately from inefficiency?
This also gets back to one of the questions our reading group asked in our earlier discussion: is bias a factor of the design of algorithms or a factor of their implementation? The problem may very well be at the design level, where allocation designers choose to collect information on certain variables in trying to assess, e.g., the creditworthiness of a household, or in the way they code these variables (e.g., based on their assumptions about what kinds of living, housing, and parenting are proper). It seemed to me that in the two papers we read, the implication was that the inefficiency problem occurred at the level of implementation instead, since Das worked with the same design and data as provided to him by the homelessness service. Collecting different data would require a new set of eyes—new questions to ask the population, hence new qualitative research. This brings me to my next question: how would Das’s quantitative work, which improves upon the system’s existing quantitative approach, interface with qualitative research?
For instance, Das describes a very interesting instance of making an adjustment to his optimization algorithm for equity and fairness purposes. Das suspects that, as a result of optimizing the allocation, some households might have moved too far down the ladder of help during the simulations, which would create an undue fairness issue. To correct for that, Das goes back to the algorithm to add a constraint on how much a single household can move up or down as a result of the optimality adjustment. This struck me as a fundamentally qualitative kind of work on Das’s part—an endeavor to ask whether the quantitative work has fairness consequences that the algorithm may have been blind to. The questions, then, are compounded: is qualitative work reserved for human eyes that need to keep watch on harmed groups and constantly add constraints to the algorithm? Can we teach the machine to detect equity issues? If we are able to do so, doesn’t the defining and teaching of equity still constitute qualitative work? Is Das’s example getting to an answer to the question of how we fix bias—constant monitoring of the algorithm both qualitatively and quantitatively, human and machine alike?
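One way to picture such a constraint (again a hedged sketch, not Das's formulation) is to forbid, before optimizing, any reassignment that would move a household more than a chosen number of steps away from its current intervention on the ladder of assistance:

```python
# A minimal illustration of a movement constraint added to the reallocation
# sketch above; `max_shift` and the penalty value are hypothetical choices.
import numpy as np

def constrain_movement(probs, received, max_shift=1, penalty=1e9):
    """Copy the expected re-entry matrix and make interventions more than
    `max_shift` steps away from a household's current one prohibitively costly,
    so the optimizer cannot move any single household too far up or down."""
    constrained = probs.copy()
    n_households, n_interventions = probs.shape
    for i in range(n_households):
        for j in range(n_interventions):
            if abs(j - received[i]) > max_shift:
                constrained[i, j] = penalty
    return constrained

# Usage with the earlier reallocation sketch (hypothetical variable names):
#   assignment, total = reallocate(constrain_movement(probs, received), capacities)
```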
Das’s papers sing the praises of fixing small inefficiencies. As he puts it, “Small efficiencies in keeping people housed yield disproportionately large reductions in homelessness.” I am struck by the humility of this statement, how it presents the optimizer’s work as important and modest at the same time—not always the language we encounter in the worlds of data, machine learning, and algorithms. But this, I can’t help but notice, stands at odds with the assured title of the same paper, “Solving Homelessness.” The way I use “solve” in everyday life might not be the same as, for instance, “solving for x” in mathematics—or perhaps it is. In any case, this makes me wonder: can optimization really solve homelessness (or solve for homelessness)? Is there a place where inefficiency-fixing cannot go?
Veena’s discussion notes:
Das responded to these fascinating issues by first acknowledging that to speak of solutions to homelessness is full of problems, citing one of his students who, when asked whether the problem of homelessness could be solved by 2020, replied that under a particular definition of homelessness one could claim so, but then new problems would come rushing in in 2021. The further point Das made was that it is becoming increasingly clear that there are different notions of fairness and that it is mathematically impossible, as the various impossibility theorems show, to reconcile these different notions of fairness into one grand theory. For example, statistical fairness might conflict with fairness to the individual. Hence human deliberation and judgement are key to understanding how to make the debate on fairness and justice in the case of homelessness operative in a given context. Finally, one of the points Das emphasized in response was that whichever interventions you make, there will be some people who are adversely affected by the intervention. So there is a central role for rights to appeal and for efforts to modify the algorithms in view of the actual experiences of those adversely affected. The final decision on particular cases can only be taken by those who are actually working with the homeless. This is why, Das explained, he likes to work with those who have genuine stakes in the problem – transplant surgeons in the case of algorithms for kidney matches, case workers and public health specialists in work on homelessness, and so on.
In the question & answer session that followed, there was discussion around four main issues. First, how does one take care of the bias in the data, given that the data on the extent and type of homelessness were filtered through case workers’ decisions? Second, what was the rationale for taking a two-year duration, and would it affect the findings if the duration were reduced to one year or extended to three years, for instance? Third, what was the role of counterfactuals in the model—were these equivalent to the thought experiments in philosophy that are useful for clarifying a thought? Fourth, what kinds of systematic changes was the algorithm suggesting? Was there some way to identify specific types of households whose outcomes were improving?
Das responded that, indeed, it had taken them a whole year to clean the data, which link homeless service records with requests for assistance through a regional homeless hotline. By using the administrative data on a weekly basis for 166 weeks and using counterfactual data to ask whether a household would have re-entered the homeless system within two years, they found that their model was well-calibrated. They used a two-year period because a one-year period generated data that was very noisy, while a three-year period had too many variables. The paper, Das said, was in the nature of a proof of concept and a case study, meant to generate further discussion of fairness and ethical issues and of the long-term dynamics of systems that use this kind of predictive module. At the same time, he said, since current practices of allocation into different kinds of housing were not evidence-based, there was a need for widespread discussion of these kinds of issues among different constituencies. On the question of the ability of the algorithm to identify specific types of households that were improving, Das responded that their initial attempts to find such households did not yield a “nice” characterization, and he took the suggestion that they might want to do a baseline comparison with a random allocation to see how different the outcomes under the current mechanism would be from those under a random one. There was some general discussion of how qualitative methods might be added to these models, and Das responded that they did plan to interview caseworkers, but not until they had very well-defined questions and could generate some resources to help with the work of the caseworkers (e.g., providing money for additional personnel for the hotline, which was facing budget cuts). The caseworkers were very overworked and often very stressed by the pressures of work. But he and his colleagues would love to see some ethnographies of how case workers actually make decisions on allocations—what are their thought processes?
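For readers unfamiliar with the term, "well-calibrated" can be illustrated with a short sketch (an assumption about method, not a reconstruction of the authors' analysis): group households by predicted two-year re-entry probability and compare the mean prediction in each bin with the observed re-entry rate.

```python
# A generic calibration check; variable names are hypothetical.
import numpy as np

def calibration_table(predicted, observed, n_bins=10):
    """Mean predicted vs. observed 2-year re-entry rate per probability bin;
    a well-calibrated model shows the two tracking each other closely."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (predicted >= lo) & (predicted < hi)
        if mask.any():
            rows.append((lo, hi, predicted[mask].mean(),
                         observed[mask].mean(), int(mask.sum())))
    return rows  # (bin_low, bin_high, mean_predicted, observed_rate, n_households)
```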
Overall, the papers generated a very lively discussion across the boundaries of various disciplines, showing that, faced with urgent societal issues, different kinds of methodologies and theoretical preoccupations can be effectively calibrated to address issues of ethics.
3/24/2019 | Privacy and Data: A Discussion with Prof. Anita Allen | “Data Sciences and Society” Reading Group
Discussion notes by Canay Özden-Schilling, Naveeda Khan, & Veena Das:
This meeting marked the first time our reading group hosted a guest presenter, a practice we’ll continue at our next meetings this semester. We were fortunate to be joined by Prof. Anita Allen from the University of Pennsylvania, a lawyer and philosopher of law with a longstanding interest in and distinguished contribution to the theory of privacy. For the meeting, our 21 attendees read articles by Allen herself, Ian Kerr, and Helen Nissenbaum—all from a recent journal issue exploring the changing meanings of privacy in our data-laden moment.
In her opening discussion, Prof. Allen laid out a recent history of privacy law along with the evolution of her own thinking on the matter. Prof. Allen’s interest in a comprehensive theory of privacy goes back to the 1980s, a pre-Internet moment when the chief privacy concerns centered on reproductive health, the right to die, and LGBTQ rights. In Uneasy Access, her first book and notably the first book-length treatment of privacy, Allen championed privacy as a women’s right—a contentious notion for feminists to whom privacy seemed to be a dangerous vehicle for the cover-up of oppression. Regardless, Allen has held on to the notion that privacy is an inalienable part of freedom and is impassioned in her belief that liberal states can never allow its neglect or sacrifice.
By the same token, Allen also argues that people in liberal states have the responsibility to protect their own privacy. This notion, as Allen pointed out, has been endlessly complicated by the emergence of the Internet and the rampant voluntary sharing of data online. In our Big Data moment, of course, the individual protection of one’s own data no longer seems straightforward. These considerations helped Allen’s thinking evolve away from her original conception of privacy in Uneasy Access, which focused on limited access to individuals’ information, towards a more positive theory that incorporates rights to measures, to be provided by governments, that secure individuals’ privacy. We opened the floor for discussion on this note.
We began our general discussion with the article we read by Allen, which takes up a 2017 decision by India’s Supreme Court (Justice K.S. Puttaswamy vs Union of India) on the controversial biometric identification system, Aadhaar. The constitutional question before the court was whether privacy was a constitutionally protected right, given that the Constitution of India does not specifically mention privacy as a right. The Court ruled that privacy was a fundamental right derivable from the constitution, but in its application to Aadhaar it did not strike down the controversial Aadhaar bill, though it did strike down some of the government notifications on the necessity of producing an Aadhaar card to receive services. In the course of the judgment, the Court cited a number of North American and European philosophers of privacy, including Allen herself. In the article, Allen discusses the influence that philosophers might have on the evolution of privacy’s definitions. Some of our participants pointed to the grounding of the decision in jurisprudential thinking in India (despite citations of Allen and other legal philosophers), since the court derived the constitutional right to privacy from an expanded notion of life. It was pointed out that this expanded notion carried particular relevance in India, since the Bhagwati decision during the National Emergency had held the right to life to be a gift of the state and not a natural right. This prompted Allen to specify her interest in the case as that of an American scholar, to whom this decision is a hopeful illustration of what longstanding liberal ideas can accomplish today and of how old privacy theory can be repurposed to address our data-laden problems. Allen expressed her satisfaction with the Court’s recognition of poor people’s rights to privacy, but lamented the missed opportunity of defining privacy as a positive social right. It still felt to certain participants that this was a missed opportunity on Allen’s part to acknowledge that privacy is not simply an import from the Western liberal tradition but may truck with other adjoining notions—such as those of propriety and the appropriateness of boundaries—within other traditions, as well as a result of years of activism against the harassment by the state that the rules for the use of Aadhaar had entailed. The need to think not in terms of a concept and its widening field of application but in terms of lateral relations among concepts, to enrich the thinking and analysis of our present, was also an important consideration.
Following the interests of many in the group, our discussion then moved generally towards privacy in the biomedical and clinical fields. Allen took a question on the complications of drawing the boundaries of privacy in genomics research, where one’s volunteered genetic data also pertain to one’s family members. Other participants addressed the hypothesis from Ian Kerr’s article that human cognition is not a sine qua non of privacy violation—that an AI cognizer can also violate privacy. One of our medical practitioner participants argued against this hypothesis by pointing out that the bots that determine a patient’s need for vaccinations, for instance, have been privacy-enhancing. Other participants countered this point by questioning how capable or willing institutions have historically been to limit AI’s applications to their intended uses. Allen offered that in the medical field, as elsewhere, the legal definitions of privacy will have to compete with practitioners’ cultural definitions, which might not always neatly map onto institutional expectations—that people always find a way to get around privacy even though they are aware of its importance and the legal protections afforded to it. She is curious about why people work around privacy and what compels them, and she called on anthropologists to offer perspectives on this tendency with different case studies. Our participants also pointed out that this context-specific nature of privacy echoed the work of Helen Nissenbaum, who, in the third article we read for the event, developed the notion of privacy’s “contextual integrity”—that privacy can only ever be grasped with reference to the specifics of the “sender, recipient, subject, information type, and transmission principle.”
We circled back to what can be accomplished in the legal field to address global privacy concerns. Allen gave vivid examples from recent bioethics panels she has participated in, in the U.S. and the E.U., to highlight how philosophers can help shape this debate. The challenge, she pointed out, is not to pass just any privacy law—the U.S., for instance, has more privacy laws in place than anywhere else. The challenge is to get an adequate law enacted in a timely fashion, given the fast evolution of the technologies we are trying to monitor. We thank Prof. Allen for sharing with us her globally attuned perspective on the intersection of law, privacy, and data.
2/13/2019 | A discussion of Virginia Eubanks’s Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St. Martin’s Press, 2018) | “Data Sciences and Society” Reading Group
Discussion notes by Naveeda Khan:
I enjoyed reading this book. I was moved by it in many ways, by its attention to the specters of the poor in American history, its nuanced profiles of the poor as singular and collective in turn in the contemporary U.S., and its serious call to social action. It is not often that I read scholarly books that diagnose our present so presciently, showing how inequality is being scaled up and up and up and how it has become almost inexorable, and that take up the call to keep up the good fight. Other authors are made much more breathless by the scaling, their accounts marked by handwringing. Although I would not put Cathy O’Neil’s Weapons of Math Destruction in this last category, I did think that she seemed to have less interest in historical and social context, which led her to provide only a few portraits of affected people who could do little but feel stunned by the wizardry of the new algorithms ripping through their lives. In Eubanks one could see that even as the ambitions to automate social services got more elaborate and algorithms more complex, they were still embedded within institutional settings and political contexts, driven by state and individual interests and desires, and informed by longstanding biases that hadn’t gone away with automation but had become more embedded within its structures.
But praise and comparisons aside, it was precisely Eubanks’s focus on the history of the poor that garnered her some criticism at the start of the discussion of her book at our Data Sciences and Society Reading Group last Wednesday. What is so new about this moment if it is only an extension of how the U.S. has always dealt with its poor? Why isn’t this more modestly claimed as a study of state bureaucracy? And if there isn’t anything new about this moment, then why even bring up “high-tech tools” in the title? Why not go further into the hardware and design aspects of these tools, as O’Neil at least does? We decided that some of the promise of the title was realized in chapter four of Automating Inequality, “The Allegheny Algorithm,” in which Eubanks provides the three key data-based components that comprise the Allegheny Family Screening Tool (AFST), namely outcome variables, predictive variables, and validation data. At the same time, the author’s focus is insistently upon what is sought within existing bodies of data and how (the controversial mining practice of seeking out only highly correlated data points of statistical significance), rather than upon where these data come from in the first place, who has access to them, and for what varied purposes.
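For readers unfamiliar with these three terms, a generic sketch (not the AFST itself; all names here are hypothetical) shows how they fit together: predictive variables are the model's inputs, the outcome variable is what it is trained to predict, and validation data are held out to check its predictions.

```python
# A deliberately generic illustration of outcome variable, predictive variables,
# and validation data in a predictive risk model; not Allegheny County's tool.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

def fit_risk_model(predictive_variables, outcome_variable):
    """Fit a simple risk score and report how well it does on held-out validation data."""
    X_train, X_val, y_train, y_val = train_test_split(
        predictive_variables, outcome_variable, test_size=0.25, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return model, model.score(X_val, y_val)  # validation data check the model's predictions
```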
While speaking to designers of algorithms and automated systems would gain us ready insight into their healthy doubts and skepticism about their products, this book wasn’t about tooling and the creation of error-free systems by a few artisanal designers. It is about how such tools, ready or not, were captured, operationalized, and managed by states. It provided the dimension of the state that we missed in O’Neil, in whose book encounters between individuals and the systems that fail them happen more haphazardly; with Eubanks’s introduction of the dimension of the state, the aggregate effect of such automation came into clearer focus.
Our discussion of Eubanks kept returning to the question of how the past endures into the present. Is it really the case that history holds as unchanging, the return of the same, the poorhouse made into the digital poorhouse in the present? At the level of the U.S.’s proclaimed ethos of self-help and ambivalent attitudes towards the poor, one can make an argument for continuity, as Eubanks does, but at the level of the relationship between tools and states there are discernible shifts. If tools were created in the past to purge the numbers of poor dependents, they were initiated by states seeking a variety of ends, from the elusive search for efficiency to trying to help people. But now tools are created within the context of a widespread suspicion of government. Thus the work of automation now is not just to cut the welfare rolls but to render government irrelevant. This felt to us like an important difference from the past, as it makes automation more hydra-headed, directed at the poor and at government, raising the question of what else is under attack and being undermined.
We returned to our vexation with the anti-technical bias of the book, for instance the lack of acknowledgment that centralization may be beneficial in some instances, and the need for Eubanks to have larger data sets for her research to nuance her denunciatory stance. Perhaps then we would find that high-tech tools also serve the poor, but serve different groups within them, or appeal to aspects of their identities other than being poor—perhaps as white, male, vulnerable, pleasure-seeking individuals? Questions were raised about communities of interest within Reddit, subreddits, and the dark web, whose members may be on welfare and facing the negative effects of automation, but who have other dimensions to their lives. It is salutary that she gives the poor a face here, but do they seek only this face?
Finally, the one chapter that went into any depth into the actual mechanics of an automated system, chapter four, captured a different kind of fear than that of automation, the impersonality of services, or the breakdown of care without any possibility of human intervention and triage. The chapter also captured the fear of being modulated against one’s wishes, or even without one’s conscious knowledge, through interface with the machine. We got the example of the welfare counselors who started to question their judgment in determining the risk level of children within families when their scores were far off from those generated by the machine with its deep backlog of data. Instead of questioning or even overriding the machine’s decision, counselors began to doubt themselves and rerun evaluations to see if they could match the machine. This self-modulation, this fear of having one’s insides reshaped, offered a moment in which the present was not merely a recapitulation of the past but something new, unknown, and potentially terrifying. On reflection, this subject deserves more attention than Eubanks gave it. Our discussion sub-group, mostly anthropologists and historians, several with STS interests, and two clinicians, felt that we would like to encourage the participation of those with an interest in neuroscience and cognitive science to understand and diagnose this fear of manipulation, which includes the manipulation of so-called inner selves.
Discussion notes by Jeremy Greene:
I agree with Naveeda that Eubanks has produced a remarkable book that added greatly to the collective conversation of our interdisciplinary reading group, especially layered onto our recent reading of Cathy O’Neil’s Weapons of Math Destruction. Eubanks’s book is elegantly structured, well-written, and compelling, and it has the potential to engage broad popular and policy audiences. As Naveeda describes, Eubanks is able to capture in her case studies a nuanced sense of the historical and social context in which empirical knowledge about poverty is used to frame institutions that continue to separate and stigmatize. Several people in our discussion group of historians, anthropologists, clinicians, and bioethicists wondered, however, why Eubanks was not willing to look under the hood and show the reader how, exactly, these algorithms worked, in the way that O’Neil seemed consistently eager to do. Were she present in the room (as we hope she may be at a future Sawyer Seminar event), we would have liked to ask her how she might productively open the “black box” and expose the innards of the technologies she describes. Better yet, perhaps, would be to put Eubanks and O’Neil in conversation with one another, as each seems to have a piece of the puzzle that the other lacks: where O’Neil could really benefit from more engagement with historical and social context, Eubanks could benefit from more engagement with the workings of the technologies themselves.
Our discussion ended with an open question regarding known knowns, known unknowns, and unknown unknowns regarding algorithms and inequality. On the one hand, how do we learn the answers to questions we already knew to be important? E.g., changing definitions of privacy, rising saturation of data surveillance, the encoding of prior biases through computation, etc. On the other hand, what new questions might emerge from these engagements? E.g., how are new collectives being formed through these technologies? Whose voices are amplified through techniques of big data and machine learning, and whose are silenced? What forms of labor are being displaced, and what new forms are emerging? How do we attend to the changing interfaces through which people become data and/or have their understandings and future behaviors shaped?
Discussion notes by Veena Das & Canay Özden-Schilling:
Our group’s discussion of Automating Inequality centered on the relation between human bias and the bias introduced into decision models based on predictive algorithms. The book demonstrates how bias weaponizes these tools against poor populations. Is the problem with the design of these algorithms—e.g., the choice of certain variables, the omission of others, discriminatory assumptions about proper ways of parenting and organizing domestic space? In what ways does a certain (overwhelmingly white, middle-class) demographic become the standard for evaluating those who are at risk in decisions that determine, for instance, eligibility for social services, the allocation of scarce housing resources, and which children are at risk of abuse in their home environment? Or do the problems arise from poor implementation? We found that this was an empirical question to explore, one that left us wanting more extensive qualitative research as well as the development of mixed methods for opening up a wider set of issues. For instance, how would race figure as a factor if the sites chosen for the analysis of documents and interviews included poor black neighborhoods? Could one use the qualitative research to generate further hypotheses for designing surveys over a random sample of households, so as to determine the weight of different variables as the decision models are implemented at the local level? Since the population on whom the research was conducted consisted of families who were already under surveillance, either because of their own need to access social services or because they were reported for child abuse or minor crimes, it would not be possible to assess whether there were endangered children in families who had not come under the eye of the social service or criminal justice apparatus. These comments were not offered as criticism of the book per se but as issues that arose from the study.
Several of our participants were intrigued by the book’s portrayal of continuities in poverty management and discrimination against the poor from prior eras to our current moment. Can punitive resource allocation be attributed specifically to the work of algorithms? By the same token, are algorithms simply the henchmen of neoliberal governance? Some of us pointed to our experience with Big Data practitioners and students who believe in the unprecedented revelatory powers of Big Data—that it allows us to see reality in ways never seen before. But if, as Virginia Eubanks argues, the digital database is a continuation of yesteryear’s poorhouse (except now scaled up and everlasting), what does that mean for the specificity of our Big Data moment? We thought that one answer might have to do with contemporary processes of data collection—the scale of surveillance, voluntary vs. involuntary sharing of data, and the ownership of one’s data. We returned time and again to the relationship between Big Data and ethics. We agreed that the two were necessarily bound up with one another and that there was no way to extract the social from Big Data to arrive at neutral tools. There could be no universal ethical framework for the design and implementation of Big Data; a simple plea to return to human judgment and banish the machines wouldn’t do either. Bias in data and algorithms can take diverse forms; the ends of manipulation, be it by governments or corporations, are not uniform either. One possibility is that different kinds of questions arise over different scales of data – thus, for instance, the questions arising from population-level genomics might be very different from the questions arising from the data contained in individual clinical records or files in the criminal justice system. While in the current milieu questions of ethics seem closely tied up with distributional questions related to fairness and justice, were there other regions of ethics that could be unearthed from other traditions of philosophy, bioethics, and the social sciences? We are looking forward to continuing to explore this variety and the futures of resource allocation in future meetings.