Crowd2Map Tanzania is a crowdsourced initiative aimed at creating a comprehensive map of rural Tanzania, including detailed depictions of all of its villages, roads and public resources (such as schools, shops, offices etc.) in OpenStreetMap for everyone to use.
EyesOnALZ, a project led by the Human Computation Institute, will be among four projects featured in the premiere episode of The Crowd and The Cloud, a National Public Television mini-series by the creators of Cosmos.
By turning complex laboratory analysis into a game (Stall Catchers) that anyone can play, EyesOnALZ has made it possible to accelerate Cornell’s promising Alzheimer’s disease research, compressing decades of inquiry into just a few years.
Funded by a grant from the BrightFocus Foundation, HCI has been collaborating with Cornell, Berkeley, Princeton, WiredDifferently, and SciStarter to develop a platform for crowdsourcing the AD research being done at Cornell University.
“Stall Catchers” will allow participants to look at movies of real blood vessels in mouse brains, and search for “stalls” – clogged capillaries where blood is no longer flowing. By “catching stalls” participants will be able to build up their scores, level up, and compete on the game leaderboard, as well as receive digital badges for their various achievements in the game.
Most importantly, the game will enable the crowdsourcing of promising Alzheimer’s research at the Schaffer-Nishimura Laboratory (Cornell Dept. of Biomedical Engineering). There, recent breakthroughs have been made in understanding the role of reduced brain blood flow in AD, and reversing some of the Alzheimer’s symptoms, such as memory loss and mood changes, by targeting blood vessel stalls.
Stall Catchers has been built on the existing platform of stardust@home – one of the first volunteer thinking projects. Just as aerogel images were studied in stardust@home to look for interstellar dust particles, in Stall Catchers a “Virtual Microscope” plays back vessel movies – images of consecutive layers of a live mouse brain. Participants analyze one microscopic vessel at a time, looking for signs of stalls.
The game, launched Oct. 1, 2016, is expected to remove the current analytic bottleneck of blood flow analysis in AD, and accelerate the research towards promising AD treatment candidates.
This report describes workshop activities and findings, identifies key research areas for further exploration, and calls for a National Human Computation Initiative with policy and funding support at all levels to advance the science of participatory systems.
The report was co-developed by the workshop organizers, including Pietro Michelucci of the Human Computation Institute, Lea Shanley of the Woodrow Wilson International Center for Scholars, and Janis Dickinson and Haym Hirsh of Cornell University. It also includes important contributions from many of the workshop participants.
MIT Technology Review reports on the workshop here.
Human Computation Institute is leading an initiative to develop an online Citizen Science platform that will enable the general public to contribute directly to Alzheimer’s Disease research and possibly lead to a new treatment target in just a few years.
It has long been known that reduced blood flow in the brain is associated with Alzheimer’s Disease and other forms of dementia. However, new imaging techniques have enabled our Cornell-based collaborators to make important discoveries about the mechanisms that underlie this reduced blood flow.
These findings are suggestive of a new treatment approach that could reduce cognitive symptoms and halt disease progression. However, arriving at a specific treatment target based on these findings requires additional research. Unfortunately, the data curation required to advance these studies is very labor-intensive: one hour’s worth of collected data requires a week’s worth of annotation by laboratory personnel. Indeed, the curation aspect of the analysis is so time-consuming that completing the studies necessary to identify a drug target could take decades.
Fortunately, accurate curation of the data, though still impossible for machines, involves perceptual tasks that are very easy for humans. We aim to address the analytic bottleneck via crowdsourcing using a divide-and-conquer strategy. The curation tasks in this research map closely to the tasks used in two existing citizen science platforms: stardust@home and EyeWire, both of which have enabled discoveries reported in the journal Science. In direct collaboration with the creators of these highly successful citizen science platforms, we are developing a new platform for public participation that we expect will reduce the time to a treatment from decades to only a few years.
If you are interested in participating in an online activity that will directly contribute to Alzheimer’s research, please pre-register here. By participating, you will not only help speed up a treatment, but also understand more about the disease and exactly how your efforts make a difference.
“Wicked problems”, such as climate change, poverty, and geopolitical instability, tend to be ill-defined, multifaceted, and complex, such that solving one aspect of the problem may create new, worse problems. The difficulty of engineering solutions to such problems is exacerbated by our inability to measure overall improvement. For example, we may observe economic improvements in a locale where we have implemented a poverty mitigation strategy, only to discover later that we have inadvertently introduced health issues that ultimately result in even worse financial burdens related to the consequent healthcare needs. And there may be other deleterious side-effects that are difficult to measure or that we fail to attribute to our implemented solution. So what can be done?
Given our interest in applying human-based computational (HC) systems to solving these problems, we might gain insight from computational complexity theory. This theoretical branch of computer science seeks to classify computational problems based on how difficult they are to solve. It answers questions like, “How long would it take (at best and worst) to find the shortest path that visits each of these eight U.S. cities exactly once?” It turns out that problems such as this one, canonically referred to as the “Traveling Salesman Problem”, are considered computationally intractable. Indeed, for the Traveling Salesman Problem, the processing time of the best known exact algorithm grows in proportion to n²·2ⁿ, where n is the number of cities. In practical terms, a computer that takes one minute to find the shortest path between eight cities would take more than a century to find the shortest path between thirty cities. But here’s the rub: there is no way to verify the solution other than to check it against all other possible paths, which is tantamount to recomputing the solution.
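To make the blow-up concrete, here is a minimal sketch of the naive exact approach, which simply tries every possible tour – there are (n−1)! of them from a fixed starting city. (The four-city distance matrix is invented for illustration; the n²·2ⁿ figure above refers to the faster Held–Karp dynamic-programming algorithm, but the verification difficulty is the same.)

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Exhaustive search over all (n-1)! tours from a fixed start city.
    Returns the length of the shortest round trip and the tour itself."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for rest in permutations(range(1, n)):      # fix city 0 as the start
        tour = (0,) + rest
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Toy symmetric distance matrix for four hypothetical cities (miles).
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
print(brute_force_tsp(dist))   # (80, (0, 1, 3, 2))
```

Four cities means only six tours to check; thirty cities means roughly 8.8 × 10³⁰, which is why exhaustive search becomes hopeless so quickly.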
Just as the Traveling Salesman Problem is intractable for machines, wicked problems are intractable for humans. Not only is the problem complexity of wicked problems very high, but so is the complexity of evaluating their solutions. For such problems, broadly assessing improvement can be just as vexing as engineering a solution in the first place. However, there is another version of the Traveling Salesman Problem for which solution verification is more tractable. Instead of asking for an optimal path, this version asks a slightly different question: “Is there a path shorter than x that visits each of these cities once?” This version of the problem does not materially change its difficulty classification, but it does make a proposed solution much easier to evaluate. Indeed, verifying the solution involves simply adding up the distances along the provided path and comparing the result to the mileage budget (x). In this case, if the budget of allowed miles is 300, then we just need to verify that the provided itinerary is within budget – that is, ensure that the proposed path visits each city exactly once and covers fewer than 300 miles. In computational complexity theory, problems whose solutions can be verified quickly (i.e., in polynomial time) belong to the class “NP”, and the hardest of these, such as this decision version of the Traveling Salesman Problem, are considered “NP-complete”. Prime factorization shares this easy-to-verify property (it lies in NP, though it is not known to be NP-complete). Today’s computers take a very long time to factor large numbers, and that latency is what makes common encryption methods secure. However, once a number has been decomposed into its prime factors, it is easy to verify the solution by simply multiplying the factors together and comparing the result to the original number.
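The contrast with the optimization version is stark: a verifier for the budgeted question runs in time linear in the number of cities. A minimal sketch, reusing an invented four-city distance matrix:

```python
def verify_tour(dist, tour, budget):
    """Polynomial-time verifier for the decision version of the
    Traveling Salesman Problem: does `tour` visit every city exactly
    once, and is its total round-trip length within `budget`?"""
    n = len(dist)
    if sorted(tour) != list(range(n)):          # each city exactly once
        return False
    total = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return total <= budget                      # within the mileage budget

# Toy symmetric distance matrix for four hypothetical cities (miles).
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
print(verify_tour(dist, [0, 1, 3, 2], budget=80))   # True:  10+25+30+15 = 80
print(verify_tour(dist, [0, 1, 2, 3], budget=80))   # False: 10+35+30+20 = 95
```

Note that the verifier tells us nothing about whether a shorter tour exists; it only confirms that this one is good enough – exactly the reframing proposed below for wicked problems.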
Perhaps a first and more attainable step to solving wicked problems would be to reframe the problem in a manner analogous to the modified version of the Traveling Salesman Problem described above. Instead of seeking an optimal solution to a wicked societal problem, perhaps we should seek a solution that simply meets the yes/no criterion of “would this be an improvement over what we have now”. But how do we broadly measure such improvement? Could human computation be used to enable tractable solution verification for wicked problems? Enter the “Popsicle Index”. Catherine Austin Fitts has proposed a very simple metric for a community’s well-being based on the percent of community members who “believe that a child can leave their home, go to the nearest place to buy a popsicle or other snack, and return home alone safely”.
This index is remarkable for several reasons. First, it cleverly captures a multitude of individual effects, confluent effects, and n-order effects. For example, the child’s safety could be related to traffic risks, a bad element hanging around the store, or even the perceived risk of chemicals in the purchased snack. I would go so far as to call it a “wicked metric”, because trying to understand the full set of factors and interactions that govern the Popsicle Index might itself constitute a wicked problem. Second, the Popsicle Index is an HC approach to solution assessment that involves crowdsourcing and aggregating subjective human assessments. Though such assessments might be of sufficient complexity to be considered intractable for machines, due to differences in computational strategies between humans and machines, such problems are often strong candidates for HC solutions. Third, and perhaps most germane to this discussion, the Popsicle Index, as a straightforward window into community well-being, could facilitate solutions to wicked societal problems by reclassifying those problems from intractable to NP-complete. Just as with the NP-complete version of the Traveling Salesman Problem, we may not have a quick way to know whether we have an optimal solution, but we might at least have a sense of whether we are making things better or worse. Indeed, the prospect of such comprehensive measures of improvement may bring us one step closer to addressing societal challenges.
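Computationally, the index itself is just an aggregation of subjective yes/no assessments into a single percentage – a minimal sketch (the function name and survey data are invented for illustration):

```python
def popsicle_index(responses):
    """Fitts's Popsicle Index as a simple aggregate: the percentage of
    surveyed community members who answered "yes" (True) to the
    question of whether a child could safely make a round trip to the
    nearest snack stand alone."""
    if not responses:
        raise ValueError("need at least one survey response")
    return 100.0 * sum(responses) / len(responses)

# Hypothetical survey of ten residents: seven answer "yes".
survey = [True] * 7 + [False] * 3
print(popsicle_index(survey))   # 70.0
```

The aggregation is trivial; the hard computation – weighing traffic, crime, and n-order risks into a single gut answer – is done inside each respondent’s head, which is precisely what makes this an HC system.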
What other “wicked metrics” might we explore?
– Pietro Michelucci
Acknowledgments: I am grateful to Christina Engelbart for bringing the Popsicle Index to my attention and to Jordan Crouser for corrections and feedback that materially improved this article.
H. Rittel and M. Webber, “Dilemmas in a General Theory of Planning,” Policy Sci., vol. 4, pp. 155–169, 1973.
“Wicked problem,” Wikipedia, the free encyclopedia, 08-Jan-2015.
C. A. Fitts, “The Popsicle Index – who makes it go up? who makes it go down?” Solari Blog.
Abstract (the full paper is available here as a PDF)
Humans are the most effective integrators and producers of information, directly and through the use of information-processing inventions. As these inventions become increasingly sophisticated, the substantive role of humans in processing information will tend toward capabilities that derive from our most complex cognitive processes, e.g., abstraction, creativity, and applied world knowledge. Through the advancement of human computation – methods that leverage the respective strengths of humans and machines in distributed information-processing systems – formerly discrete processes will combine synergistically into increasingly integrated and complex information processing systems. These new, collective systems will exhibit an unprecedented degree of predictive accuracy in modeling physical and techno-social processes, and may ultimately coalesce into a single unified predictive organism, with the capacity to address society’s most wicked problems and achieve planetary homeostasis.
What is a wicked problem?
“Wicked problems” are intractable societal problems (e.g., climate change, pandemic disease, geopolitical conflict, etc.), the solutions of which exceed the reach of individual human cognitive abilities. They are often multifaceted, involving multiple systems such that a solution that benefits one system (e.g., Earth’s ecosystem) may harm another (e.g., the global economy). Furthermore, viable solutions tend to be dynamic, adaptive, and ongoing.
Human computation to the rescue
Human computation (HC), which encompasses methods such as crowdsourcing, citizen science, and distributed knowledge collection, offers new promise for addressing wicked problems by enabling participatory sensing, group intelligence, and collective action at unprecedented scales. Ironically, many of the wicked problems we face today have resulted from unintended manifestations of collective behavior (e.g., car pollution). Thus, we now seek to harness and improve this crowd-powered capability as a fitting remedy.
A good idea is a seed, not a solution
Despite such promise, transformative new ideas are not enough to solve real-world problems. Indeed, the Human Computation Institute employs a practice of four-way teams with end-to-end partnerships to ensure that initiatives have a high likelihood of success in terms of quantifiable and meaningful societal benefit.
Candidate projects often begin with an articulated need and prospective HC approach. However, because HC, like Computer Science, is an enabling technology, developing new methods and associated capabilities often requires domain expertise (e.g., Genomics) in the problem space. Furthermore, the breadth of HC methods and approaches often requires input from contributing disciplines to address, for example, participatory approaches, workflow architecture, and complexity analysis. The marriage of domain experts with HC specialists thus ensures a core research team with relevant competencies and technical perspective.
Broad community reach
The Human Computation Institute, with its broad network of advisors, affiliates, and partner organizations, is both a community leader and an agora for HC researchers. As orchestrator of a 117-author collaboration leading to the field’s most extensive reference text, public driver of a national HC research agenda, and publisher of the only dedicated scholarly journal for Human Computation, the Institute has broad access across domains to the world’s leading HC researchers and relevant domain specialists. This broad participation base ensures a high degree of selectivity and specificity in forming research teams suited both to the problem space and to the technical approach.
Mission-aligned funding partners
Partnership is a core tenet of the Human Computation Institute, and never more so than in building effective and enduring relationships with mission-aligned funding partners. Though it may seem like a philosophical nuance, this partnering perspective is critical to outcome-based successes. Funding entities live and breathe in the space of purpose, often reflecting a careful assessment of societal needs and relevant solution gaps. Such an awareness can critically inform development and implementation approaches. Thus, embracing funding organizations and their program directors, not only as essential financial resources, but as key strategic partners helps keep solutions aligned with zones of maximal impact.
A capability becomes a solution only when it has been effectively implemented in the real world to address a specific need. Implementers are applied domain experts, such as humanitarian workers coordinating relief efforts, or first-order stakeholders, such as medical researchers analyzing huge datasets. Ultimately, these implementation partners are needed to deploy and administer new HC capabilities in service of societal betterment. Because of their extensive practical knowledge, they are also essential members of a solution team during concept development, to inform interaction design and reconcile unanticipated differences between theoretical expectations and real-world dynamics. Early involvement from an implementer can reduce the number of iterations to prototype maturity and accelerate deployment readiness by building relevant competencies in parallel with the associated HC technology.
Four cornerstones for success
Any new initiative at the HC Institute requires a human computation specialist, domain expert, funding partner, and implementer. Anything less than a collaboration formed from these four cornerstones is by our definition not a solution team. As an innovation center, the HC Institute encourages creative approaches to unsolved problems, which may involve speculative methods. However, the four cornerstone approach reduces other R&D risks to maximize the likelihood of high-impact beneficial and enduring outcomes.
Fundamental research along the way
Fundamental research is critical to the advancement of Human Computation and, indeed, consistent with the mission of the Institute. Theoretical advancements reduce application development time and improve the validity and repeatability of HC methods. Because the field is still young, solution development often necessitates the new development or extension of HC methods, which requires associated empirical studies. The Institute holds that any such findings should be shared broadly with the HC research community for the benefit of the field, which in turn supports more effective HC solutions.
Rigor, passion, and duty
A solution team is itself a human computation system, and as consumers of our own findings, we recognize that key predictors of collaborative success in participatory systems are goal and value alignment. While the Institute maintains an ethos of rigorous empirical doctrine and the highest scholarly standards, the Institute’s partnering activities are driven by individual passion – passion in each solution team member both for the target societal outcome and for their individual contributions to that outcome. After all, a success for the solution team is a win for Team Humanity.
Founding Director, Human Computation Institute
H. Rittel and M. Webber, “Dilemmas in a General Theory of Planning,” Policy Sci., vol. 4, pp. 155–169, 1973.
HC Institute director, Pietro Michelucci, led a multidisciplinary group of world experts in the emerging field of “human computation” in Washington, DC last week to consider the unprecedented capabilities that might arise from crowd-powered systems and to map out the research needed to get there.
The workshop itself employed human computation techniques in service of its own goals, such as participatory gaming, workflow execution, group composition, and interaction mechanics. Through these methods, workshop participants explored human motivation in participatory systems and worst-case scenarios, mapped out high-impact success cases, and iteratively developed new human computation solutions to societal problems to help identify research gaps and inform related national policies.
The three-day workshop, held at the Woodrow Wilson International Center for Scholars, was proposed and co-organized by the Human Computation Institute, Cornell University, and the Wilson Center, and funded by the Computing Community Consortium (CCC). A workshop report, under development by the co-organizers and community of participants, is expected to be published by the CCC in early 2015.
*** UPDATE: workshop report now available here ***
Event coverage links: