We describe the design of human-machine conversation experiments that support the evaluation of our context-aware approach to coalition decision making at or near the network edge. The studies, named SHERLOCK (Simple Human Experiments Regarding Locally Observed Collective Knowledge), involve humans participating in simple intelligence, surveillance, and reconnaissance (ISR) tasks, either in physical environments using mobile devices in situ, or online. Experimental tasks are undertaken by multiple participants operating as a coalition, with collaboration mediated by machine agents. We illustrate the SHERLOCK approach using two specific experiment designs. In each, the participants' task is to locate a number of target individuals and identify their specific features. Experiment 1 is a crowdsourced “whodunit” scenario involving intelligence in synthetic, natural, and hidden situations. All human input is via a conversational agent, which mediates information sharing between participants. Experiment 2 is an ISR asset-assignment scenario using simulated sensing assets selected by a human, by an algorithm, or by a combination of the two. Participants' success in both experiments depends on their ability to make effective use of the conversational system: providing interpretable information to the conversational agent, and obtaining information via the conversational interface to assist them.