Sounds created by a periodic process have a Fourier representation with harmonic structure, i.e., components at multiples of a fundamental frequency. Harmonic frequency relations are a prominent feature of speech and many other natural sounds. Harmonicity is closely related to the perception of pitch and is believed to provide an important acoustic grouping cue underlying sound segregation. Here we introduce a method to manipulate the harmonicity of otherwise natural-sounding speech tokens, providing stimuli with which to study the role of harmonicity in speech perception. Our algorithm uses elements of the STRAIGHT framework for speech manipulation and synthesis, in which a recorded speech utterance is decomposed into voiced and unvoiced vocal excitation and vocal tract filtering. Unlike the conventional STRAIGHT method, we model voiced excitation as a combination of time-varying sinusoids. By individually modifying the frequency of each sinusoid, we introduce inharmonic excitation without changing other aspects of the speech signal. The resulting signal remains highly intelligible, and can be used to assess the role of harmonicity in the perception of prosody or in the segregation of speech from mixtures of talkers.
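The core idea of the excitation model can be illustrated with a minimal sketch: voiced excitation is a sum of sinusoids, and inharmonicity is introduced by perturbing each component's frequency away from its harmonic value. The uniform random jitter used here is a hypothetical scheme for illustration; the paper's actual frequency-modification rule, and its time-varying amplitudes and frequencies, are not reproduced.

```python
import numpy as np

def excitation(f0, jitter, dur=0.5, sr=16000, n_harmonics=10, seed=0):
    """Sum of sinusoids at (possibly perturbed) multiples of f0.

    jitter=0 yields harmonic excitation; jitter>0 shifts the k-th
    component from k*f0 by a random offset drawn uniformly from
    [-jitter*f0, +jitter*f0] (illustrative jitter scheme only).
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * sr)) / sr
    sig = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        fk = k * f0 + rng.uniform(-jitter, jitter) * f0
        sig += np.cos(2 * np.pi * fk * t)
    return sig / n_harmonics  # normalize peak amplitude

harmonic = excitation(200.0, jitter=0.0)    # components at 200, 400, ... Hz
inharmonic = excitation(200.0, jitter=0.3)  # components displaced from multiples of 200 Hz
```

In the full method, this excitation would then be passed through the estimated vocal tract filter, leaving the spectral envelope (and hence intelligibility) largely intact while destroying the harmonic frequency relations.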