The necessity for accurate speech recognition systems capable of handling adverse environments with multiple speakers has in recent years fueled speech separation and enhancement research [1, 5, 3, 2, 6, 4]. This has resulted in numerous techniques with varying degrees of success, most of which employ multiple microphones [1, 3, 2, 6, 4]. Beamforming techniques, for example, utilize knowledge about the direction of the speech source of interest in order to reduce noise from other directions. The resulting SNR gain is significant as long as a large number of microphones is available [6, 4]. Independent Component Analysis (ICA), on the other hand, is capable of producing large SNR gains with few microphones [1, 3]. However, ICA has several limitations that have hampered its application in real-world situations [1, 6].

Interestingly, both of these popular techniques (as well as many other speech separation techniques) are similar in the sense that they are not specifically designed to work with speech signals. Speech has certain characteristics whose utilization can provide a significant edge in the de-noising and signal separation tasks [2, 5].

In this paper, we extend the phase-error filtering technique initially proposed in [2] to include the magnitudes of the two microphone signals in addition to the phase information. This technique transforms two noisy time-domain signals recorded by two microphones into their time-frequency (TF) representations. For each time-frequency component, or block, a phase-error measure is derived from the information in both microphones. Based on this measure, the time-frequency block for each microphone is scaled by a masking value between zero and one. In essence, TF blocks with large phase errors are 'punished' by a small mask value (near 0) and TF blocks with small phase errors are 'rewarded' by a large mask value (near 1).

In the following sections, we formulate four different TF masks and analyze them theoretically, through SNR-gain simulations, and through digit recognition experiments.
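
To make the masking procedure described above concrete, the sketch below applies a simple phase-error-driven TF mask to two microphone signals. It assumes a target source with a known inter-microphone delay tau (in seconds), uses SciPy's STFT/ISTFT for the TF analysis and resynthesis, and substitutes a hypothetical Gaussian-shaped mask with steepness gamma; it illustrates only the general idea, not any of the four masks formulated later in the paper.

    import numpy as np
    from scipy.signal import stft, istft

    def phase_error_mask(x1, x2, fs, tau=0.0, gamma=1.0, nperseg=512):
        # Time-frequency representations of the two microphone signals.
        f, t, X1 = stft(x1, fs=fs, nperseg=nperseg)
        _, _, X2 = stft(x2, fs=fs, nperseg=nperseg)

        # Phase error per TF block: deviation of the observed inter-channel
        # phase difference from the phase difference expected for a target
        # source arriving with delay tau (seconds) between the microphones.
        expected_phase = 2.0 * np.pi * f[:, None] * tau
        phase_err = np.angle(X1 * np.conj(X2) * np.exp(-1j * expected_phase))

        # Hypothetical soft mask in [0, 1]: small phase error -> value near 1
        # ('rewarded'), large phase error -> value near 0 ('punished').
        mask = np.exp(-gamma * phase_err ** 2)

        # Scale each microphone's TF blocks by the mask and resynthesize.
        _, y1 = istft(mask * X1, fs=fs, nperseg=nperseg)
        _, y2 = istft(mask * X2, fs=fs, nperseg=nperseg)
        return y1, y2

The specific shape of the mask as a function of the phase error is the design choice studied in the remainder of the paper; the Gaussian form above is only a placeholder for that mapping.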