We explore electroencephalography (EEG), electrodermal activity (EDA), and electrocardiography (ECG) as valid sources for inferring humor appraisal in a realistic environment. We report on an experiment in which 25 participants browsed a popular user-generated humorous content website while their physiological responses were recorded. We build predictive models to infer the participants’ appraisal of the humorousness of the content and demonstrate that the fusion of several physiological signals can lead to a classification performance of up to 0.73 in terms of the area under the ROC curve (AUC). We find that the most discriminative changes in the physiological signals occur at the later stages of the information consumption process, reflected in changes in the upper EEG frequency bands, higher levels of EDA, and heart-rate acceleration. Additionally, we present a comprehensive analysis by benchmarking the predictive power of each physiological signal separately and by comparing them to state-of-the-art facial recognition algorithms applied to video recordings of the participants’ faces. The classification performance ranges from 0.88 AUC, when combining physiological signals and video recordings, to 0.55 AUC when using ECG signals alone.