Abstract
A feature extraction unit (22) extracts feature indices (Xn) representing acoustic features of a singing voice (V). An impression identification unit (24) calculates an impression index (Ym) of the singing voice (V) by applying the feature indices (Xn) extracted by the feature extraction unit (22) to a relational expression (Fm). The relational expression (Fm) is set using multiple items of reference data (r), each of which associates an impression index (ym) indicating an auditory impression of a reference sound with feature indices (xn) indicating acoustic features of that reference sound, and using relationship descriptor data (DC) specifying correspondence relationships between auditory impressions and multiple acoustic features; for the correspondence relationships specified by the relationship descriptor data (DC), the relational expression (Fm) expresses the relationship between the impression index (Ym) of an auditory impression and the feature indices (Xn) of the multiple acoustic features. An information generation unit (32) generates presentation data (Q) corresponding to the impression index (Ym) identified by the impression identification unit (24). A presentation processing unit (26) presents the presentation data (Q) generated by the information generation unit (32) to the user.
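The abstract does not fix the form of the relational expression (Fm). As a minimal sketch only, assuming Fm is a linear model fitted by least squares on the reference data (r), restricted to the acoustic features that the relationship descriptor data (DC) associates with the auditory impression, the impression identification step could look like this (all function names are illustrative, not from the source):

```python
import numpy as np

def fit_relational_expression(x_ref, y_ref, dc_mask):
    """Set a relational expression Fm from reference data r.

    x_ref:   (R, N) array, feature indices xn of R reference sounds
    y_ref:   (R,) array, impression indices ym of the same sounds
    dc_mask: (N,) boolean array, features that the relationship
             descriptor data DC ties to this auditory impression
    Returns the fitted coefficients (selected features plus a bias).
    """
    X = x_ref[:, dc_mask]
    X = np.column_stack([X, np.ones(len(X))])  # append bias column
    coef, *_ = np.linalg.lstsq(X, y_ref, rcond=None)
    return coef

def identify_impression(coef, x, dc_mask):
    """Apply Fm to the feature indices Xn of a singing voice V."""
    x_sel = np.append(x[dc_mask], 1.0)  # same selection and bias as fitting
    return float(x_sel @ coef)
```

In practice Fm could equally be a nonlinear regression or a trained neural model; the point of the sketch is only the data flow: reference pairs (xn, ym) plus the DC feature selection determine Fm, and Fm maps extracted features Xn to the impression index Ym.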