A User-Centered Design of a Personal Digital Library for Music Exploration (Full Paper)
David Bainbridge, Brook Novak and Sally Jo Cunningham
Abstract. We describe the evaluation of a system to help musicians capture, enrich and archive their ideas using a spatial hypermedia paradigm. The target user group is musicians who compose and arrange primarily with audio and text, rather than with formal music notation. Following the principle of user-centered design, the software implementation was guided by a diary study involving nine musicians, which suggested five requirements for the software to support: capturing, overdubbing, developing, archiving, and organizing. Moreover, the underlying spatial data model was exploited to give raw audio compositions a hierarchical structure, and, to aid musicians in retrieving previous ideas, a search facility is available that supports both query by humming and text-based queries. A user evaluation of the completed design with eleven subjects indicated that musicians, in general, would find the hypermedia environment useful for capturing and managing their moments of musical creativity and exploration. More specifically, they would make use of the query-by-humming facility and the hierarchical track organization, but not the overdubbing facility as implemented.

Improving Mood Classification in Music Digital Libraries by Combining Lyrics and Audio (Full Paper)
Xiao Hu and J. Stephen Downie
Abstract. Mood is an emerging metadata type and access point in music digital libraries (MDL) and online music repositories. In this study, we present a comprehensive investigation of the usefulness of lyrics in music mood classification by evaluating and comparing a wide range of lyric text features, including linguistic and text stylistic features. We then combine the best lyric features with features extracted from music audio using two fusion methods. The results show that combining lyrics and audio significantly outperformed systems using audio-only features. In addition, the examination of learning curves shows that the hybrid lyric + audio system needed fewer training samples to achieve the same or better classification accuracies than systems using lyrics or audio alone. These experiments were conducted on a unique large-scale dataset of 5,296 songs (with both audio and lyrics for each) representing 18 mood categories derived from social tags. The findings push forward the state of the art in lyric sentiment analysis and automatic music mood classification and will help make mood a practical access point in music digital libraries.

Visualizing Personal Digital Collections (Short Paper)
Maria Esteva, Weijia Xu and Suyog Dutt Jain
Abstract. This paper describes the use of an RDBMS and treemap visualization to represent and analyze a group of personal digital collections created in the context of work and with no external metadata. We evaluated the visualization vis-à-vis the results of previous personal information management (PIM) studies. We suggest that this visualization affords analysis and understanding of how people organize and maintain their personal information over time.

Interpretation of Web Page Layouts by Blind Users (Short Paper)
Luis Francisco-Revilla and Jeff Crow
Abstract. Digital libraries must support assistive technologies that allow people with disabilities such as blindness to use, navigate and understand their documents. Increasingly, many documents are Web-based and present their contents using complex layouts. However, approaches that translate two-dimensional layouts to one-dimensional speech produce a very different user experience and a loss of information. To address this issue, we conducted a study of how blind people navigate and interpret the layouts of news and shopping Web pages using current assistive technology. The study revealed that blind people do not parse Web pages fully during their first visit, and that they often miss important parts. The study also provided useful insights for improving assistive technologies.