Image understanding from web media resource

dc.contributor.advisor: Tian, Qi
dc.contributor.author: Xiao, Jie
dc.contributor.committeeMember: Robbins, Kay
dc.contributor.committeeMember: Zhang, Weining
dc.contributor.committeeMember: Bylander, Tom
dc.contributor.committeeMember: Huang, Yufei
dc.date.accessioned: 2024-03-08T17:34:12Z
dc.date.available: 2024-03-08T17:34:12Z
dc.date.issued: 2013
dc.description: This item is available only to currently enrolled UTSA students, faculty, or staff. To download, navigate to Log In in the top right-hand corner of this screen, then select Log in with my UTSA ID.
dc.description.abstract: Image understanding has drawn extensive attention from both academia and industry. It has become a cornerstone for numerous image-related real-world applications in the era of billions of online images. Image understanding can be addressed from two aspects: given a topic, how to yield a group of images relevant to its semantic meaning; and given an image, how to generate a group of labels relevant to its visual content. In this dissertation, I present two techniques that handle these two aspects by leveraging web media resources: web images from image search engines and social images from online social networks. We further employ Wikipedia, the wisdom of the crowd, as an auxiliary online resource to narrow the semantic gap between high-level semantic meaning and low-level image feature representation. To solve the first problem of yielding relevant images for a given topic, I build a one-class classifier over the massive set of negative images from image search engines, predict the irrelevant images, and re-rank the images by visual relevance. To solve the second problem of yielding relevant labels for a given image, I exploit images from social networks, which provide richer label information. I introduce a new concept, the "visual group," and embed it into a framework to study tag-tag relevance and tag-image relevance. The experiments demonstrate the effectiveness and efficiency of our approach compared with state-of-the-art tag ranking algorithms. For images from social networks, we also conduct research on the metadata, focusing on the images' tags and geographical information. To fully explore the geographical information, we narrow our focus to images related to large cities. We employ Wikipedia knowledge and the images' geographical information to present the images that best describe a given sub-topic under a Wikipedia entry, and we build a prototype system. My results on image understanding tasks can be directly applied in numerous real-world applications, including image search re-ranking, social network image tag re-ranking, ranking social network images by tags, and image recommendation via geographical and Wikipedia knowledge.
dc.description.department: Computer Science
dc.format.extent: 110 pages
dc.format.mimetype: application/pdf
dc.identifier.isbn: 9781303393051
dc.identifier.uri: https://hdl.handle.net/20.500.12588/6029
dc.language: en
dc.subject: image ranking
dc.subject: image understanding
dc.subject: social network
dc.subject: tag ranking
dc.subject.classification: Computer science
dc.title: Image understanding from web media resource
dc.type: Thesis
dc.type.dcmi: Text
dcterms.accessRights: pq_closed
thesis.degree.department: Computer Science
thesis.degree.grantor: University of Texas at San Antonio
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy

Files

Original bundle

Name: Xiao_utsa_1283D_11121.pdf
Size: 37.1 MB
Format: Adobe Portable Document Format