
                  • Call for Participation    • Keynote Speakers    • Schedule of Program    • Abstracts of Presentations for the Practitioner's Day

                  Call for Participation


                  The ACM International Conference on Image and Video Retrieval (CIVR 2010) will be held from July 5 to 7, 2010, in Xi'an, China. CIVR 2010 will bring together top researchers from around the world to exchange research results and address open issues in all aspects of image and video retrieval. All researchers interested in this topic are welcome to register and participate in the conference. You may also take this opportunity to visit the Shanghai Expo.

                  The conference will follow the CIVR tradition of single-track sessions. Two keynote speeches and one industrial presentation will be hosted. The conference also includes five oral sessions, one poster session, and two special sessions. Practitioner activities are extremely important, and we have selected chairs who have close connections with industry and experience in organizing such activities. There is also a best paper competition; the best paper recipients will be honored at the banquet dinner.

                  Social events for CIVR 2010 will include a welcome reception and a banquet dinner. One of these events will take place at the Shaanxi Grand Opera House (Tang Show and Dumpling Banquet); see further information below.


                  Keynote Speakers

                  Prof. Chua Tat-Seng, National University of Singapore  >>
                           Chua Tat-Seng is the KITHC Chair Professor at the School of Computing, National University of Singapore (NUS). He was the Acting and Founding Dean of the School of Computing during 1998-2000. He joined NUS in 1983, and spent three years in the late 1980s as a research staff member at the Institute of Systems Science (now I2R). Dr Chua's main research interest is multimedia information retrieval, in particular the analysis, retrieval, and question-answering (QA) of text and image/video information. He is currently working on several multi-million-dollar projects: interactive media search, local contextual search, and real-time live media search. His group participates regularly in the TREC-QA and TRECVID video retrieval evaluations. Dr Chua has organized and served as a program committee member of numerous international conferences in the areas of computer graphics, multimedia, and text processing. He was conference co-chair of ACM Multimedia 2005, CIVR (Conference on Image and Video Retrieval) 2005, and ACM SIGIR 2008. He serves on the editorial boards of ACM Transactions on Information Systems (ACM), Foundations and Trends in Information Retrieval (NOW), The Visual Computer (Springer Verlag), and Multimedia Tools and Applications (Kluwer). He is a member of the steering committees of the CIVR, Computer Graphics International, and Multimedia Modeling conference series, and a member of the international review panels of two large-scale research projects in Europe.


                  Keynote Speech Title:
                  Towards Web-Scale Media Content Analysis and Retrieval - What has University Research Contributed to Commercial Systems and Social Network Services

                  Speaker: Chua, Tat-Seng, School of Computing, NUS

                  Synopsis: With the exponential growth of media content on the Web, the ability to search for media entities based not just on text annotations but also on visual content has become important. Commercial search engines such as Bing and Google image search now offer search services, albeit limited, based on both text and visual content. As commercial-scale search services require handling millions of media entities within interactive time, with visibly improved performance beyond what can be done with annotated text, are research and lab technologies ready for such offerings? Have years of media content analysis research made any important contributions towards such services, and what should we focus on next to make a greater impact?
                  This talk identifies three research directions critical to the success of Web-scale media search, namely visual concept annotation, indexing, and interactive search strategies. It also describes potential contributions and synergy between advanced media research and commercial offerings, and discusses future directions.


                  Prof. Kiyoharu Aizawa, The University of Tokyo  >>

                          Kiyoharu Aizawa received the B.E., M.E., and Dr.Eng. degrees in Electrical Engineering, all from the University of Tokyo, in 1983, 1985, and 1988, respectively. He is currently a Professor at the Department of Information and Communication Engineering and the Interfaculty Initiative in Information Studies of the University of Tokyo. He was a Visiting Assistant Professor at the University of Illinois from 1990 to 1992. His research interests are in image processing and multimedia, and he is currently engaged in multimedia life log and three-dimensional video. He received the 1987 Young Engineer Award, the 1990 and 1998 Best Paper Awards, the 1991 Achievement Award, and the 1999 Electronics Society Award from IEICE Japan, as well as the 1998 Fujio Frontier Award and the 2002 and 2009 Best Paper Awards from ITE Japan. He received the IBM Japan Science Prize in 2002. He is currently the Editor-in-Chief of the Journal of ITE Japan, an Associate Editor of IEEE Transactions on Image Processing, and on the editorial boards of ACM TOMCCAP and the Journal of Visual Communications and Image Processing. He has also served as an Associate Editor of IEEE Transactions on CSVT and IEEE Transactions on Multimedia. He has served at a number of international and domestic conferences: he was General Co-Chair of MMM2008 and SPIE VCIP99, Program Co-Chair of ACM CIVR2008, and Short Paper Track Chair of ACM Multimedia 2005, among others. He is a member of IEEE, ACM, IEICE, and ITE.

                  Keynote Speech Title:
                  Life Log: Where Are We Now, and Where Can We Go?

                  Speaker: Kiyoharu Aizawa, University of Tokyo, Interfaculty Initiative in Information Studies and Department of Information and Communication Engineering

                  Abstract: Capturing our daily activities by electronic means leads to digitizing and archiving personal experiences. Making use of such "life log" data enables us to notice information that we usually tend to miss or forget in our daily life. Recently, life logs have been attracting increasing attention, and quite a few life-log-related services are appearing. In this talk, the current status of life log technology is surveyed, and the projects we have been investigating so far are introduced. A perspective on life log technology and applications is also offered.

                  Keynote Speaker for the Practitioner's Day

                  Dr. Alejandro Jaimes, Yahoo! Research in Barcelona  >>


                          Alejandro (Alex) Jaimes is a Senior Research Scientist at Yahoo! Research, where he is leading new initiatives at the intersection of web-scale data analysis and user understanding (user engagement and improving user experience). Dr. Jaimes is the founder of the ACM Multimedia Interactive Art program, Industry Track chair for ACM RecSys 2010 and UMAP 2009, and panels chair for KDD 2009. He was program co-chair of ACM Multimedia 2008, co-editor of the IEEE Transactions on Multimedia special issue on Integration of Context and Content for Multimedia Management (2008), and a founding member of the IEEE CS Task Force on Human-Centered Computing. His work has led to over 60 technical publications in international conferences and journals, and to numerous contributions to MPEG-7. He has been granted several patents, and serves on the program committees of several international conferences. He has been an invited speaker at Practitioner Web Analytics 2010, ECML-PKDD 2010 and KDD 2009 (Industry tracks), ACM Recommender Systems 2008 (panel), DAGM 2008 (keynote), the 2007 ICCV Workshop on HCI, and several others.

                          Before joining Yahoo!, Dr. Jaimes was a visiting professor at U. Carlos III in Madrid, and founded and managed the User Modeling and Data Mining group at Telefónica Research. Prior to that he was Scientific Manager at IDIAP-EPFL (Switzerland), and was previously at Fuji Xerox (Japan), IBM TJ Watson (USA), IBM Tokyo Research Laboratory (Japan), Siemens Corporate Research (USA), and AT&T Bell Laboratories (USA). Dr. Jaimes received a Ph.D. in Electrical Engineering (2003) and an M.S. in Computer Science (1997) from Columbia University in NYC.

                  Keynote Speech Title: What can billions of queries tell us about image search? A Human-Centered perspective
                  Speaker: Alejandro Jaimes, Yahoo! Research in Barcelona

                  Abstract: In recent years, significant progress has been made in developing techniques to automatically index images using both content and related text. In spite of this, there is generally little understanding of what people do when they interact with web-scale image search engines. The main assumption is that people search, but for the most part, what images they search for and why remain largely unknown. Large-scale query logs provide a very sparse picture of users' actions, but they can be a valuable resource for gaining insight into what people are doing, how they are doing it, and why. In this presentation I will discuss strategies for query-log analysis and present results from analyzing a very large set of image query logs from a web-scale search engine. I will explain why a Human-Centered approach is required in analyzing and interpreting the data, giving examples of user search strategies and highlighting the implications for algorithm and user interface design. Finally, I will discuss future directions and challenges based on what we can observe from real user actions, and describe how integrating multiple sources of data (e.g., demographics, context) can help fill in the gaps and build a better understanding of users.


                  Schedule of Program


                  JULY 4, 2010
                  Venue:
                  the lobby of Tang Cheng Hotel

                  09:00 - 22:00   Registration
                  Conference Service at Room 405 (Phone: 8405) of Tang Cheng Hotel


                  JULY 5, 2010
                  Venue:
                  Hua E Gong, the second floor of Tang Cheng Hotel

                  08:45 - 09:00   Opening

                  09:00 - 10:00   Keynote Speech
                  Session Chair:
                  Qi Tian, University of Texas at San Antonio
                  Title:
                  Towards Web-Scale Media Content Analysis and Retrieval - What has University Research Contributed to Commercial Systems and Social Network Services

                  Speaker:
                  Tat-Seng Chua, NUS, Singapore

                  10:00 - 10:30   Coffee Break

                  10:30 - 12:00   Best Paper Candidates Session (3 Papers)
                  Session Chair: Selcuk Candan, Arizona State University, USA

                      (1). An Application of Compressive Sensing for Image Fusion
                             Tao Wan, Zengchang Qin, University of Bristol
                      (2). Unsupervised Multi-Feature Tag Relevance Learning for Social Image Retrieval
                             Xirong Li, Cees Snoek, Marcel Worring, University of Amsterdam
                      (3). Today's and Tomorrow's Retrieval Practice in the Audiovisual Archive
                             Bouke Huurnink, Cees Snoek, Maarten De Rijke, Arnold Smeulders, University of Amsterdam

                  12:00 - 13:15   Lunch
                  Venue: Western Restaurant, the second floor of Tang Cheng Hotel

                  13:15 - 14:15   Oral Session: Social Media and User Tags (I) (3 Papers)
                  Session chair:
                  Tat-Seng Chua, NUS, Singapore

                      (1). Non-parametric Kernel Ranking Approach for Social Image Retrieval
                             Jinfeng Zhuang, Steven Hoi, School of Computer Engineering, Nanyang Technological University, Singapore
                      (2). Co-reranking by Mutual Reinforcement for Image Search
                             Ting Yao, Tao Mei, Chong-Wah Ngo, University of Science and Technology of China
                      (3). Learning to Rank Tags
                             Zheng Wang, Jiashi Feng, Changshui Zhang, Shuicheng Yan, Tsinghua University

                  14:15 - 14:30   Break

                  14:30 - 15:30   Oral Session: Social Media and User Tags (II) (3 Papers)
                  Session chair:
                  Alejandro Jaimes, Yahoo! Research, Spain

                      (1). On the Sampling of Web Images for Learning Visual Concept Classifiers
                             Shiai Zhu, Gang Wang, Chong-Wah Ngo, Yu-Gang Jiang, City University of Hong Kong
                      (2). The Accuracy and Value of Machine-Generated Image Tags: Design and User Evaluation of
                             an End-to-End Image Tagging System
                             Lexing Xie, Apostol Natsev, Matthew Hill, John Smith, Alex Phillips, IBM Watson Research Center, NY, USA
                      (3). Utilizing Related Samples to Learn Complex Queries in Interactive Concept-based Video Search
                            Jin Yuan, Zheng-jun Zha, Zhengdong Zhao, Xiangdong Zhou, Tat-Seng Chua, National University of Singapore

                  15:30 - 16:00   Coffee Break

                  16:00 - 17:20   Special session: Large-Scale Multimedia Mining (4 Papers)
                  Session Chair:
                  Hong Lu, Fudan University, China

                      (1). Exploring Large-Scale Data for Multimedia QA -- An Initial Study
                             Richang Hong, Guangda Li, Liqiang Nie, Jinhui Tang, Tat-Seng Chua, School of Computing, National University of Singapore
                      (2). Structured Max-Margin Learning for Multi-Label Image Annotation
                             Xiangyang Xue, Hangzai Luo, Jianping Fan, School of Computer Science, Fudan University
                      (3). Coherent Bag-of Audio Words Model for Efficient Large-Scale Video Copy Detection
                             Yang Liu, Wan-Lei Zhao, Chong-Wah Ngo, Chang-Sheng Xu, Han-Qing Lu, Institute of Automation, Chinese Academy of Sciences
                      (4). An Effective Method for Video Genre Classification
                             Jian-Feng Chen, Hong Lu, Renzhong Wei, Cheng Jin, Xiangyang Xue, School of Computer Science, Fudan University

                  18:00 - 20:00   Reception (including conferring the Best Paper Award and the promotion of ICMR2011)
                  Venue: Western Restaurant, the second floor of Tang Cheng Hotel


                  JULY 6, 2010
                  Venue:
                  Hua E Gong, the second floor of Tang Cheng Hotel

                  09:00 - 10:00   Keynote Speech
                  Session Chair:
                  Xinbo Gao, Xidian University, China
                  Title:  Life Log : Where Are We Now, and Where Can We Go?
                  Speaker:
                  Kiyoharu Aizawa, University of Tokyo, Japan

                  10:00 - 11:45   Coffee + Poster Session (37 Posters) 
                  (Poster Specification: Width*Height = 84cm*118.8cm or 33.11inches*46.82inches, i.e., A0 size)

                      (01). Multi-Label Learning by Image-to-Class Distance with Applications to Scene Classification and Image Annotation
                               Zhengxiang Wang, Yiqun Hu, Liang-Tien Chia, Nanyang Technological University, Singapore
                      (02). TF-Tree: An Interactive and Efficient Retrieval of Chinese Calligraphic Manuscript Images Based On Triple Features
                               Yi Zhuang, Zhejiang Gongshang University, China
                      (03). Scalable Clip-based Near-Duplicate Video Detection with Ordinal Measure
                               Sakrapee Paisitkriangkrai, Tao Mei, Jian Zhang, Xian-Sheng Hua, The University of New South Wales, Sydney, Australia
                      (04). The Effect of Baroque Music on the PassPoints Graphical Password
                               Haichang Gao, Zhongjie Ren, Xiuling Chang, Xiyang Liu, Uwe Aickelin, Xidian University
                      (05). Multiple-Instance Image Database Retrieval by Spatial Similarity Based on Interval Neighbor Group
                               John Y. Chiang, Shuenn-Ren Cheng, Yen-Ren Huang, National Sun Yat-sen University
                      (06). Consumer Image Retrieval by Estimating Relation Tree From Family Photo Collections
                               Tong Zhang, Hui Chao, Chris Willis, Dan Tretter, Hewlett-Packard Laboratories, USA
                      (07). Eigen-Space Learning Using Semi-supervised Diffusion Maps for Human Action Recognition
                               Feng Zheng, Ling Shao, Zhan Song, Shenzhen Institutes of Advanced Technology, CAS, Shenzhen, China
                      (08). A Multiobjective Immune Clustering Ensemble Technique Applied to Unsupervised SAR Image Segmentation
                               Ruochen Liu, Wei Zhang, Licheng Jiao , Fang Liu, Xidian University, China
                      (09). Dual-Ranking for Web Image Retrieval
                               Piji Li, Zhang Lei, Jun Ma, Shandong University, China
                      (10). Towards Multimodal Emotion Recognition: A New Approach
                               Marco Paleari, Benoit Huet, Ryad Chellali, TeleRobotics and Applications, Italian Institute of Technology, Genoa
                      (11). Asymmetric Semi-Supervised Boosting for SVM Active Learning in CBIR
                               Jun Wu, Zheng-Kui Lin, Ming-Yu Lu, Dalian Maritime University, China
                      (12). Interacting with Location-based Multimedia Using Sketches
                               Gamhewage de Silva, Kiyoharu Aizawa, The University of Tokyo, Japan
                      (13). Evaluating Detection of Near Duplicate Video Segments
                               Werner Bailer, Institute of Information Systems, JOANNEUM RESEARCH
                      (14). Latent Visual Context Analysis for Image Re-ranking
                               Wengang Zhou, Qi Tian, Linjun Yang, Houqiang Li, University of Science and Technology of China
                      (15). Music Video Affective Understanding Using Feature Importance Analysis
                               Yue Cui, Jesse Jin, Shiliang Zhang, Suhuai Luo, Qi Tian, University of Newcastle
                      (16). Weighting Visual Features with Pseudo Relevance Feedback for CBIR
                               Jian Chen, Rui Ma, Zhong Su, IBM Research, China
                      (17). MI-SIFT: Mirror and Inversion Invariant Generalization for SIFT Descriptor
                               Rui Ma, Jian Chen, Zhong Su, IBM Research, China
                      (18). A Descriptor Combining MHI and PCOG for Human Motion Classification
                               Ling Shao, Ling Ji, Department of Electronic and Electrical Engineering, University of Sheffield, UK.
                      (19). Image Retrieval using Markov Random Fields and Global Image Features
                               Ainhoa Llorente Coto, R. Manmatha, Stefan Rüger, Knowledge Media Institute, The Open University, Milton Keynes
                      (20). Mixture Model based Contextual Image Retrieval
                               Xing Xing, Yi Zhang, Bo Gong, School of Engineering, University of California Santa Cruz
                      (21). A Saliency Map Method with Cortex-like Mechanisms and Sparse Representation
                               Bing Han, Xinbo Gao, Vincent Walsh, Lili Tcheang, Xidian University, China
                      (22). Genre-specific Semantic Video Indexing
                               Jun Wu, Marcel Worring, University of Amsterdam
                      (23). Optimizing Visual Search with Implicit User Feedback in Interactive Video Retrieval
                               Stefanos Vrochidis, Ioannis Kompatsiaris, Ioannis Patras
                               Queen Mary University of London/Informatics and Telematics Institute Thermi, Greece

                      (24). Dayside Corona Aurora Classification Based on X-Gray Level Aura Matrices
                               Yuru Wang, Xinbo Gao, Yongjun Jian, Rong Fu, Xidian University, China
                      (25). Beyond Tag Relevance: Integrating Visual Attention Model and Multi-Instance Learning for Tag Saliency Ranking
                               Songhe Feng, Congyan Lang, De Xu, Beijing Jiaotong University
                      (26). Motion Data-Driven Model for Semantic Events Classification using an Optimized Support Vector Machine
                               Bashar Tahayna, Mohammed Belkhatir, Saadat Alhashmi, Thomas O' Daniel, Monash University
                      (27). The Effect of Semantic Relatedness Measures on Multi-label Classification Evaluation
                               Stefanie Nowak, Ainhoa Llorente, Enrico Motta, Stefan Rüger, Fraunhofer IDMT
                      (28). A Ranking Method for Multimedia Recommenders
                               Massimiliano Albanese, Antonio d'Acierno, Vincenzo Moscato, Fabio Persia, Antonio Picariello, University of Maryland
                      (29). A Hybrid Unsupervised Image Re-ranking Approach with Latent Topic Contents
                               Zhang Lei, Piji Li, Jun Ma, Shandong University, China
                      (30). Plant Species Identification Using Leaf Image Retrieval
                               Carlos Caballero, M. Carmen Aranda, Universidad de Málaga
                      (31). Video-Based Traffic Accident Analysis at Intersections Using Partial Vehicle Trajectories
                               Omer Akoz, M. Elif Karsligil, Yildiz Technical University
                      (32). Multi Modal Semantic Indexing For Image Retrieval
                               Pulla Chandrika, C. V Jawahar, International Institute of Information Technology, India
                      (33). A Software Pipeline for 3D Animation Generation using Mocap Data and Commercial Shape Models
                               Xin Zhang, David Biswas, Guoliang Fan, Oklahoma State University
                      (34). System Architecture of a Web Service for Content-Based Image Retrieval
                               Xavier Giro-i-Nieto, Carles Ventura, Jordi Pont-Tuset, Silvia Cortes, Ferran Marques, Technical University of Catalonia, Spain
                      (35). NMF-based Multimodal Image Indexing for Querying by Visual Example
                               Fabio Gonzalez, Juan Caicedo, Olfa Nasraoui, Jaafar Ben-Abdallah, National University of Colombia
                      (36). Hierarchical Feedback Algorithm Based on Visual Community Discovery for Interactive Video Retrieval
                               Lin Pang, Juan Cao, Yongdong Zhang, Shouxun Lin, ICT, Chinese Academy of Sciences
                      (37). An Efficient Method for Face Retrieval from Large Video Datasets
                               Thao Ngoc Nguyen, Thanh Duc Ngo, Duy-Dinh Le, Shin'ichi Satoh, Bac Hoai Le, Duc Anh Duong
                               National Institute of Informatics, Japan

                  11:45 - 12:45   Oral Session: Context, Emotions, and Affects (3 papers)
                  Session Chair:
                  Yiannis Patras, Queen Mary, University of London, UK

                      (1). Affective Prediction in Photographic Images using Probabilistic Affective Model
                             Yunhee Shin, Eun Yi Kim, Konkuk University
                      (2). Emotion Related Structures in Large Image Databases
                            Martin Solli, Reiner Lenz, Linköping University, Sweden
                      (3). Contextual Image Retrieval Model
                             Linjun Yang, Bo Geng, Alan Hanjalic, Xian-Sheng Hua, University of Science and Technology of China

                  12:45 - 14:00   Lunch
                  Venue: Western Restaurant, the second floor of Tang Cheng Hotel


                  14:00 - 15:20 Oral Session: Content-Based Techniques (4 papers)
                  Session Chair:
                  Kiyo Aizawa, University of Tokyo, Japan

                      (1). Scale-Invariant Proximity Graph for Fast Probabilistic Object Recognition
                             Jerome Revaud, Guillaume Lavoué, Ariki Yasuo, Atilla Baskurt, Université de Lyon, CNRS, INSA-Lyon, LIRIS
                      (2). Affine Stable Characteristic Based Sample Expansion for Object Detection
                             Ke Gao, Yongdong Zhang, Wei Zhang, Shouxun Lin, Institute of Computing Technology, Chinese Academy of Sciences
                      (3). Relevant Shape Contour Snippet Extraction with Metadata Supported Hidden Markov Models
                             Xinxin Wang, K. Selcuk Candan, Arizona State University
                      (4). Signature Quadratic Form Distance
                             Christian Beecks, Merih Uysal, Thomas Seidl, RWTH Aachen University

                  15:20 - 15:45   Coffee Break

                  15:45 - 17:25   Special Session: Vision-Based Human Action Recognition and Retrieval (5 Papers)
                  Session Chair:
                  Ling Shao, University of Sheffield, UK

                      (1). Relative Margin Support Tensor Machines for Gait and Action Recognition
                             Irene Kotsia, Ioannis Patras, School of Electronic Engineering and Computer Science, Queen Mary University of London
                      (2). A Set of Co-occurrence Matrices on the Intrinsic Manifold of Human Silhouettes for Action Recognition
                             Feng Zheng, Ling Shao, Zhan Song, Shenzhen Institutes of Advanced Technology, CAS
                      (3). Video Scene Analysis of Interactions between Humans and Vehicles Using Event Context
                             M. S. Ryoo, Jong Taek Lee, J. K. Aggarwal, Robot Research Department, ETRI, Daejeon, Korea
                      (4). Dynamic Textures for Human Movement Recognition
                             Vili Kellokumpu, Guoying Zhao, Matti Pietikainen, University of Oulu
                      (5). Feature Detector and Descriptor Evaluation in Human Action Recognition
                             Ling Shao, Riccardo Mattivi, Department of Electronic & Electrical Engineering, The University of Sheffield

                  18:00 - 21:00   Banquet with Tang Show
                  All participants must gather at the lobby of Tang Cheng Hotel at 18:00 for Tang Dynasty Palace by arranged transportation.


                  JULY 7, 2010, Practitioner's Day
                  Venue:
                  Hua E Gong, the second floor of Tang Cheng Hotel

                  9:00 - 10:00   Keynote Speech
                  Session chair:
                  Qi Tian, University of Texas at San Antonio, USA
                  Title:  What Can Billions of Queries Tell Us About Image Search? A Human-Centered Perspective
                  Speaker: 
                  Alejandro Jaimes, Yahoo! Research, Spain

                  10:00 - 10:30   Coffee Break

                  Session 1: Asian Perspectives
                  Session Chair:
                  Alejandro Jaimes, Yahoo! Research, Spain

                  10:30 - 11:00   NExT - A Joint NUS-Tsinghua Center for Extreme Search
                       Tat-Seng Chua, National University of Singapore
                  11:00 - 11:30   How to Realize Content Analysis in Web-Scale Multimedia Search
                       Xian-Sheng Hua, Microsoft Research Asia
                  11:30 - 12:00   Multimedia Web Analysis Framework towards Development of Social Analysis Software
                       Masashi Toyoda, University of Tokyo
                  12:00 - 12:30   Technical Challenges for Premium Content Retrieval at Hulu.com
                       Zhibing Wang, Hulu

                  12:30 - 14:00   Lunch
                  Venue: Western Restaurant, the second floor of Tang Cheng Hotel

                  Session 2: European Perspectives
                  Session Chair:
                  Yiannis Kompatsiaris, CERTH-ITI, Greece

                  14:00 - 14:20   Chorus+: Coordinated Approach to the EurOpean Effort on AUdio-visual Search Engines
                       Yiannis Kompatsiaris, CERTH-ITI, Greece
                  14:20 - 14:50   PetaMedia: Multimedia Access in Social Peer-to-Peer Networks
                       Yiannis Patras, Queen Mary, University of London, UK
                  14:50 - 15:20   Geographic Context in Multimedia Mining: Yahoo! and Glocal
                       Alejandro Jaimes, Yahoo! Research, Spain
                  15:20 - 15:50   WeKnowIt: Making the Collective Intelligence of Social Media Searchable
                       Yiannis Kompatsiaris, CERTH-ITI, Greece

                  15:50 - 16:15   Coffee Break

                  16:15 - 17:30   Panel

                  17:30 - 17:40   Closing


                  Abstracts of Presentations of the Practitioner's Day


                  NExT - A Joint NUS-Tsinghua Center for Extreme Search
                  Tat-Seng Chua, National University of Singapore

                  Greater connectivity, enabled by improved infrastructure and the decreased cost of mobile and sensory gadgets, has led to the evolution of the Internet from a pure text medium to a mixture of rich media and "live" data. Existing solutions are inadequate to manage this ever-growing wealth and quantity of data, especially live data. To address this problem, we plan to research technologies for extreme search, which aims to search for data that is not indexed and searchable on the current Web. Such data includes millions of real-time data streams generated continuously from sensors, mobile devices, and data sources such as forums and blogs located around the world. In particular, extreme search aims to extract meaning from these data streams and make the extracted information available for searching by users. The center, named NExT, will be a long-term, multi-million-dollar center set up to leverage the research expertise of NUS and Tsinghua in a collaboration on extreme search. This talk presents the plan and vision of the center.


                  How to Realize Content Analysis in Web-Scale Multimedia Search
                  Xian-Sheng Hua, Microsoft Research Asia

                  Content-based multimedia search has been studied for decades, and has recently regained much attention from both industry and academia in the context of handling Internet- and Web-scale data.

                  However, due to difficulties in computation, storage, bandwidth, and response speed, as well as the limitations of content analysis algorithms in handling large-scale, high-variance data, it is still difficult to build truly content-aware Internet multimedia search engines. In this talk, we will analyze the challenges in productizing content analysis technologies at Web scale and discuss possible shortcuts and workarounds for these challenges. We will show a couple of already-released content-aware features in the Microsoft Bing multimedia search engine and a few ongoing projects at Microsoft Research that can potentially be applied to Web-scale multimedia search.


                  Multimedia Web Analysis Framework towards Development of Social Analysis Software
                  Masashi Toyoda, University of Tokyo

                  Abstract:


                  Technical challenges for premium content retrieval at Hulu.com
                  Zhibing Wang, Hulu

                  Every day, millions of premium videos are streamed via Hulu.com in the US. This talk goes behind the scenes of the Hulu website, covering lessons learned and the technical challenges we currently face. Technologies in the field of premium video search will be discussed in the context of multimedia research practice over the past decade. In addition, we will discuss our views on video recommendation and advertisement.

                  Finding the most entertaining content for a variety of users, either actively or passively, is not an easy task. Unlike archival video search and user-generated content (UGC) search, users interested in premium content search are more passive, which calls for a different set of technical tools. Archival video footage is usually used at studios by experts to produce new premium videos, so search there requires very detailed content analysis to enrich the video content description. User-generated content often lacks the necessary metadata or description, so content analysis is nearly the only viable choice. For professional content, specific content analysis tools may sometimes be useful, but collective user behavior has proven to be another valuable signal. Leveraging the community to tag within videos helps users understand content better. In addition, social tags provide contextual search tools to drill down into content.
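                  As an illustration of the drill-down idea, community tags can be time-coded so that a search lands at positions inside a video rather than only at the clip level. The sketch below is a minimal, hypothetical index of our own design, not Hulu's actual system:

```python
from collections import defaultdict

class TimedTagIndex:
    """Minimal inverted index from social tags to (video, timestamp) pairs.

    Illustrative sketch only: a real system would add ranking, spam
    filtering, and tag normalization on top of this structure.
    """

    def __init__(self):
        # tag -> list of (video_id, seconds) positions
        self._index = defaultdict(list)

    def add_tag(self, tag, video_id, seconds):
        self._index[tag.lower()].append((video_id, seconds))

    def drill_down(self, tag, video_id=None):
        """Return tagged positions for a tag, optionally within one video."""
        hits = self._index.get(tag.lower(), [])
        if video_id is not None:
            hits = [h for h in hits if h[0] == video_id]
        return sorted(hits, key=lambda h: h[1])

index = TimedTagIndex()
index.add_tag("car chase", "ep101", 512)
index.add_tag("Car Chase", "ep101", 1290)
index.add_tag("car chase", "ep204", 75)

print(index.drill_down("car chase", video_id="ep101"))
```

A query for a tag within a single episode returns the tagged time offsets in order, which is exactly the "drill down into content" behavior described above.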

                  In the design of the ranking algorithm, factors like video quality and genre also play important roles besides the relatively rich metadata. Apart from algorithms, an easy-to-use interface is of equal, if not greater, importance in developing real applications. We will therefore present a set of user interfaces designed to help users find attractive content in different scenarios. We will also share our views on future research problems.


                  Chorus+: Coordinated Approach to the EurOpean Effort on AUdio-visual Search Engines
                  Yiannis Kompatsiaris, CERTH-ITI, Greece

                  Abstract:


                  PetaMedia: Multimedia Access in Social Peer-to-Peer Networks
                  Yiannis Patras, Queen Mary, University of London, UK

                  While the web can increasingly be regarded as a multimedia web, easy, comfortable access remains limited to text-based content. On the other hand, a wealth of user-contributed and implicit information is available in social networks, communities, and other forms of explicit or implicit collaboration. The PetaMedia NoE sees the future of multimedia dissemination and consumption in systems with distributed architectures, in particular P2P systems, and is pushing new paradigms for enabling efficient and effective access to multimedia content in such network structures. The paradigm is based on the synergistic combination of user-based collaborative tagging, peer-to-peer networks, and multimedia content analysis. Within this context, the process of assigning tags to content should, and will, take other, far more implicit forms than the current "user types a word for a picture". In this talk, we will emphasize implicit, user-centred approaches for obtaining semantic annotations for multimedia content in unobtrusive ways. In particular, we will present our work on the multimodal analysis of neurological (EEG) and physiological (e.g. heart rate) reactions of users to the presentation of music videos. We show promising results in placing the presented videos in the arousal-valence diagram and discuss potential applications for annotation and retrieval.
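                  The final placement step in an arousal-valence diagram can be made concrete with a simple quadrant mapping: once per-video valence and arousal scores have been estimated from the EEG and physiological signals (the estimation itself is the hard part and is not shown here), labelling is straightforward. The function below is our own illustration, with hypothetical quadrant names, not the PetaMedia pipeline:

```python
def av_quadrant(valence, arousal):
    """Map a (valence, arousal) score pair to a quadrant label.

    Scores are assumed to be normalized to [-1, 1]; the quadrant names
    are illustrative only.
    """
    if valence >= 0 and arousal >= 0:
        return "excited/happy"     # positive feeling, high activation
    if valence < 0 and arousal >= 0:
        return "angry/afraid"      # negative feeling, high activation
    if valence < 0:
        return "sad/bored"         # negative feeling, low activation
    return "calm/content"          # positive feeling, low activation

print(av_quadrant(0.6, 0.8))   # high valence, high arousal
```

Annotation and retrieval can then treat the quadrant label as just another tag on the video.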


                  Geographic Context in Multimedia Mining: Yahoo! and Glocal
                  Alejandro Jaimes, Yahoo! Research

                  GLOCAL is a European integrated project whose aim is to organize media around events. Our personal media collections are organized around personal events, such as weddings, holiday celebrations, the death of a loved one, or the birth of a baby. These are events that we all experience, and although our experience of them may be unique, they have a common structure, and a common set of attributes that can be extracted and exploited to aid in the indexing and search of media. From common experience in the aggregate, we can extract iconic events, around which we can organize media and data. Similarly, from global events, such as the World Cup, and its associated media and metadata, we can choose which aspects to present to the user to provide the most relevant results in their personal search context. Thus events become the locus around which we organize and search media, on both a local and global level.

                  At Yahoo! Research, we are concerned mostly with the geographic nature of events. To this end, we have a number of research initiatives to discover the geographic intent of a user and the geographic scope of media, and to leverage vast amounts of user-generated content to understand how users interpret their personal geographies in their everyday lives. In this talk we present an overview of Glocal and the ongoing work at Yahoo! Research on geographic context.


                  WeKnowIt: Making the Collective Intelligence of Social Media Searchable
                  Yiannis Kompatsiaris, CERTH-ITI

                  As more and more people participate in social web sites and contribute user-generated content (UGC), these sites provide not only content collections but also a rich knowledge source, also known as Collective Intelligence. Further, the fact that users annotate and comment on the content daily gives this data source an extremely dynamic nature that reflects the changes and evolution of community focus. Although current Web 2.0 applications allow, and are built on, annotations and feedback from users, these alone are not sufficient for extracting this "hidden" knowledge and enabling efficient search in social media, due to the lack of clear semantics resulting from limitations such as polysemy, lack of uniformity, and spam. Within the WeKnowIt project, scalable approaches are being developed that can handle the massive amount of available data and generate an optimized 'Intelligence' layer that enables the exploitation of the knowledge hidden in user-contributed content. The talk will emphasize community detection techniques for clustering social media and travel-related applications.
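                  Community detection of the kind mentioned above is often sketched with label propagation: every node starts in its own community and repeatedly adopts the most common label among its neighbours until labels stabilize. The self-contained toy below is a generic illustration of the technique, not the WeKnowIt implementation; the graph and node names are made up:

```python
import random

def label_propagation(adjacency, seed=0, max_iters=50):
    """Asynchronous label propagation over an undirected graph.

    adjacency: dict mapping node -> list of neighbour nodes.
    Returns a dict node -> community label. Ties are broken by the
    smallest label so the result is deterministic for a given seed.
    """
    rng = random.Random(seed)
    labels = {node: node for node in adjacency}  # each node starts alone
    nodes = list(adjacency)
    for _ in range(max_iters):
        rng.shuffle(nodes)
        changed = False
        for node in nodes:
            counts = {}
            for nb in adjacency[node]:
                counts[labels[nb]] = counts.get(labels[nb], 0) + 1
            if not counts:
                continue  # isolated node keeps its own label
            best = max(counts.values())
            new_label = min(l for l, c in counts.items() if c == best)
            if new_label != labels[node]:
                labels[node] = new_label
                changed = True
        if not changed:
            break
    return labels

# Two separate user groups, e.g. clusters of photos linked by shared tags
graph = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"],
    "d": ["e", "f"], "e": ["d", "f"], "f": ["d", "e"],
}
labels = label_propagation(graph)
print(labels)
```

In a social media setting, the nodes would be photos, tags, or users, and the edges would come from co-tagging or interaction; production systems use more scalable variants of the same idea.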



                  Copyright © Xidian University, CIVR 2010. All rights reserved.
                  Website designed and hosted by SEE-VIPSL
