

eNTERFACE’05
July 18 – August 12, 2005 – Faculté Polytechnique de Mons, Belgium
Project Title: A Multimodal (Gesture+Speech) Interface for 3D Model Search and Retrieval Integrated in a Virtual Assembly Application

Principal investigator:
Dr. Dimitrios Tzovaras (ITI-CERTH)
Candidates: Konstantinos Moustakas
Abstract:
The goal of the project is the development of a multimodal interface for content-based search of 3D objects based on sketches. This user interface will integrate graphical, gesture and speech modalities to aid the user in sketching the outline of the 3D object he/she wants to search for in a large database. Finally, the system will be integrated into a virtual assembly application, where the user will be able to assemble a machine from its spare parts using only speech and specific gestures.

Project objective:

Search and retrieval of 3D objects is a very challenging issue with application branches in numerous areas, such as recognition in computer vision and mechanical engineering, and content-based search in e-commerce and edutainment applications. These application fields will expand in the near future, since 3D model databases are growing rapidly thanks to the improved scanning hardware and modeling software that have been developed recently.
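The abstract above describes fusing graphical, gesture and speech input into a single sketch-based query, but it does not detail the mechanism. The following is a minimal, hypothetical sketch of how such fusion could be organized; SketchQuery, MultimodalQueryBuilder and the small command vocabulary are illustrative assumptions, not part of the actual system:

```python
# Illustrative sketch only -- class, field and command names are hypothetical
# and not taken from the actual eNTERFACE'05 system.
import math
from dataclasses import dataclass, field
from typing import Optional

Stroke = list[tuple[float, float]]   # a 2D gesture stroke as a point sequence

@dataclass
class SketchQuery:
    category: Optional[str] = None                          # chosen via a speech command
    strokes: list[Stroke] = field(default_factory=list)     # outline drawn by gestures

def unit_circle_stroke(n: int = 32) -> Stroke:
    """A primitive outline that speech can request instead of drawing it by hand."""
    return [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n)) for k in range(n)]

class MultimodalQueryBuilder:
    """Fuses speech commands and gesture strokes into one sketch-based query."""

    def on_speech(self, query: SketchQuery, command: str) -> None:
        words = command.lower().split()
        if words[:1] == ["search"] and len(words) > 1:
            query.category = words[1]                        # e.g. "search pistons"
        elif words[:1] == ["draw"] and "circle" in words:
            query.strokes.append(unit_circle_stroke())       # automatic primitive sketching

    def on_gesture(self, query: SketchQuery, stroke: Stroke) -> None:
        query.strokes.append(stroke)                         # each tracked gesture adds a stroke
```

In such a split, the speech channel only sets discrete fields (the category, requests for primitives), while the continuous outline always comes from the gesture channel, which matches the division of labour described in the proposal.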
The difficulty of expressing multimedia, and especially three-dimensional content, via text-based descriptors reduces the ability of text-based search engines to retrieve the desired multimedia content efficiently and effectively. To resolve this problem, 3D content-based search and retrieval (S&R) has drawn a lot of attention in recent years. A typical S&R system evaluates the similarity between query and target objects according to low-level geometric features. However, the requirement of a query model for searching by example often reduces the applicability of an S&R platform, since in many cases the user knows what kind of object he wants to retrieve but does not have a 3D model to use as a query.

Imagine the following use case: the user of a virtual assembly application is trying to assemble an engine from its spare parts. He inserts some rigid parts into the virtual scene and places them in the correct positions. At one point he needs to find a piston and assemble it into the engine. In this case, he has to search the database manually to find the piston. It would be faster and much easier if the user could sketch the outline of the piston using specific gestures combined with speech in order to perform the search.

In the context of this project, the integration of speech and gestures in the S&R platform will be addressed. Speech commands are going to be used for selecting categories of objects to be searched and for invoking the automatic sketching of simple geometrical objects. The system will also use gesture information for sketching the outline of the desired object.
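The proposal states only that a typical S&R system compares query and target objects on low-level geometric features; it does not name a descriptor. Purely as an illustration, assuming a shape-distribution-style descriptor (a normalized histogram of pairwise point distances) and a Euclidean distance ranking, the retrieval step could be organized like this:

```python
# Illustration only: the descriptor below is an assumption, not the feature
# set used by the actual project.
import numpy as np

def shape_descriptor(points: np.ndarray, bins: int = 64, samples: int = 2000,
                     rng: Optional[np.random.Generator] = None) -> np.ndarray:
    """Low-level geometric feature vector for one 3D model (N x 3 point array)."""
    rng = np.random.default_rng(0) if rng is None else rng
    i = rng.integers(0, len(points), size=samples)
    j = rng.integers(0, len(points), size=samples)
    d = np.linalg.norm(points[i] - points[j], axis=1)   # pairwise point distances
    d = d / (d.max() + 1e-9)                            # crude scale invariance
    hist, _ = np.histogram(d, bins=bins, range=(0.0, 1.0))
    return hist / (hist.sum() + 1e-9)

def rank_models(query_points: np.ndarray, database: dict[str, np.ndarray]) -> list[str]:
    """Return model names sorted from most to least similar to the query."""
    q = shape_descriptor(query_points)
    scored = sorted((np.linalg.norm(q - shape_descriptor(pts)), name)
                    for name, pts in database.items())
    return [name for _, name in scored]
```

Any other rotation- and scale-invariant descriptor could be substituted without changing the ranking logic; matching a 2D gesture sketch against 3D models would additionally need view-based or projection descriptors, which is beyond this illustration.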


