Interview with Qi Pan about his webcam 3D scanner ProForma

Qi Pan, a Cambridge University researcher, has developed ProForma, a tool that turns any ordinary webcam into a 3D scanner. I got to interview him and talk about his product, which, if it works, could be one of the holy grails of mass customization: it would enable anyone to inexpensively digitize objects and then reproduce them.

Joris Peels: So how did you get started on this project?


Qi Pan: At the start of my PhD, I was interested in real-time 3D modeling of
outdoor scenes. However, several months in, I realised that current
processing power wasn't enough to model outdoor scenes well (due to
occlusions, lack of texture, etc.). Therefore I turned my attention
to smaller objects, which would stand a better chance on current
hardware. Smaller objects, though, always sit in an environment that
you wouldn't want to model, which led me to the idea of using a
fixed camera and separating the object from its surroundings using
motion.
All of the design choices made in the system were then tailored
towards making everything as fast as possible, whilst still producing
a reasonable output.

How long did it take?
The project as it stands has taken around a year and a half to
develop, although not all of that time was spent on development (time
was also spent on publications and attending various conferences).

What was hard to do?
The hardest thing to do was to combine all of the system components
into a real-time system. The problem with real-time is that if any
one part of the system is not working well, your system just doesn't
work, full stop. Therefore you need to make sure all parts are well
optimised and producing the right output at the right time for the
other components. When designing each component, the utmost care had
to be taken to ensure that we were doing things as efficiently as
possible, using the best available algorithms (or inventing our own if
none existed).

How does it work exactly?
The system works in two stages.
The first stage is a tracker, and uses the partial 3D model we've
constructed to work out the position and orientation of the object
relative to the camera. This stage also tracks the position of
interest points (areas of high contrast change) in the images
frame-to-frame. After a significant enough motion is detected, a
key-frame plus the interest point tracks are passed to the
reconstruction stage. Only interest points on the object are
tracked, as there is a mathematical constraint on the motion of
points on a rigid object (based on epipolar geometry).
The reconstruction stage takes these feature tracks and triangulates
3D positions in order to form a cloud of points. This is then meshed
using a 3D Delaunay tetrahedralisation. This, however, merely
partitions the convex hull of the points into tetrahedra, so we need
to employ a carving algorithm to remove incorrect tetrahedra from
concavities in the object. We formulated a very efficient
probabilistic carving algorithm to achieve this, which allows us to
obtain the surface of the object based on the interest points we've
seen in each keyframe.
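
A minimal sketch of this pipeline, under the assumption that the two
keyframes' projection matrices are already known from the tracking
stage: triangulate the feature tracks with OpenCV, then
tetrahedralise the resulting cloud with SciPy. ProForma's
probabilistic carving step is specific to the paper and is only
indicated by a placeholder comment here.

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

def reconstruct(P1, P2, pts1, pts2):
    """P1, P2: 3x4 camera projection matrices; pts1, pts2: Nx2 tracks."""
    # Triangulate feature tracks from two keyframes into homogeneous
    # 4-vectors, then dehomogenise to get an Nx3 point cloud.
    X_h = cv2.triangulatePoints(P1, P2,
                                pts1.T.astype(np.float64),
                                pts2.T.astype(np.float64))
    cloud = (X_h[:3] / X_h[3]).T

    # A 3D Delaunay triangulation partitions the convex hull of the
    # cloud into tetrahedra -- exactly the structure the carving step
    # then prunes.
    tets = Delaunay(cloud).simplices  # Mx4 arrays of vertex indices

    # Carving (omitted): ProForma scores each tetrahedron using the rays
    # from the camera to observed points, removing tetrahedra that the
    # rays show to be empty space.
    return cloud, tets
```
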
This method requires a partial 3D model to track from, which isn't
available right at the start of reconstruction (but is later).
Therefore, our initialisation step differs slightly from normal
operation. We assume that at least part of the object falls within a
large circle at the centre of the image. We track interest points
inside this circle, and use rigid body motion constraints to ascertain
the orientation and position of the object relative to the camera.
Amazingly, this is possible, even if we have no idea about the 3D
positions of the interest points we are tracking! The system then
works as above once we have this initial orientation and position.
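
This initialisation trick is standard two-view geometry: the
essential matrix between two views can be estimated from 2D point
tracks alone, then decomposed into a relative rotation and
translation. A hedged sketch using OpenCV, assuming a calibrated
camera with intrinsics K (ProForma's own estimator may differ):

```python
import cv2

def initial_pose(pts1, pts2, K):
    """pts1, pts2: Nx2 interest point tracks; K: 3x3 camera intrinsics."""
    # The essential matrix relates two views of the same rigid points and
    # can be estimated from 2D tracks alone -- no 3D model needed yet.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # Decompose E into a rotation and a unit-length translation, picking
    # the one of four solutions with points in front of both views.
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```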

But can I take any object and you'll give me a mesh?

Yes, as long as it is textured enough! The system is based on interest
points, so the object must have enough areas of high contrast change.
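
As a rough illustration (my own, not part of ProForma) of what
"textured enough" might mean for an interest point based method, one
could count strong corners in the central region where the object is
expected to sit; the thresholds below are arbitrary:

```python
import cv2

def textured_enough(gray, min_corners=200):
    """Count strong corners in the central region where the object sits."""
    h, w = gray.shape
    roi = gray[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    corners = cv2.goodFeaturesToTrack(roi, maxCorners=1000,
                                      qualityLevel=0.01, minDistance=5)
    return corners is not None and len(corners) >= min_corners
```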

What are some of the limitations?
This system is of course only a first step in generic object
reconstruction, and as such has a few limitations. One limitation is
the inability to model objects or parts of objects without enough
texture. This is something we are working on - we are seeking to
combine other cues to complement our interest point based approach.
This approach can, in theory, be applied to modeling entire scenes,
but then we come up against the problems of the environment not
being textured enough in places, occlusion, and the need for more
processing power.
The technique as it stands can only be used to model rigid objects due
to the rigid body assumption being used for segmentation.


Will you be working on it more in the future?
Yes - we most certainly will! This project is more of a proof of
concept and just the tip of the iceberg in terms of what we can
achieve.

Will there be a tool that people can download?
Yes - we're currently working on releasing one soon.

When?
I'm currently porting the software to the newest libraries (which
unfortunately means reimplementing lots of things from scratch), but
in a few months' time we aim to release a Linux-based demo, which
will hopefully be followed by a Windows-based demo after that.
