CA2615406A1 - System and method for obtaining a live performance in a video game or movie involving massively 3d digitized human face and object - Google Patents
System and method for obtaining a live performance in a video game or movie involving massively 3d digitized human face and object
- Publication number
- CA2615406A1 CA2615406A1 CA002615406A CA2615406A CA2615406A1 CA 2615406 A1 CA2615406 A1 CA 2615406A1 CA 002615406 A CA002615406 A CA 002615406A CA 2615406 A CA2615406 A CA 2615406A CA 2615406 A1 CA2615406 A1 CA 2615406A1
- Authority
- CA
- Canada
- Prior art keywords
- human face
- massively
- video game
- live performance
- obtaining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Links
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
- A63F13/65—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
- A63F13/655—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/69—Involving elements of the real world in the game world, e.g. measurement in live races, real video
- A63F2300/695—Imported photos, e.g. of the player
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/16—Indexing scheme for image data processing or generation, in general involving adaptation to the client's capabilities
Abstract
This invention consists of a complete process for massively bringing a digitized 3D human face, in the form of an avatar, along with 3D objects, into an immersive environment for video games and movies. It covers a concept for bringing video gamers and spectators into a real-time live performance in video games or movies produced using both traditional filming techniques and computer animation.
Description
SYSTEM AND METHOD FOR OBTAINING A LIVE PERFORMANCE IN A VIDEO GAME OR MOVIE INVOLVING MASSIVELY 3D DIGITIZED HUMAN FACE AND OBJECT
FIELD OF THE INVENTION
The present invention generally relates to computer animation. More particularly, it relates to color non-contact optical 3D digitizers, avatars, immersive environments, and live performances in video games and movies.
BACKGROUND OF THE INVENTION
3D computer animation has become a common tool in the creation of video games and in movie postproduction over the last few years. A texture-mapped computer model of almost any object can be created and animated using commercially available software applications. A number of input devices, such as 3D scanners, 3D digitizers and motion capture systems, help achieve highly realistic animation. With the dramatic increase in the computing power of readily available computers, high-quality computer animation can now be produced using an affordable computer.
On the other hand, the creation of a highly realistic human model is still carried out by professional computer modelers and animators because of the technical complexity of such production. Some techniques have been developed to create an avatar of a real human being, but in general these techniques simply map a 2D image (photo) onto a generic model, and the graphic quality of the resulting avatar cannot match the quality of the computer animation in video games and movies.
More and more immersive media environments are created to involve gamers and spectators in a new show experience. The gamers and
spectators become part of the performance, although the integration of live performers into a computer animation is still a very time-consuming process with very limited functions.
Recently, a new device, the automatic 3D Photo Crystal Booth, has begun to be introduced to the consumer market. Some of these booths are already installed in theme parks, family entertainment centers and shopping malls. The booth captures a 3D model of a human portrait using a true 3D digitizer and engraves the 3D portrait in a glass cube. So far, the 3D model of a human face captured in this automatic booth is not ready to be animated because its mesh is not structured based on facial features. In addition, the captured 3D data are still stored on a local computer, although most of these booths have an internet connection for remote maintenance purposes.
SUMMARY OF THE INVENTION
Statement of the Objects of the Invention:
This invention consists of a complete process for massively bringing a digitized 3D human face, in the form of an avatar, along with 3D objects, into an immersive environment for video games and movies. It covers a concept for bringing video gamers and spectators into a real-time live performance in video games or movies produced using both traditional filming techniques and computer animation.
Summary of the Invention

The first part of this invention consists of obtaining virtual 3D models (i.e. in computer format) of objects that represent some part of the human body (usually the face). These virtual 3D models (including, among other information, the 3D geometry and the color overlay information they
possess) are acquired by using a 3D optical imaging device. This device can be integrated in a fully automatic 3D Photo Crystal Booth, or it can be an automatic, semi-automatic or manually operated stand-alone 3D
digitizing system. The captured 3D model will be saved in a commonly accepted computer format. In order to make these data accessible from any future location where a live performance of a video game or a movie will take place, it is important to archive the captured 3D data on a centralized server through the internet connection already available at the 3D Photo Crystal Booth or the data processing center.
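As a concrete illustration of saving the captured model in a commonly accepted format, the sketch below serializes a triangle mesh to Wavefront OBJ text and files it under a capture ID. The in-memory index, the `archive` helper and the capture ID are illustrative assumptions, not part of the patent; a real booth would upload such a record to the centralized server over its internet connection.

```python
# Sketch: serializing a captured triangle mesh to Wavefront OBJ text, a
# commonly accepted computer format, and filing it under a capture ID.
# The in-memory index and the capture ID are illustrative assumptions;
# a real booth would push this record to the centralized server.

import io

def mesh_to_obj(vertices, faces):
    """Serialize a triangle mesh to OBJ text (OBJ face indices are 1-based)."""
    out = io.StringIO()
    for x, y, z in vertices:
        out.write(f"v {x} {y} {z}\n")
    for a, b, c in faces:
        out.write(f"f {a + 1} {b + 1} {c + 1}\n")
    return out.getvalue()

def archive(index, capture_id, vertices, faces):
    """File the OBJ text under an ID so it can be retrieved later."""
    index[capture_id] = {"format": "obj",
                         "data": mesh_to_obj(vertices, faces)}

index = {}
archive(index, "face-0001",
        [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
        [(0, 1, 2)])
print(index["face-0001"]["data"])
```

OBJ is chosen here only because it is plain text and widely supported; any format readable by the downstream animation software would serve the same role.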
The second part of this invention is a process to convert the captured 3D model into a form that can be animated. A software tool will help perform the conversion, which might consist of morphing a generic, pre-arranged mesh structure to the captured 3D model and mapping the captured color texture to a pre-arranged UV-mapped texture, or some other process that achieves a similar result. This process needs to be simple enough that an untrained operator can perform it within a very short time.
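One way the morphing step above could be realized is to drive the generic mesh toward the captured scan with a handful of corresponding facial landmarks. The sketch below displaces each generic vertex by an inverse-distance-weighted blend of the landmark offsets; the landmark sets and the weighting scheme are illustrative assumptions, since the patent leaves the exact morphing method open.

```python
# Sketch: morphing a generic, pre-arranged face mesh toward a captured
# scan using a few corresponding landmarks (e.g. eye corners, nose tip).
# Each generic vertex is displaced by an inverse-distance-weighted blend
# of the landmark offsets; landmark sets and weighting are illustrative
# assumptions, not the patent's prescribed method.

import math

def morph(generic_verts, generic_lms, scan_lms, power=2.0, eps=1e-9):
    """Deform generic_verts so vertices near each generic landmark move
    toward the matching scan landmark."""
    offsets = [tuple(s - g for s, g in zip(sl, gl))
               for gl, sl in zip(generic_lms, scan_lms)]
    morphed = []
    for v in generic_verts:
        weights = [1.0 / (math.dist(v, gl) ** power + eps)
                   for gl in generic_lms]
        total = sum(weights)
        disp = [sum(w * off[i] for w, off in zip(weights, offsets)) / total
                for i in range(3)]
        morphed.append(tuple(c + d for c, d in zip(v, disp)))
    return morphed

# A vertex sitting exactly on a generic landmark lands (almost) exactly
# on the corresponding scan landmark.
verts = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
out = morph(verts,
            generic_lms=[(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)],
            scan_lms=[(0.1, 0.0, 0.0), (2.0, 0.2, 0.0)])
```

Because the generic mesh's topology and UV layout are fixed in advance, only vertex positions change during this deformation, which is what lets the pre-arranged texture mapping and animation rig carry over unchanged.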
Once the model is ready to be animated, it will be brought into a pre-animated scene where the avatar will be integrated in a video game or a movie, either by real-time rendering (controlled by the gamer's interaction) or by a pre-rendered process.
A non-restrictive description of a preferred embodiment of the invention will now be given with reference to the appended drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a graphic representation of the 3D capture of a human face using a stand-alone 3D digitizing device;
Fig. 2 is a graphic representation of the 3D capture of a human face in a fully automatic 3D photo crystal booth;
Fig. 3 illustrates a typical conversion of the captured model into an avatar that can be animated; and

Fig. 4 is a diagram showing the process to bring a gamer or a spectator into an immersive environment.

DESCRIPTION OF PREFERRED EMBODIMENTS
The process of involving the gamer or the spectator in a performance in a video game or a movie consists of the following elements and steps:
First, the face, or another part of the body or an object, of a video gamer or a spectator needs to be captured by a 3D imaging device prior to the performance or, at least, prior to the moment in the performance when the computer-processed 3D model is deemed necessary. The 3D capture can be carried out in a fully automatic 3D photo crystal booth as shown in Fig. 2, or can be completed by using a stand-alone 3D digitizing device (Fig. 1), as desired by, or depending on the method available to, the gamer or spectator.
The captured 3D data are either saved on a local computer or archived on a centralized server through an internet connection. These data can be retrieved some time later at any location where an internet connection is available, through a process that will depend on, among other things, the type of connection, the location of the server and the type of data storage.
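Since the patent leaves the retrieval process open, one hedged way to sketch it is a lookup keyed by capture ID over interchangeable storage backends, so the local computer and the centralized server can serve the same request. The `Backend` interface and every name below are illustrative assumptions.

```python
# Sketch: retrieving previously archived 3D capture data by ID, with
# the storage backend abstracted so the booth's local computer and a
# centralized server can both answer the same request. All names here
# are illustrative assumptions, not part of the patent.

from abc import ABC, abstractmethod
from typing import List

class Backend(ABC):
    @abstractmethod
    def fetch(self, capture_id: str) -> bytes:
        """Return the archived data or raise KeyError if absent."""

class LocalStore(Backend):
    """Captured data kept on the booth's local computer."""
    def __init__(self) -> None:
        self._store = {}

    def save(self, capture_id: str, data: bytes) -> None:
        self._store[capture_id] = data

    def fetch(self, capture_id: str) -> bytes:
        return self._store[capture_id]

def retrieve(capture_id: str, backends: List[Backend]) -> bytes:
    """Try each backend in turn (e.g. local disk first, then the
    centralized server), since the exact process is left open."""
    for backend in backends:
        try:
            return backend.fetch(capture_id)
        except KeyError:
            continue
    raise KeyError(capture_id)

store = LocalStore()
store.save("face-0001", b"OBJ data ...")
print(retrieve("face-0001", [store]))
```

A server-backed implementation of `Backend` would depend on the connection type, the server location and the storage type, exactly the open parameters the description lists above.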
A 3D model of the acquired object, be it the face, another body part or an object, is then created from the captured 3D raw data by a computer process designed for this purpose, and is then saved. This 3D model is converted into a model that can be animated by applying a pre-arranged and pre-mapped generic model, as shown in Fig. 3. The software used for this processing can be standalone software, web-based software, or part of existing software compatible with the data format and capable of carrying out the remapping.
Finally, the 3D model that has been modified to be easy to animate is
integrated in the desired computer animation scenes, which could consist of, among other things but not limited to, a video game or a performance.
The animation scenes which contain the avatar of the video gamer or spectator could be rendered, preferably but not necessarily, in real time with the gamer's or the spectator's interaction, or pre-rendered to show a performance which includes animation of the avatar.
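The two rendering paths just described (real-time rendering driven by the gamer's interaction, or a pre-rendered performance) amount to a simple dispatch. The sketch below models that choice under illustrative assumptions: the `Scene` record, the input callback and the frame strings all stand in for a real rendering engine.

```python
# Sketch: dispatching a scene containing the avatar to either the
# real-time path (each frame reacts to gamer input) or the pre-rendered
# path (a fixed performance produced up front). Scene, the callback and
# the frame strings are illustrative stand-ins for a real engine.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Scene:
    avatar_id: str
    frames: List[str] = field(default_factory=list)

def render_realtime(scene: Scene, get_input: Callable[[], str]) -> str:
    """Real-time rendering: produce one frame from the gamer's input."""
    frame = f"{scene.avatar_id}:{get_input()}"
    scene.frames.append(frame)
    return frame

def render_offline(scene: Scene, shot_count: int = 3) -> List[str]:
    """Pre-rendered path: produce the whole performance up front."""
    scene.frames = [f"{scene.avatar_id}:shot{i}" for i in range(shot_count)]
    return scene.frames

scene = Scene("face-0001")
print(render_realtime(scene, lambda: "jump"))  # prints face-0001:jump
print(len(render_offline(scene)))              # prints 3
```

The key design point is that the same avatar-bearing scene feeds both paths, matching the description's "preferably but not necessarily" real-time wording.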
Fig. 4 illustrates the whole process.
Although the present invention has been explained hereinabove by way of a preferred embodiment thereof, it should be pointed out that any modifications to this preferred embodiment are not deemed to alter or change the nature and scope of the present invention.
Claims
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA002615406A CA2615406A1 (en) | 2007-12-19 | 2007-12-19 | System and method for obtaining a live performance in a video game or movie involving massively 3d digitized human face and object |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA002615406A CA2615406A1 (en) | 2007-12-19 | 2007-12-19 | System and method for obtaining a live performance in a video game or movie involving massively 3d digitized human face and object |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2615406A1 true CA2615406A1 (en) | 2009-06-19 |
Family
ID=40792437
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002615406A Abandoned CA2615406A1 (en) | 2007-12-19 | 2007-12-19 | System and method for obtaining a live performance in a video game or movie involving massively 3d digitized human face and object |
Country Status (1)
Country | Link |
---|---|
CA (1) | CA2615406A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8823642B2 (en) | 2011-07-04 | 2014-09-02 | 3Divi Company | Methods and systems for controlling devices using gestures and related 3D sensor |
-
2007
- 2007-12-19 CA CA002615406A patent/CA2615406A1/en not_active Abandoned
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10297087B2 (en) | Methods and systems for generating a merged reality scene based on a virtual object and on a real-world object represented from different vantage points in different video data streams | |
Bolter et al. | Reality media: Augmented and virtual reality | |
Manovich | Image future | |
JP4783588B2 (en) | Interactive viewpoint video system and process | |
JP7128217B2 (en) | Video generation method and apparatus | |
US20100201693A1 (en) | System and method for audience participation event with digital avatars | |
CN104376589A (en) | Method for replacing movie and TV play figures | |
US10282900B2 (en) | Systems and methods for projecting planar and 3D images through water or liquid onto a surface | |
CN106331521A (en) | Film and television production system based on combination of network virtual reality and real shooting | |
Zioulis et al. | 3D tele-immersion platform for interactive immersive experiences between remote users | |
US20230006826A1 (en) | System and method for generating a pepper's ghost artifice in a virtual three-dimensional environment | |
US20200082598A1 (en) | Methods and Systems for Representing a Scene By Combining Perspective and Orthographic Projections | |
WO2009068942A1 (en) | Method and system for processing of images | |
Sayyad et al. | Panotrace: interactive 3d modeling of surround-view panoramic images in virtual reality | |
CA2615406A1 (en) | System and method for obtaining a live performance in a video game or movie involving massively 3d digitized human face and object | |
Kuchelmeister et al. | Affect and place representation in immersive media: The Parragirls Past, Present project | |
CN115859440A (en) | Design method of Yuan universe mobile museum | |
Husinsky et al. | Virtual stage: Interactive puppeteering in mixed reality | |
Feiler et al. | Archiving the Memory of the Holocaust | |
Gomide | Motion capture and performance | |
Zong et al. | Transformation of Film Directing and Cinematography through Technological Advancements: Focusing on Ang Lee's Films | |
KR20190089450A (en) | Real-time computing method, device and 3d human character for hologram show | |
US11769299B1 (en) | Systems and methods for capturing, transporting, and reproducing three-dimensional simulations as interactive volumetric displays | |
CN213426345U (en) | Digital sand table interactive item exhibition device based on oblique photography |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FZDE | Discontinued |