US20130010071A1 - Methods and systems for mapping pointing device on depth map - Google Patents

Methods and systems for mapping pointing device on depth map

Info

Publication number
US20130010071A1
Authority
US
United States
Prior art keywords
pointing device
user
handheld pointing
depth map
motion data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/541,684
Inventor
Andrey Valik
Pavel Zaitsev
Dmitry Morozov
Alexander Argutin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
3DIVI
Original Assignee
3DIVI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 3DIVI
Assigned to 3DIVI. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ARGUTIN, ALEXANDER; MOROZOV, DMITRY; VALIK, ANDREY; ZAITSEV, PAVEL
Publication of US20130010071A1
Priority to PCT/RU2013/000188 (published as WO2013176574A1)
Priority to US13/855,743 (published as US20140009384A1)
Assigned to 3DIVI COMPANY. CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE FROM 3DIVI TO 3DIVI COMPANY PREVIOUSLY RECORDED ON REEL 028500 FRAME 0443. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT OF PATENT APPLICATION NO. 13541684. Assignors: ARGUTIN, ALEXANDER; MOROZOV, DMITRY; VALIK, ANDREY; ZAITSEV, PAVEL
Current legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304: Detection arrangements using opto-electronic means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures

Definitions

  • This disclosure relates generally to human-computer interfaces and, more particularly, to the technology for determining a location of a handheld pointing device, such as a remote controller for a game console, on a depth map generated by a gesture recognition control system.
  • gesture recognition technology which enables the users to interact with the computer naturally, using body language, without any mechanical devices.
  • the users can make inputs or generate commands using gestures or motions made by hands, arms, fingers, legs, and so forth. For example, using the concept of gesture recognition, it is possible to point a finger at the computer screen so that the pointer will move accordingly.
  • gesture recognition control systems also known as motion sensing input systems
  • a depth sensing camera which captures scene images in real time
  • a computing unit which interprets captured scene images so as to generate various commands based on identification of user gestures.
  • the gesture recognition control systems have very limited computational resources and a low-resolution depth sensing camera, so it is difficult to identify and track motions of relatively small objects such as handheld pointing devices.
  • the pointing devices may play an important role for human-computer interaction, especially, for gaming software applications.
  • the pointing devices may refer to controller wands or remote control devices enabling the users to generate specific commands by pressing dedicated buttons arranged thereon or by making predetermined gestures.
  • the computer can be controlled via the gesture recognition technology (i.e., by processing data related to motion and location of the handheld pointing devices) and also receipt of specific commands originated by pressing dedicated buttons.
  • the gesture recognition control systems when enabled, monitor and track all gestures performed by users with the help of handheld pointing devices.
  • a high resolution depth sensing camera and immoderate computational resources are used to enable the gesture recognition control systems to identify and track a motion of a relatively small handheld pointing device.
  • the present day gesture recognition control systems lack sufficient accuracy or generate unwanted latency when tracking gestures performed with relatively small handheld pointing devices.
  • the handheld pointing devices may include specific auxiliary devices, such as a lighting sphere, to facilitate their identification and tracking. Either one of these approaches is disadvantageous and increases costs of the gesture recognition control systems.
  • the present disclosure refers to gesture recognition control systems configured to identify various user gestures and generate corresponding control commands. More specifically, the technology disclosed herein can determine and track a current location of a handheld pointing device based upon comparison of user gestures captured by a depth sensing camera and motion data of a handheld pointing device acquired by a communication module. The present technology allows determining a current position of a handheld pointing device on a depth map using typical computational resources and without a necessity to use dedicated auxiliary devices such as a lighting sphere.
  • the gesture recognition control system includes a depth sensing camera, which is used for generation of a depth map, and also a computing unit configured to process the depth map in real time to identify a user, user gestures, one or more user body parts, a user skeleton, motion data associated with user gestures, orientation data associated with user gestures, generate one or more commands associated with the identified gestures, and so forth.
  • the gesture recognition control system further includes a communication module which may receive motion data of a handheld pointing device and optionally orientation data of the handheld pointing device.
  • the gesture recognition control system assigns a current location (coordinates) of the user hand to the handheld pointing device so that its exact location is determined and can be further tracked.
  • the gesture recognition control system may be operatively coupled to or integrated with a computer, display, game console, and so forth. Accordingly, the determined and tracked location of the handheld pointing device may be used to control a display screen, game, or any other software application running on the computer.
  • the present disclosure discloses various methods for determining and tracking a current location of handheld pointing device in real time and also corresponding systems that can be used to implement these methods.
  • a simplified summary of one or more aspects regarding these methods in order to provide a basic understanding of such aspects as a prelude to the more detailed description that is presented later.
  • An example method may comprise: determining one or more motions of one or more user hands on a depth map, generating motion data associated with the one or more motions of the one or more user hands, acquiring motion data of a handheld pointing device, determining that the motion of the handheld pointing device is associated with the one or more motions of the one or more user hands, and determining a current position of the handheld pointing device on the depth map.
  • the method may further comprise generating the depth map by capturing a series of images.
  • the method may further comprise generating a virtual skeleton of the user.
  • the virtual skeleton may comprise at least one virtual limb of the user.
  • the method may further comprise determining coordinates of the one or more user hands, wherein the coordinates are associated with the virtual skeleton, and generating motion data of the one or more user hands.
  • the determination that the motion of the handheld pointing device is associated with the one or more motions of the one or more user hands can comprise comparing motion data of the one or more user hands and motion data of the handheld pointing device.
  • the method may further comprise determining which hand is holding the handheld pointing device.
  • the method may further comprise selectively assigning the coordinates of the hand holding the handheld pointing device to the handheld pointing device.
  • the method may further comprise determining an orientation of the handheld pointing device based upon the coordinates of various virtual skeleton joints related to the hand holding the handheld pointing device.
  • the method may further comprise generating a vector associated with the orientation of the handheld pointing device.
  • the handheld pointing device can be selected from a group comprising: a cellular phone, a smart phone, a remote controller, a video game console, a handheld game console, a computer, and a tablet computer.
  • the motion data associated with a motion of the handheld pointing device can comprise one or more of acceleration data, velocity data, and inertial data.
  • the method may further comprise acquiring orientation data of the handheld pointing device.
  • the orientation data can be generated by one or more orientation sensors of the handheld pointing device.
  • the orientation data may comprise one or more of the following: a pitch angle, a roll angle, and a yaw angle.
  • the method may further comprise determining that the handheld pointing device is in active use by the user.
  • the handheld pointing device is in active use by the user when the handheld pointing device is held and moved by the user and when the user is identified on the depth map.
  • the method may further comprise identifying the user on the depth map.
  • the method may further comprise tracking motions of the one or more user hands.
  • FIG. 1 shows an example system environment for providing a real time human-computer interface.
  • FIG. 2 is a general illustration of a scene suitable for controlling an electronic device by recognition of gestures made by a user.
  • FIG. 3A shows a simplified view of an exemplary virtual skeleton associated with a user.
  • FIG. 3B shows a simplified view of an exemplary virtual skeleton associated with a user holding a handheld pointing device.
  • FIG. 4 shows an environment suitable for implementing methods for determining a position of a handheld pointing device.
  • FIG. 5 shows a simplified diagram of a handheld pointing device, according to an example embodiment.
  • FIG. 6 is a process flow diagram showing a method for determining a position of the handheld pointing device, according to an example embodiment.
  • FIG. 7 is a diagrammatic representation of an example machine in the form of a computer system within which a set of instructions for the machine to perform any one or more of the methodologies discussed herein is executed.
  • the techniques of the embodiments disclosed herein may be implemented using a variety of technologies.
  • the methods described herein may be implemented in software executing on a computer system or in hardware utilizing either a combination of microprocessors or other specially designed application-specific integrated circuits (ASICs), programmable logic devices, or various combinations thereof.
  • the methods described herein may be implemented by a series of computer-executable instructions residing on a storage medium such as a disk drive, or computer-readable medium.
  • the embodiments described herein relate to computer-implemented methods for determining and tracking a current location of a handheld pointing device.
  • one or more depth sensing cameras can be used to generate a depth map of a physical scene.
  • the depth map analysis and interpretation can be performed by a computing unit operatively coupled to or embedding the depth sensing camera.
  • Some examples of computing units may include a desktop computer, laptop computer, tablet computer, gaming console, audio system, video system, cellular phone, smart phone, personal digital assistant (PDA), set-top box (STB), television set, smart television system, or any other wired or wireless electronic device.
  • the computing unit may include or be operatively coupled to a communication unit which may communicate with various handheld pointing devices and, in particular, receive motion data of handheld pointing devices.
  • handheld pointing device refers to an input device or any other suitable remote controlling device which can be used for making an input.
  • Some examples of handheld pointing devices include a remote controller, cellular phone, smart phone, video game console, handheld game console, computer (e.g., a tablet computer), and so forth.
  • it may include various motion detectors, such as acceleration sensors, gyroscopes, or other detectors configured to measure velocity, momentum, and acceleration such as pitch, roll, and yaw (in other words, acceleration for X, Y, and Z movement in Cartesian axes), and/or orientation sensors to generate orientation data including pitch angles, roll angles, and yaw angles.
  • the handheld pointing device determines motion data, which includes velocities and/or acceleration levels, and transmits it to the computing unit over a wired or wireless network.
  • the computing unit interprets the depth map such that it may identify the user, generate a corresponding virtual skeleton of the user, which skeleton includes multiple “joints” and “bones,” and determine that the user made a gesture using his hands or arms.
  • the coordinates of every joint can be determined by the computing unit, and thus every user hand/arm motion can be tracked, and corresponding motion data can be generated, which may include a velocity, acceleration, orientation, and so forth.
  • the computing unit compares motion data associated with the user's hand/arm gesture and motion data (and optionally orientation data) associated with movement of the handheld pointing device. When both of these motion data coincide or correspond to each other, the computing unit determines that the handheld pointing device is held by a corresponding arm or hand of the user. Since coordinates of the user's arm/hand are known and tracked, the same coordinates are then assigned to the handheld pointing device. Therefore, the handheld pointing device can be tied to the virtual skeleton of the user so that the current location of the handheld pointing device can be determined and further monitored. In other words, the handheld pointing device is mapped on the depth map.
  • movements of the handheld pointing device may be further tracked in real time to identify particular user gestures causing the computing unit to generate corresponding control commands.
  • This approach can be used in various gaming and simulation/teaching software without a necessity to use immoderate computational resources, high resolution depth sensing cameras, or auxiliary devices (e.g., a lighting sphere) attached to the handheld pointing device to facilitate its identification on the depth map.
  • FIG. 1 shows an example system environment 100 for providing a real time human-computer interface.
  • the system environment 100 includes a gesture recognition control system 110 , a display device 120 , and an entertainment system 130 .
  • the gesture recognition control system 110 is configured to capture various user gestures and user inputs, interpret them, and generate corresponding control commands, which are further transmitted to the entertainment system 130 . Once the entertainment system 130 receives commands generated by the gesture recognition control system 110 , the entertainment system performs certain actions depending on which software application is running. For example, the user may control a pointer on the display screen by making certain gestures.
  • the entertainment system 130 may refer to any electronic device such as a computer (e.g., a laptop computer, desktop computer, tablet computer, workstation, server), game console, television (TV) set, TV adapter, STB, smart television system, audio system, video system, cellular phone, smart phone, PDA, and so forth.
  • FIG. 2 is a general illustration of a scene 200 suitable for controlling an electronic device by recognition of gestures made by a user.
  • this figure shows a user 210 interacting with the gesture recognition control system 110 with the help of a handheld pointing device 220 .
  • the gesture recognition control system 110 may include a depth sensing camera, a computing unit, and a communication unit, which can be stand-alone devices or embedded within a single housing (as shown).
  • the user and a corresponding environment, such as a living room, are located, at least in part, within the field of view of the depth sensing camera.
  • the gesture recognition control system 110 may be configured to capture a depth map of the scene in real time and further process the depth map to identify the user, determine one or more user gestures, determine one or more user body parts, and generate corresponding control commands.
  • the gesture recognition control system 110 may also determine specific motion data associated with user gestures, wherein the motion data may include coordinates of the user's hands or arms, and velocity and acceleration of the user's hands/arms.
  • the gesture recognition control system 110 may generate a virtual skeleton of the user as shown in FIG. 3A and described below in greater detail.
  • the handheld pointing device 220 may refer to a controller wand, remote control device (e.g., a gaming console remote controller), smart phone, cellular phone, PDA, tablet computer, or any other electronic device enabling the user 210 to generate specific commands by pressing dedicated buttons arranged thereon.
  • the handheld pointing device 220 is configured to determine its velocity, acceleration and/or orientation within the space with the help of embedded acceleration sensors, gyroscopes, or other motion sensors and/or orientation sensors.
  • the velocity, acceleration and/or orientation data can be transmitted to the gesture recognition control system 110 over a wireless or wired network.
  • a communication module which is configured to receive motion data (and optionally orientation data) associated with movements of handheld pointing device 220 , may be embedded in the gesture recognition control system 110 .
  • the gesture recognition control system 110 is also configured to determine the location of handheld pointing device 220 on the depth map by matching motion data associated with the gestures of one or more user's arms captured by the depth sensing camera and motion data (and optionally the orientation data) associated with movements of handheld pointing device 220 as received by the communication module. When the motions match each other, the gesture recognition control system 110 acknowledges that the handheld pointing device 220 is held in a particular hand of the user and then assigns coordinates of the user's hand to the handheld pointing device 220 . In various embodiments, this technology can be used for determining that the handheld pointing device 220 is in “active use,” which means that the handheld pointing device 220 is held by the user 210 who is located in the sensitive area of the depth sensing camera.
  • FIG. 3A shows a simplified view of an exemplary virtual skeleton 300 as can be generated by the gesture recognition control system 110 based upon the depth map.
  • the virtual skeleton 300 comprises a plurality of “bones” and “joints” 310 interconnecting the bones.
  • the bones and joints, in combination, represent the user 210 in real time so that every motion of the user's limbs is represented by corresponding motions of the bones and joints.
  • each of the joints 310 may be associated with certain coordinates in a three-dimensional (3D) space defining its exact location.
  • any motion of the user's limbs, such as an arm, may be represented by a plurality of coordinates or coordinate vectors related to the corresponding joint(s) 310.
  • motion data can be generated for every limb movement. This motion data may include exact coordinates per period of time, velocity, direction, acceleration, orientation, and so forth.
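  • For illustration only (not part of this disclosure), the following Python sketch shows one way such motion data could be derived: it estimates per-frame velocity and acceleration for a single tracked joint from its time-stamped 3D coordinates using finite differences. The JointSample structure and function names are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class JointSample:
    t: float    # capture timestamp in seconds
    pos: Vec3   # joint coordinates (x, y, z) in the camera's 3D space

def joint_motion_data(samples: List[JointSample]) -> List[dict]:
    """Estimate per-frame velocity, then acceleration, for one tracked joint."""
    frames = []
    for prev, curr in zip(samples, samples[1:]):
        dt = curr.t - prev.t
        if dt <= 0:
            continue
        velocity = tuple((c - p) / dt for c, p in zip(curr.pos, prev.pos))
        frames.append({"t": curr.t, "velocity": velocity})
    # acceleration as the change of velocity between consecutive frames
    for prev, curr in zip(frames, frames[1:]):
        dt = curr["t"] - prev["t"]
        curr["acceleration"] = tuple(
            (c - p) / dt for c, p in zip(curr["velocity"], prev["velocity"])
        )
    return frames
```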
  • FIG. 3B shows a simplified view of exemplary virtual skeleton 300 associated with the user 210 holding the handheld pointing device 220 .
  • once the gesture recognition control system 110 determines that the user 210 holds the handheld pointing device 220 and determines its location (coordinates), a corresponding mark or label can be generated on the virtual skeleton 300.
  • the gesture recognition control system 110 can determine an orientation of the handheld pointing device 220 by analyzing the virtual skeleton 300 and/or by acquiring orientation data from the handheld pointing device 220 .
  • the orientation of handheld pointing device 220 may be represented as a vector 320 as shown in FIG. 3B .
  • FIG. 4 shows an environment 400 suitable for implementing methods for determining a position of a handheld pointing device 220 .
  • the gesture recognition control system 110 may comprise at least one depth sensing camera 410 configured to capture a depth map.
  • depth map refers to an image or image channel that contains information relating to the distance of the surfaces of scene objects from a depth sensing camera.
  • the depth sensing camera 410 may include an infrared (IR) projector to generate modulated light, and also an IR camera to capture 3D images.
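  • As a purely illustrative aside, a depth value at a given pixel can be converted into a 3D point with a standard pinhole camera model; the focal lengths and principal point in the sketch below are made-up example values, not parameters taken from this disclosure.

```python
def pixel_to_point(u, v, depth_mm, fx=575.0, fy=575.0, cx=319.5, cy=239.5):
    """Map pixel (u, v) with depth in millimetres to a camera-space point in metres."""
    z = depth_mm / 1000.0   # convert depth to metres
    x = (u - cx) * z / fx   # back-project along the x axis
    y = (v - cy) * z / fy   # back-project along the y axis
    return (x, y, z)

# usage example: the pixel at the image centre, 1.5 m away from the camera
print(pixel_to_point(320, 240, 1500.0))
```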
  • the gesture recognition control system 110 may optionally comprise a color video camera 420 to capture a series of 2D images in addition to 3D imagery created by the depth sensing camera 410 .
  • the series of 2D images captured by the color video camera 420 may be used to facilitate identification of the user on the depth map and/or various gestures of the user.
  • the depth sensing camera 410 and the color video camera 420 can be either stand-alone devices or encased within a single housing.
  • the gesture recognition control system 110 may also comprise a computing unit 430 for processing depth data and generating control commands for one or more electronic devices 460 (e.g., the entertainment system 130 ).
  • the computing unit 430 is also configured to implement steps of methods for determining a position of the handheld pointing device 220 as described herein.
  • the gesture recognition control system 110 also includes a communication module 440 configured to communicate with the handheld pointing device 220 and one or more electronic devices 460 . More specifically, the communication module 440 is configured to receive motion data and orientation data from the handheld pointing device 220 and transmit control commands to one or more electronic devices 460 .
  • the gesture recognition control system 110 may also include a bus 450 interconnecting the depth sensing camera 410 , color video camera 420 , computing unit 430 , and communication module 440 .
  • the aforementioned one or more electronic devices 460 can refer, in general, to any electronic device configured to trigger one or more predefined actions upon receipt of a certain control command.
  • Some examples of electronic devices 460 include, but are not limited to, computers (e.g., laptop computers, tablet computers), displays, audio systems, video systems, gaming consoles, entertainment systems, lighting devices, cellular phones, smart phones, TVs, and so forth.
  • the communication between the communication module 440 and the handheld pointing device 220 and/or one or more electronic devices 460 can be performed via a network (not shown).
  • the network can be a wireless or wired network, or a combination thereof.
  • the network may include the Internet, local intranet, PAN (Personal Area Network), LAN (Local Area Network), WAN (Wide Area Network), MAN (Metropolitan Area Network), virtual private network (VPN), storage area network (SAN), frame relay connection, Advanced Intelligent Network (AIN) connection, synchronous optical network (SONET) connection, digital T1, T3, E1 or E3 line, Digital Data Service (DDS) connection, DSL (Digital Subscriber Line) connection, Ethernet connection, ISDN (Integrated Services Digital Network) line, dial-up port such as a V.90, V.34 or V.34bis analog modem connection, cable modem, ATM (Asynchronous Transfer Mode) connection, or an FDDI (Fiber Distributed Data Interface) or CDDI (Copper Distributed Data Interface) connection.
  • communications may also include links to any of a variety of wireless networks including WAP (Wireless Application Protocol), GPRS (General Packet Radio Service), GSM (Global System for Mobile Communication), CDMA (Code Division Multiple Access) or TDMA (Time Division Multiple Access), cellular phone networks, Global Positioning System (GPS), CDPD (cellular digital packet data), RIM (Research in Motion, Limited) duplex paging network, Bluetooth radio, or an IEEE 802.11-based radio frequency network.
  • the network can further include or interface with any one or more of the following: RS-232 serial connection, IEEE-1394 (Firewire) connection, Fiber Channel connection, IrDA (infrared) port, SCSI (Small Computer Systems Interface) connection, USB (Universal Serial Bus) connection, or other wired or wireless, digital or analog interface or connection, mesh or Digi® networking.
  • FIG. 5 shows a simplified diagram of the handheld pointing device 220 , according to an example embodiment.
  • the handheld pointing device 220 comprises one or more motion sensors 510 , one or more orientation sensors 520 and also a communication module 530 .
  • the handheld pointing device 220 may include additional modules (not shown), such as an input module, a computing module, a display, or any other modules, depending on the type of the handheld pointing device 220 .
  • the motion sensors 510 and orientation sensors 520 may include gyroscopes, acceleration sensors, velocity sensors, and so forth.
  • the motion sensors 510 are configured to determine motion data which may include a velocity, momentum, and acceleration such as pitch, roll, and yaw (in other words, acceleration for X, Y, and Z movement in Cartesian axes) of the handheld pointing device 220 .
  • the orientation sensors 520 may determine a relative orientation of the handheld pointing device 220 .
  • the orientation sensors 520 may be configured to generate orientation data including one or more of the following: pitch angle, roll angle, and yaw angle related to the handheld pointing device 220 .
  • motion data and optionally orientation data are then transmitted to the gesture recognition control system 110 with the help of the communication module 530.
  • the motion data and orientation data can be transmitted via the network as described above.
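  • A minimal sketch of how the handheld pointing device 220 might package and transmit one motion/orientation sample is given below; the JSON field names, units, receiver address, and UDP transport are illustrative assumptions and are not specified by this disclosure.

```python
import json
import socket
import time

def send_motion_sample(sock, addr, accel, gyro, pitch, roll, yaw):
    """Send one time-stamped motion/orientation sample to the control system."""
    packet = {
        "t": time.time(),                                           # sample timestamp
        "accel": accel,                                             # (ax, ay, az), m/s^2
        "gyro": gyro,                                               # (wx, wy, wz), rad/s
        "orientation": {"pitch": pitch, "roll": roll, "yaw": yaw},  # radians
    }
    sock.sendto(json.dumps(packet).encode("utf-8"), addr)

# usage example with a hypothetical receiver address
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_motion_sample(sock, ("192.168.0.10", 9000),
                   accel=(0.1, 9.8, 0.0), gyro=(0.0, 0.0, 0.2),
                   pitch=0.05, roll=0.0, yaw=1.1)
```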
  • FIG. 6 is a process flow diagram showing a method 600 for determining a position of the handheld pointing device 220 , according to an example embodiment.
  • the method 600 may be performed by processing logic that may comprise hardware (e.g., dedicated logic, programmable logic, and microcode), software (such as software run on a general-purpose computer system or a dedicated machine), or a combination of both.
  • the processing logic resides at the gesture recognition control system 110 .
  • the method 600 can be performed by the units/devices discussed above with reference to FIG. 4 .
  • Each of these units or devices can comprise processing logic.
  • examples of the foregoing units/devices may be virtual, and instructions said to be executed by a unit/device may in fact be retrieved and executed by a processor.
  • the foregoing units/devices may also include memory cards, servers, and/or computer discs. Although various modules may be configured to perform some or all of the various steps described herein, fewer or more units may be provided and still fall within the scope of example embodiments.
  • the method 600 may commence at operation 610 , with the depth sensing camera 410 generating a depth map by capturing a plurality of depth values of the scene in real time.
  • the depth map can be analyzed by the computing unit 430 to identify the user 210 on the depth map.
  • the computing unit 430 segments the depth data of the user 210 so as to generate a virtual skeleton of the user 210 .
  • the computing unit 430 determines coordinates of at least one user's hand (user's arm or user's limb).
  • the coordinates of the at least one user's hand can be associated with the virtual skeleton as discussed above.
  • the computing unit 430 determines a motion of the at least one user's hand by processing a plurality of depth maps.
  • the computing unit 430 generates motion data of the at least one user's hand.
  • the computing unit 430 acquires motion data and optionally orientation data of the handheld electronic device 220 via the communication module 440 .
  • the computing unit 430 compares the motion data (and optionally orientation data) of handheld electronic device 220 as acquired at operation 670 and the motion data of the at least one user's hand as generated at operation 660 . If the motion data of handheld electronic device 220 correspond (or match or are relatively similar) to the motion data of the user's hand, the computing unit 430 selectively assigns the coordinates of the user's hand to the handheld pointing device 220 at operation 690 . Thus, the location of handheld pointing device 220 is determined on the depth map. Further, the location of handheld pointing device 220 can be tracked in real time so that various gestures can be interpreted for generation of corresponding control commands for one or more electronic devices 460 .
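  • The following self-contained sketch illustrates operations 680 and 690 under assumed data layouts: each tracked hand carries its coordinates and a time-aligned acceleration-magnitude series, and the hand whose series best matches the device's series inherits the assignment. The threshold and field names are illustrative, not values from this disclosure.

```python
def assign_device_coordinates(hands, device_accel_mags, threshold=0.5):
    """Compare each hand's motion data with the device's motion data (operation 680)
    and, on a sufficiently close match, return that hand's coordinates (operation 690).

    hands: dict mapping a hand name to {"coords": (x, y, z), "accel_mags": [...]};
    device_accel_mags: acceleration-magnitude series reported by the device,
    time-aligned and of the same length as each hand's series.
    """
    best = None
    for name, hand in hands.items():
        # mean absolute difference between the two acceleration profiles
        diffs = [abs(h - d) for h, d in zip(hand["accel_mags"], device_accel_mags)]
        score = sum(diffs) / len(diffs)
        if best is None or score < best[0]:
            best = (score, name, hand["coords"])
    if best is not None and best[0] <= threshold:
        return best[1], best[2]   # the device inherits the matching hand's coordinates
    return None                   # no match: the device is not held by a tracked hand

# usage example with made-up numbers
hands = {
    "left":  {"coords": (0.4, 1.2, 2.1),  "accel_mags": [0.1, 0.1, 0.2, 0.1]},
    "right": {"coords": (-0.3, 1.1, 2.0), "accel_mags": [0.2, 1.5, 3.0, 1.1]},
}
print(assign_device_coordinates(hands, [0.25, 1.4, 2.9, 1.0]))  # -> right hand
```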
  • the described technology can be used for determining that the handheld pointing device 220 is in active use by the user 210 .
  • active use means that the user 210 is identified on the depth map (see operation 620 ) or, in other words, is located within the viewing area of depth sensing camera 410 when the handheld pointing device 220 is moved.
  • the method 600 may further include operations (not shown) when the computing unit 430 generates a vector defining the current orientation of the handheld pointing device 220 .
  • the orientation of handheld pointing device 220 may be represented as the vector 320 (see FIG. 3B ).
  • the computing unit 430 generates the vector 320 by processing the orientation data of the handheld electronic device 220 as acquired at operation 670, transforming the orientation data tied to the axis system of the handheld electronic device 220 into orientation data tied to the axis system of the gesture recognition control system 110.
  • the vector coordinates are calculated for the axis system associated with the gesture recognition control system 110 based upon the vector coordinates in the axis system of the handheld electronic device 220 .
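  • By way of a worked example (the Z-Y-X yaw-pitch-roll Euler convention and the device's "forward" axis below are assumptions for illustration, not specifics of this disclosure), the reported pitch, roll, and yaw angles can be turned into a rotation matrix that re-expresses the device's pointing direction in the axis system of the gesture recognition control system 110.

```python
import math

def rotation_from_ypr(yaw, pitch, roll):
    """Build a 3x3 rotation matrix from yaw, pitch, roll in radians (Z-Y-X order)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def apply(matrix, vec):
    """Multiply a 3x3 matrix by a 3-vector."""
    return tuple(sum(m * v for m, v in zip(row, vec)) for row in matrix)

# the device's "forward" axis in its own frame, rotated by the reported
# orientation, gives a pointing vector (cf. vector 320) in the shared frame
device_forward = (0.0, 0.0, 1.0)
vector_320 = apply(rotation_from_ypr(yaw=1.1, pitch=0.05, roll=0.0), device_forward)
print(vector_320)
```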
  • FIG. 7 shows a diagrammatic representation of a computing device for a machine in the example electronic form of a computer system 700 , within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein can be executed.
  • the machine operates as a standalone device, or can be connected (e.g., networked) to other machines.
  • the machine can operate in the capacity of a server, a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
  • the machine can be a personal computer (PC), tablet PC, STB, PDA, cellular telephone, portable music player (e.g., a portable hard drive audio device, such as a Moving Picture Experts Group Audio Layer 3 (MP3) player), web appliance, network router, switch, bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • the example computer system 700 includes a processor or multiple processors 702 (e.g., a central processing unit (CPU), graphics processing unit (GPU), or both), main memory 704 and static memory 706 , which communicate with each other via a bus 708 .
  • the computer system 700 can further include a video display unit 710 (e.g., a liquid crystal display (LCD) or cathode ray tube (CRT)).
  • the computer system 700 also includes at least one input device 712 , such as an alphanumeric input device (e.g., a keyboard), pointer control device (e.g., a mouse), microphone, digital camera, video camera, and so forth.
  • the computer system 700 also includes a disk drive unit 714 , signal generation device 716 (e.g., a speaker), and network interface device 718 .
  • the disk drive unit 714 includes a computer-readable medium 720 that stores one or more sets of instructions and data structures (e.g., instructions 722 ) embodying or utilized by any one or more of the methodologies or functions described herein.
  • the instructions 722 can also reside, completely or at least partially, within the main memory 704 and/or within the processors 702 during execution by the computer system 700 .
  • the main memory 704 and the processors 702 also constitute machine-readable media.
  • the instructions 722 can further be transmitted or received over the network 724 via the network interface device 718 utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP), CAN, Serial, and Modbus).
  • While the computer-readable medium 720 is shown in an example embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • the term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine, and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions.
  • the term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media. Such media may also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memory (RAM), read only memory (ROM), and the like.
  • the example embodiments described herein may be implemented in an operating environment comprising computer-executable instructions (e.g., software) installed on a computer, in hardware, or in a combination of software and hardware.
  • the computer-executable instructions may be written in a computer programming language or may be embodied in firmware logic. If written in a programming language conforming to a recognized standard, such instructions may be executed on a variety of hardware platforms and for interfaces associated with a variety of operating systems.
  • computer software programs for implementing the present method may be written in any number of suitable programming languages such as, for example, C, C++, C#, Cobol, Eiffel, Haskell, Visual Basic, Java, JavaScript, Python, or other compilers, assemblers, interpreters, or other computer languages or platforms.

Abstract

Disclosed are methods for determining and tracking a current location of a handheld pointing device, such as a remote control for an entertainment system, on a depth map generated by a gesture recognition control system. The methods disclosed herein enable identifying a user's hand gesture and generating corresponding motion data. Further, the handheld pointing device may send motion data, such as acceleration or velocity, and/or orientation data, such as pitch, roll, and yaw angles. The motion data of the user's hand gesture and the motion data (and optionally orientation data) received from the handheld pointing device are then compared, and if they correspond to each other, it is determined that the handheld pointing device is in active use by the user as it is held by a particular hand. Accordingly, a location of the handheld pointing device on the depth map can be determined.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a Continuation-in-Part of Russian Patent Application Serial No. 2011127116, filed on Jul. 4, 2011, which is incorporated herein by reference in its entirety for all purposes.
  • TECHNICAL FIELD
  • This disclosure relates generally to human-computer interfaces and, more particularly, to the technology for determining a location of a handheld pointing device, such as a remote controller for a game console, on a depth map generated by a gesture recognition control system.
  • BACKGROUND
  • The approaches described in this section could be pursued, but are not necessarily approaches that have previously been conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.
  • Technologies associated with human-computer interaction have evolved over the last several decades. There are currently many various input devices and associated interfaces to enable computer users to control and provide data to their computers. Keyboards, pointing devices, joysticks, and touchscreens are just some examples of input devices that can be used to interact with various software products. One of the rapidly growing technologies in this field is the gesture recognition technology which enables the users to interact with the computer naturally, using body language, without any mechanical devices. In particular, the users can make inputs or generate commands using gestures or motions made by hands, arms, fingers, legs, and so forth. For example, using the concept of gesture recognition, it is possible to point a finger at the computer screen so that the pointer will move accordingly.
  • There currently exist various gesture recognition control systems (also known as motion sensing input systems) which, generally speaking, include a depth sensing camera, which captures scene images in real time, and a computing unit, which interprets captured scene images so as to generate various commands based on identification of user gestures. Typically, the gesture recognition control systems have very limited computational resources and a low-resolution depth sensing camera, so it is difficult to identify and track motions of relatively small objects such as handheld pointing devices.
  • Various handheld pointing devices may play an important role for human-computer interaction, especially, for gaming software applications. The pointing devices may refer to controller wands or remote control devices enabling the users to generate specific commands by pressing dedicated buttons arranged thereon or by making predetermined gestures. Accordingly, the computer can be controlled via the gesture recognition technology (i.e., by processing data related to motion and location of the handheld pointing devices) and also receipt of specific commands originated by pressing dedicated buttons.
  • Typically, the gesture recognition control systems, when enabled, monitor and track all gestures performed by users with the help of handheld pointing devices. However, to enable the gesture recognition control systems to identify and track a motion of a relatively small handheld pointing device, a high-resolution depth sensing camera and immoderate computational resources are used. Moreover, the present day gesture recognition control systems lack sufficient accuracy or generate unwanted latency when tracking gestures performed with relatively small handheld pointing devices. Alternatively, the handheld pointing devices may include specific auxiliary devices, such as a lighting sphere, to facilitate their identification and tracking. Either one of these approaches is disadvantageous and increases costs of the gesture recognition control systems. In view of the foregoing, there is still a need for improvements of gesture recognition control systems that will enhance interaction effectiveness and reduce required computational resources.
  • SUMMARY
  • This summary is provided to introduce a selection of concepts in a simplified form that are further described in the Detailed Description below. This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
  • The present disclosure refers to gesture recognition control systems configured to identify various user gestures and generate corresponding control commands. More specifically, the technology disclosed herein can determine and track a current location of a handheld pointing device based upon comparison of user gestures captured by a depth sensing camera and motion data of a handheld pointing device acquired by a communication module. The present technology allows determining a current position of a handheld pointing device on a depth map using typical computational resources and without a necessity to use dedicated auxiliary devices such as a lighting sphere.
  • The gesture recognition control system includes a depth sensing camera, which is used for generation of a depth map, and also a computing unit configured to process the depth map in real time to identify a user, user gestures, one or more user body parts, a user skeleton, motion data associated with user gestures, orientation data associated with user gestures, generate one or more commands associated with the identified gestures, and so forth. The gesture recognition control system further includes a communication module which may receive motion data of a handheld pointing device and optionally orientation data of the handheld pointing device. Once motion data (and optionally the orientation data) of the handheld pointing device and motion data associated with a user gesture, such as a gesture of a user's arm, correspond to each other, the gesture recognition control system assigns a current location (coordinates) of the user hand to the handheld pointing device so that its exact location is determined and can be further tracked.
  • The gesture recognition control system may be operatively coupled to or integrated with a computer, display, game console, and so forth. Accordingly, the determined and tracked location of the handheld pointing device may be used to control a display screen, game, or any other software application running on the computer.
  • Thus, the present disclosure discloses various methods for determining and tracking a current location of handheld pointing device in real time and also corresponding systems that can be used to implement these methods. Below is provided a simplified summary of one or more aspects regarding these methods in order to provide a basic understanding of such aspects as a prelude to the more detailed description that is presented later.
  • According to an aspect, there is provided a method for determining a position of a handheld pointing device. An example method may comprise: determining one or more motions of one or more user hands on a depth map, generating motion data associated with the one or more motions of the one or more user hands, acquiring motion data of a handheld pointing device, determining that the motion of the handheld pointing device is associated with the one or more motions of the one or more user hands, and determining a current position of the handheld pointing device on the depth map.
  • According to various embodiments, the method may further comprise generating the depth map by capturing a series of images. The method may further comprise generating a virtual skeleton of the user. The virtual skeleton may comprise at least one virtual limb of the user. The method may further comprise determining coordinates of the one or more user hands, wherein the coordinates are associated with the virtual skeleton, and generating motion data of the one or more user hands. The determination that the motion of the handheld pointing device is associated with the one or more motions of the one or more user hands can comprise comparing motion data of the one or more user hands and motion data of the handheld pointing device.
  • According to further embodiments, the method may further comprise determining which hand is holding the handheld pointing device. The method may further comprise selectively assigning the coordinates of the hand holding the handheld pointing device to the handheld pointing device. The method may further comprise determining an orientation of the handheld pointing device based upon the coordinates of various virtual skeleton joints related to the hand holding the handheld pointing device.
  • According to further embodiments, the method may further comprise generating a vector associated with the orientation of the handheld pointing device. The handheld pointing device can be selected from a group comprising: a cellular phone, a smart phone, a remote controller, a video game console, a handheld game console, a computer, and a tablet computer. The motion data associated with a motion of the handheld pointing device can comprise one or more of acceleration data, velocity data, and inertial data.
  • According to further embodiments, the method may further comprise acquiring orientation data of the handheld pointing device. The orientation data can be generated by one or more orientation sensors of the handheld pointing device. The orientation data may comprise one or more of the following: a pitch angle, a roll angle, and a yaw angle.
  • According to further embodiments, the method may further comprise determining that the handheld pointing device is in active use by the user. The handheld pointing device is in active use by the user when the handheld pointing device is held and moved by the user and when the user is identified on the depth map.
  • According to yet another embodiment, the method may further comprise identifying the user on the depth map. The method may further comprise tracking motions of the one or more user hands.
  • In further examples, the above method steps are stored on a non-transitory machine-readable medium comprising instructions, which perform the steps when implemented by one or more processors. In yet further examples, subsystems or devices can be adapted to perform the recited steps. Other features, examples, and embodiments are described below.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments are illustrated by way of example, and not by limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
  • FIG. 1 shows an example system environment for providing a real time human-computer interface.
  • FIG. 2 is a general illustration of a scene suitable for controlling an electronic device by recognition of gestures made by a user.
  • FIG. 3A shows a simplified view of an exemplary virtual skeleton associated with a user.
  • FIG. 3B shows a simplified view of an exemplary virtual skeleton associated with a user holding a handheld pointing device.
  • FIG. 4 shows an environment suitable for implementing methods for determining a position of a handheld pointing device.
  • FIG. 5 shows a simplified diagram of a handheld pointing device, according to an example embodiment.
  • FIG. 6 is a process flow diagram showing a method for determining a position of the handheld pointing device, according to an example embodiment.
  • FIG. 7 is a diagrammatic representation of an example machine in the form of a computer system within which a set of instructions for the machine to perform any one or more of the methodologies discussed herein is executed.
  • DETAILED DESCRIPTION
  • The following detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show illustrations in accordance with example embodiments. These example embodiments, which are also referred to herein as “examples,” are described in enough detail to enable those skilled in the art to practice the present subject matter. The embodiments can be combined, other embodiments can be utilized, or structural, logical, and electrical changes can be made without departing from the scope of what is claimed. The following detailed description is therefore not to be taken in a limiting sense, and the scope is defined by the appended claims and their equivalents. In this document, the terms “a” and “an” are used, as is common in patent documents, to include one or more than one. In this document, the term “or” is used to refer to a nonexclusive “or,” such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated.
  • The techniques of the embodiments disclosed herein may be implemented using a variety of technologies. For example, the methods described herein may be implemented in software executing on a computer system or in hardware utilizing either a combination of microprocessors or other specially designed application-specific integrated circuits (ASICs), programmable logic devices, or various combinations thereof. In particular, the methods described herein may be implemented by a series of computer-executable instructions residing on a storage medium such as a disk drive, or computer-readable medium.
  • The embodiments described herein relate to computer-implemented methods for determining and tracking a current location of a handheld pointing device.
  • In general, one or more depth sensing cameras (and, optionally, video cameras) can be used to generate a depth map of a physical scene. The depth map analysis and interpretation can be performed by a computing unit operatively coupled to or embedding the depth sensing camera. Some examples of computing units may include a desktop computer, laptop computer, tablet computer, gaming console, audio system, video system, cellular phone, smart phone, personal digital assistant (PDA), set-top box (STB), television set, smart television system, or any other wired or wireless electronic device. The computing unit may include or be operatively coupled to a communication unit which may communicate with various handheld pointing devices and, in particular, receive motion data of handheld pointing devices.
  • The term “handheld pointing device,” as used herein, refers to an input device or any other suitable remote controlling device which can be used for making an input. Some examples of handheld pointing devices include a remote controller, cellular phone, smart phone, video game console, handheld game console, computer (e.g., a tablet computer), and so forth. Regardless of what type of handheld pointing device is used, it may include various motion detectors, such as acceleration sensors, gyroscopes, or other detectors configured to measure velocity, momentum, and acceleration such as pitch, roll, and yaw (in other words, acceleration for X, Y, and Z movement in Cartesian axes), and/or orientation sensors to generate orientation data including pitch angles, roll angles, and yaw angles. In operation, the handheld pointing device determines motion data, which includes velocities and/or acceleration levels, and transmits it to the computing unit over a wired or wireless network.
  • The computing unit, in turn, interprets the depth map such that it may identify the user, generate a corresponding virtual skeleton of the user, which skeleton includes multiple “joints” and “bones,” and determine that the user made a gesture using his hands or arms. The coordinates of every joint can be determined by the computing unit, and thus every user hand/arm motion can be tracked, and corresponding motion data can be generated, which may include a velocity, acceleration, orientation, and so forth.
  • Further, the computing unit compares motion data associated with the user's hand/arm gesture and motion data (and optionally orientation data) associated with movement of the handheld pointing device. When both of these motion data coincide or correspond to each other, the computing unit determines that the handheld pointing device is held by a corresponding arm or hand of the user. Since coordinates of the user's arm/hand are known and tracked, the same coordinates are then assigned to the handheld pointing device. Therefore, the handheld pointing device can be tied to the virtual skeleton of the user so that the current location of the handheld pointing device can be determined and further monitored. In other words, the handheld pointing device is mapped on the depth map.
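  • One simple way to test that the two motions "coincide or correspond to each other" is to correlate time-aligned acceleration-magnitude series from the hand (derived from the depth map) and from the device's sensors; the sketch below is illustrative, and the correlation threshold is an assumption rather than a value from this disclosure.

```python
import math

def normalized_correlation(a, b):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    norm_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    norm_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return cov / (norm_a * norm_b)

def motions_match(hand_accel_mags, device_accel_mags, threshold=0.8):
    """True if the hand and device acceleration profiles are sufficiently similar."""
    return normalized_correlation(hand_accel_mags, device_accel_mags) >= threshold

# usage example with made-up samples
print(motions_match([0.1, 0.9, 2.0, 0.8], [0.2, 1.0, 2.1, 0.7]))  # True
```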
  • Once the handheld pointing device is tied to the user, movements of the handheld pointing device may be further tracked in real time to identify particular user gestures causing the computing unit to generate corresponding control commands. This approach can be used in various gaming and simulation/teaching software without a necessity to use immoderate computational resources, high resolution depth sensing cameras, or auxiliary devices (e.g., a lighting sphere) attached to the handheld pointing device to facilitate its identification on the depth map. The technology described herein provides an easy and effective method for locating the handheld pointing device on the scene and tracking its motions.
  • Provided below is a detailed description of various embodiments related to methods and systems for determining a position of a handheld pointing device.
  • With reference now to the drawings, FIG. 1 shows an example system environment 100 for providing a real time human-computer interface. The system environment 100 includes a gesture recognition control system 110, a display device 120, and an entertainment system 130.
  • The gesture recognition control system 110 is configured to capture various user gestures and user inputs, interpret them, and generate corresponding control commands, which are further transmitted to the entertainment system 130. Once the entertainment system 130 receives commands generated by the gesture recognition control system 110, the entertainment system performs certain actions depending on which software application is running. For example, the user may control a pointer on the display screen by making certain gestures.
  • The entertainment system 130 may refer to any electronic device such as a computer (e.g., a laptop computer, desktop computer, tablet computer, workstation, server), game console, television (TV) set, TV adapter, STB, smart television system, audio system, video system, cellular phone, smart phone, PDA, and so forth. Although the figure shows that the gesture recognition control system 110 and the entertainment system 130 are separate and stand-alone devices, in some alternative embodiments, these systems can be integrated within a single device.
  • FIG. 2 is a general illustration of a scene 200 suitable for controlling an electronic device by recognition of gestures made by a user. In particular, this figure shows a user 210 interacting with the gesture recognition control system 110 with the help of a handheld pointing device 220.
  • The gesture recognition control system 110 may include a depth sensing camera, a computing unit, and a communication unit, which can be stand-alone devices or embedded within a single housing (as shown). Generally speaking, the user and a corresponding environment, such as a living room, are located, at least in part, within the field of view of the depth sensing camera.
  • More specifically, the gesture recognition control system 110 may be configured to capture a depth map of the scene in real time and further process the depth map to identify the user, determine one or more user gestures, determine one or more user body parts, and generate corresponding control commands. The gesture recognition control system 110 may also determine specific motion data associated with user gestures, wherein the motion data may include coordinates of the user's hands or arms, and velocity and acceleration of the user's hands/arms. For this purpose, the gesture recognition control system 110 may generate a virtual skeleton of the user as shown in FIG. 3 and described below in greater detail.
  • The handheld pointing device 220 may refer to a controller wand, remote control device (e.g., a gaming console remote controller), smart phone, cellular phone, PDA, tablet computer, or any other electronic device enabling the user 210 to generate specific commands by pressing dedicated buttons arranged thereon. The handheld pointing device 220 is configured to determine its velocity, acceleration, and/or orientation within the space with the help of embedded acceleration sensors, gyroscopes, or other motion sensors and/or orientation sensors. The velocity, acceleration, and/or orientation data can be transmitted to the gesture recognition control system 110 over a wireless or wired network. Accordingly, a communication module, which is configured to receive motion data (and optionally orientation data) associated with movements of the handheld pointing device 220, may be embedded in the gesture recognition control system 110.
  • The gesture recognition control system 110 is also configured to determine the location of the handheld pointing device 220 on the depth map by matching motion data associated with gestures of one or more of the user's arms, as captured by the depth sensing camera, against motion data (and optionally orientation data) associated with movements of the handheld pointing device 220, as received by the communication module. When the motions match each other, the gesture recognition control system 110 acknowledges that the handheld pointing device 220 is held in a particular hand of the user and then assigns the coordinates of the user's hand to the handheld pointing device 220. In various embodiments, this technology can be used for determining that the handheld pointing device 220 is in “active use,” which means that the handheld pointing device 220 is held by the user 210 who is located in the sensitive area of the depth sensing camera.
  • FIG. 3A shows a simplified view of an exemplary virtual skeleton 300 as can be generated by the gesture recognition control system 110 based upon the depth map. As shown in the figure, the virtual skeleton 300 comprises a plurality of “bones” and “joints” 310 interconnecting the bones. The bones and joints, in combination, represent the user 210 in real time so that every motion of the user's limbs is represented by corresponding motions of the bones and joints.
  • According to various embodiments, each of the joints 310 may be associated with certain coordinates in a three-dimensional (3D) space defining its exact location. Hence, any motion of the user's limbs, such as an arm, may be represented by a plurality of coordinates or coordinate vectors related to the corresponding joint(s) 310. By tracking user motions via the virtual skeleton model, motion data can be generated for every limb movement. This motion data may include exact coordinates per period of time, velocity, direction, acceleration, orientation, and so forth.
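As one way to picture how motion data can be derived from tracked joint coordinates, the sketch below estimates velocity and acceleration by finite differences over consecutive depth frames. This is an illustrative numerical method, not necessarily the one used by the system described here.

```python
from typing import List, Sequence, Tuple

Vec3 = Tuple[float, float, float]

def joint_motion_data(positions: Sequence[Vec3],
                      timestamps: Sequence[float]) -> Tuple[List[Vec3], List[Vec3]]:
    """Estimate velocity and acceleration vectors of one tracked joint from its
    3D positions across consecutive depth frames, using finite differences."""
    velocities: List[Vec3] = []
    for i in range(1, len(positions)):
        dt = timestamps[i] - timestamps[i - 1]
        velocities.append(tuple((b - a) / dt
                                for a, b in zip(positions[i - 1], positions[i])))
    accelerations: List[Vec3] = []
    for i in range(1, len(velocities)):
        dt = timestamps[i + 1] - timestamps[i]
        accelerations.append(tuple((b - a) / dt
                                   for a, b in zip(velocities[i - 1], velocities[i])))
    return velocities, accelerations
```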
  • FIG. 3B shows a simplified view of the exemplary virtual skeleton 300 associated with the user 210 holding the handheld pointing device 220. In particular, when the gesture recognition control system 110 determines that the user 210 holds the handheld pointing device 220 and then determines the location (coordinates) of the handheld pointing device 220, a corresponding mark or label can be generated on the virtual skeleton 300.
  • According to various embodiments, the gesture recognition control system 110 can determine an orientation of the handheld pointing device 220 by analyzing the virtual skeleton 300 and/or by acquiring orientation data from the handheld pointing device 220. In this case, the orientation of handheld pointing device 220 may be represented as a vector 320 as shown in FIG. 3B.
  • FIG. 4 shows an environment 400 suitable for implementing methods for determining a position of a handheld pointing device 220. As shown in this figure, there is provided the gesture recognition control system 110, which may comprise at least one depth sensing camera 410 configured to capture a depth map. The term “depth map,” as used herein, refers to an image or image channel that contains information relating to the distance of the surfaces of scene objects from a depth sensing camera. In various embodiments, the depth sensing camera 410 may include an infrared (IR) projector to generate modulated light, and also an IR camera to capture 3D images. In yet further example embodiments, the gesture recognition control system 110 may optionally comprise a color video camera 420 to capture a series of 2D images in addition to the 3D imagery created by the depth sensing camera 410. The series of 2D images captured by the color video camera 420 may be used to facilitate identification of the user on the depth map and/or various gestures of the user. It should also be noted that the depth sensing camera 410 and the color video camera 420 can either be stand-alone devices or be encased within a single housing.
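For orientation, the sketch below shows how a single depth-map pixel can be turned into a 3D point in the camera frame under a standard pinhole model. The intrinsic parameters fx, fy, cx, and cy are assumptions about the particular depth sensor and are not specified in the description above.

```python
from typing import Tuple

def depth_pixel_to_point(u: int, v: int, depth_m: float,
                         fx: float, fy: float, cx: float, cy: float
                         ) -> Tuple[float, float, float]:
    """Back-project one depth-map pixel (u, v) with depth in meters into a 3D
    point in the camera frame, assuming an ideal pinhole camera model with
    intrinsics fx, fy (focal lengths) and cx, cy (principal point)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)
```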
  • Furthermore, the gesture recognition control system 110 may also comprise a computing unit 430 for processing depth data and generating control commands for one or more electronic devices 460 (e.g., the entertainment system 130). The computing unit 430 is also configured to implement steps of methods for determining a position of the handheld pointing device 220 as described herein.
  • The gesture recognition control system 110 also includes a communication module 440 configured to communicate with the handheld pointing device 220 and one or more electronic devices 460. More specifically, the communication module 440 is configured to receive motion data and orientation data from the handheld pointing device 220 and transmit control commands to one or more electronic devices 460.
  • The gesture recognition control system 110 may also include a bus 450 interconnecting the depth sensing camera 410, color video camera 420, computing unit 430, and communication module 440.
  • The aforementioned one or more electronic devices 460 can refer, in general, to any electronic device configured to trigger one or more predefined actions upon receipt of a certain control command. Some examples of electronic devices 460 include, but are not limited to, computers (e.g., laptop computers, tablet computers), displays, audio systems, video systems, gaming consoles, entertainment systems, lighting devices, cellular phones, smart phones, TVs, and so forth.
  • The communication between the communication module 440 and the handheld pointing device 220 and/or one or more electronic devices 460 can be performed via a network (not shown). The network can be a wireless or wired network, or a combination thereof. For example, the network may include the Internet, local intranet, PAN (Personal Area Network), LAN (Local Area Network), WAN (Wide Area Network), MAN (Metropolitan Area Network), virtual private network (VPN), storage area network (SAN), frame relay connection, Advanced Intelligent Network (AIN) connection, synchronous optical network (SONET) connection, digital T1, T3, E1 or E3 line, Digital Data Service (DDS) connection, DSL (Digital Subscriber Line) connection, Ethernet connection, ISDN (Integrated Services Digital Network) line, dial-up port such as a V.90, V.34 or V.34bis analog modem connection, cable modem, ATM (Asynchronous Transfer Mode) connection, or an FDDI (Fiber Distributed Data Interface) or CDDI (Copper Distributed Data Interface) connection. Furthermore, communications may also include links to any of a variety of wireless networks including WAP (Wireless Application Protocol), GPRS (General Packet Radio Service), GSM (Global System for Mobile Communication), CDMA (Code Division Multiple Access) or TDMA (Time Division Multiple Access), cellular phone networks, Global Positioning System (GPS), CDPD (cellular digital packet data), RIM (Research in Motion, Limited) duplex paging network, Bluetooth radio, or an IEEE 802.11-based radio frequency network. The network can further include or interface with any one or more of the following: RS-232 serial connection, IEEE-1394 (Firewire) connection, Fiber Channel connection, IrDA (infrared) port, SCSI (Small Computer Systems Interface) connection, USB (Universal Serial Bus) connection, or other wired or wireless, digital or analog interface or connection, mesh or Digi® networking.
  • FIG. 5 shows a simplified diagram of the handheld pointing device 220, according to an example embodiment. As shown in the figure, the handheld pointing device 220 comprises one or more motion sensors 510, one or more orientation sensors 520 and also a communication module 530. In various alternative embodiments, the handheld pointing device 220 may include additional modules (not shown), such as an input module, a computing module, a display, or any other modules, depending on the type of the handheld pointing device 220.
  • The motion sensors 510 and orientation sensors 520 may include gyroscopes, acceleration sensors, velocity sensors, and so forth. In general, the motion sensors 510 are configured to determine motion data, which may include the velocity, momentum, and acceleration of the handheld pointing device 220 (rotational motion such as pitch, roll, and yaw, as well as translational acceleration along the X, Y, and Z Cartesian axes). The orientation sensors 520 may determine a relative orientation of the handheld pointing device 220. In an example, the orientation sensors 520 may be configured to generate orientation data including one or more of the following: pitch angle, roll angle, and yaw angle related to the handheld pointing device 220. In operation, motion data and optionally orientation data are then transmitted to the gesture recognition control system 110 with the help of the communication module 530. The motion data and orientation data can be transmitted via the network as described above.
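A device-side sketch of this reporting loop is shown below. The sensor-reading and sending callables, the JSON payload, and the 100 Hz rate are hypothetical stand-ins, since the disclosure does not prescribe a particular transport or data format.

```python
import json
import time
from typing import Callable, Tuple

def run_pointing_device(read_accel: Callable[[], Tuple[float, float, float]],
                        read_orientation: Callable[[], Tuple[float, float, float]],
                        send: Callable[[str], None],
                        rate_hz: float = 100.0) -> None:
    """Device-side loop: sample the motion and orientation sensors and push each
    report toward the gesture recognition control system. The three callables
    stand in for the sensor drivers and the communication module 530."""
    period = 1.0 / rate_hz
    while True:
        ax, ay, az = read_accel()              # acceleration in the device's axes
        pitch, roll, yaw = read_orientation()  # orientation data, radians
        send(json.dumps({"t": time.time(),
                         "accel": [ax, ay, az],
                         "pitch": pitch, "roll": roll, "yaw": yaw}))
        time.sleep(period)
```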
  • FIG. 6 is a process flow diagram showing a method 600 for determining a position of the handheld pointing device 220, according to an example embodiment. The method 600 may be performed by processing logic that may comprise hardware (e.g., dedicated logic, programmable logic, and microcode), software (such as software run on a general-purpose computer system or a dedicated machine), or a combination of both. In one example embodiment, the processing logic resides at the gesture recognition control system 110.
  • The method 600 can be performed by the units/devices discussed above with reference to FIG. 4. Each of these units or devices can comprise processing logic. It will be appreciated by one of ordinary skill in the art that examples of the foregoing units/devices may be virtual, and instructions said to be executed by a unit/device may in fact be retrieved and executed by a processor. The foregoing units/devices may also include memory cards, servers, and/or computer discs. Although various modules may be configured to perform some or all of the various steps described herein, fewer or more units may be provided and still fall within the scope of example embodiments.
  • As shown in FIG. 6, the method 600 may commence at operation 610, with the depth sensing camera 410 generating a depth map by capturing a plurality of depth values of the scene in real time.
  • At operation 620, the depth map can be analyzed by the computing unit 430 to identify the user 210 on the depth map. At operation 630, the computing unit 430 segments the depth data of the user 210 so as to generate a virtual skeleton of the user 210.
  • At operation 640, the computing unit 430 determines coordinates of at least one of the user's hands (or, more generally, an arm or limb of the user). The coordinates of the at least one user's hand can be associated with the virtual skeleton as discussed above.
  • At operation 650, the computing unit 430 determines a motion of the at least one user's hand by processing a plurality of depth maps. At operation 660, the computing unit 430 generates motion data of the at least one user's hand. At operation 670, the computing unit 430 acquires motion data and optionally orientation data of the handheld pointing device 220 via the communication module 440.
  • At operation 680, the computing unit 430 compares the motion data (and optionally orientation data) of the handheld pointing device 220 as acquired at operation 670 with the motion data of the at least one user's hand as generated at operation 660. If the motion data of the handheld pointing device 220 correspond (or match, or are relatively similar) to the motion data of the user's hand, the computing unit 430 selectively assigns the coordinates of the user's hand to the handheld pointing device 220 at operation 690. Thus, the location of the handheld pointing device 220 is determined on the depth map. Further, the location of the handheld pointing device 220 can be tracked in real time so that various gestures can be interpreted for generation of corresponding control commands for one or more electronic devices 460.
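The following sketch condenses operations 680-690 into a single function, assuming per-hand motion traces and a pluggable similarity measure (for example, the cosine-similarity sketch shown earlier). It is an illustration of the comparison-and-assignment step under those assumptions, not a definitive implementation; the names and the 0.9 threshold are invented for the example.

```python
from typing import Callable, Dict, List, Optional, Tuple

Vec3 = Tuple[float, float, float]

def map_pointing_device(hand_coords: Dict[str, Vec3],
                        hand_traces: Dict[str, List[float]],
                        device_trace: List[float],
                        similarity: Callable[[List[float], List[float]], float],
                        threshold: float = 0.9) -> Optional[Vec3]:
    """Compare the device's motion data against each tracked hand's motion data;
    if one hand matches closely enough, assign that hand's current depth-map
    coordinates to the handheld pointing device."""
    best_hand, best_score = None, threshold
    for hand, trace in hand_traces.items():
        score = similarity(trace, device_trace)
        if score > best_score:
            best_hand, best_score = hand, score
    if best_hand is None:
        return None                    # no match: device not mapped on the depth map
    return hand_coords[best_hand]      # coordinates assigned to the pointing device
```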
  • In various embodiments, the described technology can be used for determining that the handheld pointing device 220 is in active use by the user 210. As mentioned earlier, the term “active use” means that the user 210 is identified on the depth map (see operation 620) or, in other words, is located within the viewing area of depth sensing camera 410 when the handheld pointing device 220 is moved.
  • In addition, the method 600 may further include operations (not shown) in which the computing unit 430 generates a vector defining the current orientation of the handheld pointing device 220. The orientation of the handheld pointing device 220 may be represented as the vector 320 (see FIG. 3B). The computing unit 430 generates the vector 320 by processing the orientation data of the handheld pointing device 220 acquired at operation 670, transforming orientation data tied to the axis system of the handheld pointing device 220 into orientation data tied to the axis system of the gesture recognition control system 110. In other words, the vector coordinates are calculated for the axis system associated with the gesture recognition control system 110 based upon the vector coordinates in the axis system of the handheld pointing device 220.
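A possible form of this axis-system transformation is sketched below, assuming the orientation is supplied as pitch and yaw angles and that a 3x3 rotation matrix aligning the device axes with the camera axes is available from elsewhere; both assumptions go beyond what the description above specifies.

```python
import math
from typing import Sequence, Tuple

def orientation_vector(pitch: float, yaw: float,
                       device_to_camera: Sequence[Sequence[float]]
                       ) -> Tuple[float, float, float]:
    """Convert device-reported pitch and yaw (radians) into a unit pointing
    vector, then rotate it into the gesture recognition system's axis system.
    `device_to_camera` is an assumed-known 3x3 rotation matrix; roll is omitted
    because it does not change the direction of the device's forward axis."""
    v = (math.cos(pitch) * math.sin(yaw),   # forward axis after applying yaw, then pitch
         math.sin(pitch),
         math.cos(pitch) * math.cos(yaw))
    # v' = R @ v, written with plain Python sums to avoid extra dependencies
    return tuple(sum(row[i] * v[i] for i in range(3)) for row in device_to_camera)
```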
  • FIG. 7 shows a diagrammatic representation of a computing device for a machine in the example electronic form of a computer system 700, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein can be executed. In example embodiments, the machine operates as a standalone device, or can be connected (e.g., networked) to other machines. In a networked deployment, the machine can operate in the capacity of a server, a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine can be a personal computer (PC), tablet PC, STB, PDA, cellular telephone, portable music player (e.g., a portable hard drive audio device, such as a Moving Picture Experts Group Audio Layer 3 (MP3) player), web appliance, network router, switch, bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that separately or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • The example computer system 700 includes a processor or multiple processors 702 (e.g., a central processing unit (CPU), graphics processing unit (GPU), or both), main memory 704 and static memory 706, which communicate with each other via a bus 708. The computer system 700 can further include a video display unit 710 (e.g., a liquid crystal display (LCD) or cathode ray tube (CRT)). The computer system 700 also includes at least one input device 712, such as an alphanumeric input device (e.g., a keyboard), pointer control device (e.g., a mouse), microphone, digital camera, video camera, and so forth. The computer system 700 also includes a disk drive unit 714, signal generation device 716 (e.g., a speaker), and network interface device 718.
  • The disk drive unit 714 includes a computer-readable medium 720 that stores one or more sets of instructions and data structures (e.g., instructions 722) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 722 can also reside, completely or at least partially, within the main memory 704 and/or within the processors 702 during execution by the computer system 700. The main memory 704 and the processors 702 also constitute machine-readable media.
  • The instructions 722 can further be transmitted or received over the network 724 via the network interface device 718 utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP), CAN, Serial, and Modbus).
  • While the computer-readable medium 720 is shown in an example embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine, and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. The term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media. Such media may also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memory (RAM), read only memory (ROM), and the like.
  • The example embodiments described herein may be implemented in an operating environment comprising computer-executable instructions (e.g., software) installed on a computer, in hardware, or in a combination of software and hardware. The computer-executable instructions may be written in a computer programming language or may be embodied in firmware logic. If written in a programming language conforming to a recognized standard, such instructions may be executed on a variety of hardware platforms and for interfaces associated with a variety of operating systems. Although not limited thereto, computer software programs for implementing the present method may be written in any number of suitable programming languages such as, for example, C, C++, C#, Cobol, Eiffel, Haskell, Visual Basic, Java, JavaScript, or Python, and may rely on suitable compilers, assemblers, interpreters, or other computer languages or platforms.
  • Thus, methods and systems for determining a position of a handheld pointing device have been described. Although embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes can be made to these example embodiments without departing from the broader spirit and scope of the present application. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Claims (20)

1. A computer-implemented method for determining a position of a handheld pointing device, the method comprising:
determining one or more motions of one or more user hands on a depth map;
generating motion data associated with the one or more motions of the one or more user hands;
acquiring motion data of a handheld pointing device;
determining that the motion of the handheld pointing device is associated with the one or more motions of the one or more user hands;
and determining a current position of the handheld pointing device on the depth map.
2. The method of claim 1, further comprising generating the depth map by capturing a series of images.
3. The method of claim 1, further comprising generating a virtual skeleton of the user, the virtual skeleton comprising at least one virtual limb of the user.
4. The method of claim 3, further comprising:
determining coordinates of the one or more user hands, the coordinates to be associated with the virtual skeleton; and
generating motion data of the one or more user hands.
5. The method of claim 4, wherein determining that the motion of the handheld pointing device is associated with the one or more motions of the one or more user hands comprises comparing motion data of the one or more user hands and motion data of the handheld pointing device.
6. The method of claim 5, further comprising determining which hand is holding the pointing device.
7. The method of claim 6, further comprising selectively assigning the coordinates of the hand holding the handheld pointing device to the handheld pointing device.
8. The method of claim 7, further comprising determining an orientation of the handheld pointing device based upon the coordinates of various virtual skeleton joints related to the hand holding the handheld pointing device.
9. The method of claim 8, further comprising generating a vector associated with the orientation of the handheld pointing device.
10. The method of claim 1, further comprising acquiring orientation data of the handheld pointing device, wherein the orientation data comprises one or more of the following: a pitch angle, a roll angle, and a yaw angle.
11. The method of claim 1, wherein the motion data associated with a motion of the handheld pointing device comprises one or more of acceleration data, velocity data, and inertial data.
12. The method of claim 1, further comprising determining that the handheld pointing device is in active use by the user.
13. The method of claim 12, wherein the handheld pointing device is in active use by the user when the handheld pointing device is held and moved by the user and when the user is identified on the depth map.
14. The method of claim 1, further comprising identifying the user on the depth map.
15. The method of claim 1, further comprising tracking motions of the user hand.
16. A system for determining a position of a handheld pointing device, the system comprising:
a depth sensing device configured to generate a depth map;
a communication module configured to acquire motion data of a handheld pointing device; and
a computing unit communicatively coupled to the depth sensing device, the computing unit configured to:
determine one or more motions of one or more user hands on a depth map;
generate motion data associated with the one or more motions of the one or more user hands;
determine that the motion of the handheld pointing device is associated with the one or more motions of the one or more user hands; and
determine a current position of the handheld pointing device on the depth map.
17. The system of claim 16, further comprising a video camera communicatively coupled to the computing unit, the video camera being configured to facilitate generation of the depth map.
18. The system of claim 16, wherein the computing unit is further configured to generate a virtual skeleton of the user, the virtual skeleton comprising at least one virtual hand of the user.
19. The system of claim 16, wherein the computing unit is further configured to:
determine coordinates of the one or more user hands, the coordinates to be associated with the virtual skeleton;
generate motion data of the one or more user hands; and
compare motion data of the one or more user hands and motion data of the handheld pointing device.
20. A processor-readable nontransitory medium having instructions stored thereon, which when executed by one or more processors, cause the one or more processors to:
determine one or more motions of one or more user hands on a depth map;
generate motion data associated with the one or more motions of the one or more user hands;
acquire motion data of a handheld pointing device;
determine that the motion of the handheld pointing device is associated with the one or more motions of the one or more user hands; and
determine a current position of the handheld pointing device on the depth map.
US13/541,684 2011-07-04 2012-07-04 Methods and systems for mapping pointing device on depth map Abandoned US20130010071A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/RU2013/000188 WO2013176574A1 (en) 2012-05-23 2013-03-12 Methods and systems for mapping pointing device on depth map
US13/855,743 US20140009384A1 (en) 2012-07-04 2013-04-03 Methods and systems for determining location of handheld device within 3d environment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
RU2011127116/08A RU2455676C2 (en) 2011-07-04 2011-07-04 Method of controlling device using gestures and 3d sensor for realising said method
RU2011127116 2011-07-04

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/855,743 Continuation-In-Part US20140009384A1 (en) 2012-07-04 2013-04-03 Methods and systems for determining location of handheld device within 3d environment

Publications (1)

Publication Number Publication Date
US20130010071A1 true US20130010071A1 (en) 2013-01-10

Family

ID=44804813

Family Applications (4)

Application Number Title Priority Date Filing Date
US13/478,378 Active 2032-12-29 US8823642B2 (en) 2011-07-04 2012-05-23 Methods and systems for controlling devices using gestures and related 3D sensor
US13/478,457 Abandoned US20130010207A1 (en) 2011-07-04 2012-05-23 Gesture based interactive control of electronic equipment
US13/541,684 Abandoned US20130010071A1 (en) 2011-07-04 2012-07-04 Methods and systems for mapping pointing device on depth map
US13/541,681 Active 2033-03-08 US8896522B2 (en) 2011-07-04 2012-07-04 User-centric three-dimensional interactive control environment

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US13/478,378 Active 2032-12-29 US8823642B2 (en) 2011-07-04 2012-05-23 Methods and systems for controlling devices using gestures and related 3D sensor
US13/478,457 Abandoned US20130010207A1 (en) 2011-07-04 2012-05-23 Gesture based interactive control of electronic equipment

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/541,681 Active 2033-03-08 US8896522B2 (en) 2011-07-04 2012-07-04 User-centric three-dimensional interactive control environment

Country Status (2)

Country Link
US (4) US8823642B2 (en)
RU (1) RU2455676C2 (en)

Families Citing this family (92)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9414051B2 (en) 2010-07-20 2016-08-09 Memory Engine, Incorporated Extensible authoring and playback platform for complex virtual reality interactions and immersive applications
US9477302B2 (en) * 2012-08-10 2016-10-25 Google Inc. System and method for programing devices within world space volumes
US20150153715A1 (en) * 2010-09-29 2015-06-04 Google Inc. Rapidly programmable locations in space
US9030425B2 (en) 2011-04-19 2015-05-12 Sony Computer Entertainment Inc. Detection of interaction with virtual object from finger color change
EP3754997B1 (en) 2011-08-05 2023-08-30 Samsung Electronics Co., Ltd. Method for controlling electronic apparatus based on voice recognition and motion recognition, and electronic apparatus applying the same
WO2013022222A2 (en) * 2011-08-05 2013-02-14 Samsung Electronics Co., Ltd. Method for controlling electronic apparatus based on motion recognition, and electronic apparatus applying the same
US10150028B2 (en) * 2012-06-04 2018-12-11 Sony Interactive Entertainment Inc. Managing controller pairing in a multiplayer game
RU2012145783A (en) * 2012-10-26 2014-05-10 Дисплаир, Инк. METHOD AND DEVICE FOR RIGID CONTROL FOR MULTIMEDIA DISPLAY
CN103019586B (en) * 2012-11-16 2017-03-15 小米科技有限责任公司 User interface management method and device
US9459760B2 (en) 2012-11-16 2016-10-04 Xiaomi Inc. Method and device for managing a user interface
US10423214B2 (en) 2012-11-20 2019-09-24 Samsung Electronics Company, Ltd Delegating processing from wearable electronic device
US11157436B2 (en) 2012-11-20 2021-10-26 Samsung Electronics Company, Ltd. Services associated with wearable electronic device
US11372536B2 (en) 2012-11-20 2022-06-28 Samsung Electronics Company, Ltd. Transition and interaction model for wearable electronic device
US10551928B2 (en) 2012-11-20 2020-02-04 Samsung Electronics Company, Ltd. GUI transitions on wearable electronic device
US10185416B2 (en) 2012-11-20 2019-01-22 Samsung Electronics Co., Ltd. User gesture input to wearable electronic device involving movement of device
US8994827B2 (en) 2012-11-20 2015-03-31 Samsung Electronics Co., Ltd Wearable electronic device
US11237719B2 (en) 2012-11-20 2022-02-01 Samsung Electronics Company, Ltd. Controlling remote electronic device with wearable electronic device
WO2014118768A1 (en) * 2013-01-29 2014-08-07 Opgal Optronic Industries Ltd. Universal serial bus (usb) thermal imaging camera kit
US9083960B2 (en) 2013-01-30 2015-07-14 Qualcomm Incorporated Real-time 3D reconstruction with power efficient depth sensor usage
JP2014153663A (en) * 2013-02-13 2014-08-25 Sony Corp Voice recognition device, voice recognition method and program
KR102091028B1 (en) * 2013-03-14 2020-04-14 삼성전자 주식회사 Method for providing user's interaction using multi hovering gesture
WO2014149700A1 (en) * 2013-03-15 2014-09-25 Intel Corporation System and method for assigning voice and gesture command areas
RU2522848C1 (en) * 2013-05-14 2014-07-20 Федеральное государственное бюджетное учреждение "Национальный исследовательский центр "Курчатовский институт" Method of controlling device using eye gestures in response to stimuli
DE102013012285A1 (en) * 2013-07-24 2015-01-29 Giesecke & Devrient Gmbh Method and device for value document processing
KR102094347B1 (en) * 2013-07-29 2020-03-30 삼성전자주식회사 Auto-cleaning system, cleaning robot and controlling method thereof
US9645651B2 (en) 2013-09-24 2017-05-09 Microsoft Technology Licensing, Llc Presentation of a control interface on a touch-enabled device based on a motion or absence thereof
US10021247B2 (en) 2013-11-14 2018-07-10 Wells Fargo Bank, N.A. Call center interface
US9864972B2 (en) 2013-11-14 2018-01-09 Wells Fargo Bank, N.A. Vehicle interface
US10037542B2 (en) 2013-11-14 2018-07-31 Wells Fargo Bank, N.A. Automated teller machine (ATM) interface
WO2015076695A1 (en) * 2013-11-25 2015-05-28 Yandex Llc System, method and user interface for gesture-based scheduling of computer tasks
JP2017505553A (en) * 2013-11-29 2017-02-16 インテル・コーポレーション Camera control by face detection
KR102188090B1 (en) * 2013-12-11 2020-12-04 엘지전자 주식회사 A smart home appliance, a method for operating the same and a system for voice recognition using the same
CN105993038A (en) * 2014-02-07 2016-10-05 皇家飞利浦有限公司 Method of operating a control system and control system therefore
US9911351B2 (en) * 2014-02-27 2018-03-06 Microsoft Technology Licensing, Llc Tracking objects during processes
US10691332B2 (en) 2014-02-28 2020-06-23 Samsung Electronics Company, Ltd. Text input on an interactive display
US9684827B2 (en) * 2014-03-26 2017-06-20 Microsoft Technology Licensing, Llc Eye gaze tracking based upon adaptive homography mapping
KR20150112337A (en) * 2014-03-27 2015-10-07 삼성전자주식회사 display apparatus and user interaction method thereof
US10481561B2 (en) 2014-04-24 2019-11-19 Vivint, Inc. Managing home automation system based on behavior
US10203665B2 (en) * 2014-04-24 2019-02-12 Vivint, Inc. Managing home automation system based on behavior and user input
CN104020878A (en) * 2014-05-22 2014-09-03 小米科技有限责任公司 Touch input control method and device
WO2016003100A1 (en) * 2014-06-30 2016-01-07 Alticast Corporation Method for displaying information and displaying device thereof
WO2016007192A1 (en) 2014-07-10 2016-01-14 Ge Intelligent Platforms, Inc. Apparatus and method for electronic labeling of electronic equipment
CN105282375B (en) * 2014-07-24 2019-12-31 钰立微电子股份有限公司 Attached stereo scanning module
US9594489B2 (en) 2014-08-12 2017-03-14 Microsoft Technology Licensing, Llc Hover-based interaction with rendered content
US20160085958A1 (en) * 2014-09-22 2016-03-24 Intel Corporation Methods and apparatus for multi-factor user authentication with two dimensional cameras
US20160088804A1 (en) * 2014-09-29 2016-03-31 King Abdullah University Of Science And Technology Laser-based agriculture system
US10268277B2 (en) * 2014-09-30 2019-04-23 Hewlett-Packard Development Company, L.P. Gesture based manipulation of three-dimensional images
KR101556521B1 (en) * 2014-10-06 2015-10-13 현대자동차주식회사 Human Machine Interface apparatus, vehicle having the same and method for controlling the same
US9946339B2 (en) * 2014-10-08 2018-04-17 Microsoft Technology Licensing, Llc Gaze tracking through eyewear
CA2965329C (en) * 2014-10-23 2023-04-04 Vivint, Inc. Managing home automation system based on behavior and user input
US10301801B2 (en) 2014-12-18 2019-05-28 Delta Faucet Company Faucet including capacitive sensors for hands free fluid flow control
US11078652B2 (en) 2014-12-18 2021-08-03 Delta Faucet Company Faucet including capacitive sensors for hands free fluid flow control
US9454235B2 (en) 2014-12-26 2016-09-27 Seungman KIM Electronic apparatus having a sensing unit to input a user command and a method thereof
US10481696B2 (en) * 2015-03-03 2019-11-19 Nvidia Corporation Radar based user interface
US10031722B1 (en) * 2015-03-17 2018-07-24 Amazon Technologies, Inc. Grouping devices for voice control
US9594967B2 (en) 2015-03-31 2017-03-14 Google Inc. Method and apparatus for identifying a person by measuring body part distances of the person
US9888090B2 (en) * 2015-04-27 2018-02-06 Intel Corporation Magic wand methods, apparatuses and systems
CN107787497B (en) * 2015-06-10 2021-06-22 维塔驰有限公司 Method and apparatus for detecting gestures in a user-based spatial coordinate system
KR101697200B1 (en) * 2015-06-12 2017-01-17 성균관대학교산학협력단 Embedded system, fast structured light based 3d camera system and method for obtaining 3d images using the same
US10655951B1 (en) 2015-06-25 2020-05-19 Amazon Technologies, Inc. Determining relative positions of user devices
US10365620B1 (en) 2015-06-30 2019-07-30 Amazon Technologies, Inc. Interoperability of secondary-device hubs
JP6650595B2 (en) * 2015-09-24 2020-02-19 パナソニックIpマネジメント株式会社 Device control device, device control method, device control program, and recording medium
US9692756B2 (en) 2015-09-24 2017-06-27 Intel Corporation Magic wand methods, apparatuses and systems for authenticating a user of a wand
US10328342B2 (en) 2015-09-24 2019-06-25 Intel Corporation Magic wand methods, apparatuses and systems for defining, initiating, and conducting quests
KR20170048972A (en) * 2015-10-27 2017-05-10 삼성전자주식회사 Apparatus and Method for generating image
US9408452B1 (en) 2015-11-19 2016-08-09 Khaled A. M. A. A. Al-Khulaifi Robotic hair dryer holder system with tracking
CN107924239B (en) * 2016-02-23 2022-03-18 索尼公司 Remote control system, remote control method, and recording medium
US20180164895A1 (en) * 2016-02-23 2018-06-14 Sony Corporation Remote control apparatus, remote control method, remote control system, and program
KR20170124104A (en) * 2016-04-29 2017-11-09 주식회사 브이터치 Method and apparatus for optimal control based on motion-voice multi-modal command
US10845987B2 (en) * 2016-05-03 2020-11-24 Intelligent Platforms, Llc System and method of using touch interaction based on location of touch on a touch screen
US11079915B2 (en) 2016-05-03 2021-08-03 Intelligent Platforms, Llc System and method of using multiple touch inputs for controller interaction in industrial control systems
US10076842B2 (en) * 2016-09-28 2018-09-18 Cognex Corporation Simultaneous kinematic and hand-eye calibration
EP4220630A1 (en) 2016-11-03 2023-08-02 Samsung Electronics Co., Ltd. Electronic device and controlling method thereof
DE102016124906A1 (en) * 2016-12-20 2017-11-30 Miele & Cie. Kg Method of controlling a floor care appliance and floor care appliance
US10764281B1 (en) * 2017-01-09 2020-09-01 United Services Automobile Association (Usaa) Systems and methods for authenticating a user using an image capture device
US11321951B1 (en) * 2017-01-19 2022-05-03 State Farm Mutual Automobile Insurance Company Apparatuses, systems and methods for integrating vehicle operator gesture detection within geographic maps
KR20180098079A (en) * 2017-02-24 2018-09-03 삼성전자주식회사 Vision-based object recognition device and method for controlling thereof
CN106919928A (en) * 2017-03-08 2017-07-04 京东方科技集团股份有限公司 gesture recognition system, method and display device
TWI604332B (en) * 2017-03-24 2017-11-01 緯創資通股份有限公司 Method, system, and computer-readable recording medium for long-distance person identification
RU2693197C2 (en) * 2017-05-04 2019-07-01 Федеральное государственное бюджетное образовательное учреждение высшего образования "Сибирский государственный университет телекоммуникаций и информатики" (СибГУТИ) Universal operator intelligent 3-d interface
US11290518B2 (en) * 2017-09-27 2022-03-29 Qualcomm Incorporated Wireless control of remote devices through intention codes over a wireless connection
CN110377145B (en) * 2018-04-13 2021-03-30 北京京东尚科信息技术有限公司 Electronic device determination method, system, computer system and readable storage medium
KR102524586B1 (en) 2018-04-30 2023-04-21 삼성전자주식회사 Image display device and operating method for the same
KR20200013162A (en) 2018-07-19 2020-02-06 삼성전자주식회사 Electronic apparatus and control method thereof
RU2717145C2 (en) * 2018-07-23 2020-03-18 Николай Дмитриевич Куликов Method of inputting coordinates (versions), a capacitive touch screen (versions), a capacitive touch panel (versions) and an electric capacity converter for determining coordinates of a geometric center of a two-dimensional area (versions)
RU2695053C1 (en) * 2018-09-18 2019-07-18 Общество С Ограниченной Ответственностью "Заботливый Город" Method and device for control of three-dimensional objects in virtual space
KR20200066962A (en) * 2018-12-03 2020-06-11 삼성전자주식회사 Electronic device and method for providing content based on the motion of the user
EP3667460A1 (en) * 2018-12-14 2020-06-17 InterDigital CE Patent Holdings Methods and apparatus for user -device interaction
KR102236727B1 (en) * 2019-05-10 2021-04-06 (주)엔플러그 Health care system and method using lighting device based on IoT
US11732994B1 (en) 2020-01-21 2023-08-22 Ibrahim Pasha Laser tag mobile station apparatus system, method and computer program product
RU2737231C1 (en) * 2020-03-27 2020-11-26 Федеральное государственное бюджетное учреждение науки "Санкт-Петербургский Федеральный исследовательский центр Российской академии наук" (СПб ФИЦ РАН) Method of multimodal contactless control of mobile information robot
US20220253153A1 (en) * 2021-02-10 2022-08-11 Universal City Studios Llc Interactive pepper's ghost effect system

Family Cites Families (63)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4836670A (en) * 1987-08-19 1989-06-06 Center For Innovative Technology Eye movement detector
JPH0981309A (en) * 1995-09-13 1997-03-28 Toshiba Corp Input device
US6351273B1 (en) * 1997-04-30 2002-02-26 Jerome H. Lemelson System and methods for controlling automatic scrolling of information on a display or screen
KR100595922B1 (en) * 1998-01-26 2006-07-05 웨인 웨스터만 Method and apparatus for integrating manual input
US7224526B2 (en) * 1999-12-08 2007-05-29 Neurok Llc Three-dimensional free space image projection employing Fresnel lenses
GB0004165D0 (en) 2000-02-22 2000-04-12 Digimask Limited System for virtual three-dimensional object creation and use
EP1311803B8 (en) * 2000-08-24 2008-05-07 VDO Automotive AG Method and navigation device for querying target information and navigating within a map view
US6678413B1 (en) * 2000-11-24 2004-01-13 Yiqing Liang System and method for object identification and behavior characterization using video analysis
US7274800B2 (en) * 2001-07-18 2007-09-25 Intel Corporation Dynamic gesture recognition from stereo sequences
US7340077B2 (en) 2002-02-15 2008-03-04 Canesta, Inc. Gesture recognition system using depth perceptive sensors
US7883415B2 (en) * 2003-09-15 2011-02-08 Sony Computer Entertainment Inc. Method and apparatus for adjusting a view of a scene being displayed according to tracked head motion
US8019121B2 (en) * 2002-07-27 2011-09-13 Sony Computer Entertainment Inc. Method and system for processing intensity from input devices for interfacing with a computer program
US7665041B2 (en) 2003-03-25 2010-02-16 Microsoft Corporation Architecture for controlling a computer using hand gestures
US7372977B2 (en) 2003-05-29 2008-05-13 Honda Motor Co., Ltd. Visual tracking using depth data
US7561143B1 (en) * 2004-03-19 2009-07-14 The University of the Arts Using gaze actions to interact with a display
US7893920B2 (en) 2004-05-06 2011-02-22 Alpine Electronics, Inc. Operation input device and method of operation input
JP5631535B2 (en) 2005-02-08 2014-11-26 オブロング・インダストリーズ・インコーポレーテッド System and method for a gesture-based control system
JP2008536196A (en) 2005-02-14 2008-09-04 ヒルクレスト・ラボラトリーズ・インコーポレイテッド Method and system for enhancing television applications using 3D pointing
US8313379B2 (en) 2005-08-22 2012-11-20 Nintendo Co., Ltd. Video game system with wireless modular handheld controller
US8094928B2 (en) 2005-11-14 2012-01-10 Microsoft Corporation Stereo video for gaming
US8549442B2 (en) * 2005-12-12 2013-10-01 Sony Computer Entertainment Inc. Voice and video control of interactive electronically simulated environment
TWI348639B (en) * 2005-12-16 2011-09-11 Ind Tech Res Inst Motion recognition system and method for controlling electronic device
RU2410259C2 (en) * 2006-03-22 2011-01-27 Фольксваген Аг Interactive control device and method of operating interactive control device
US8601379B2 (en) 2006-05-07 2013-12-03 Sony Computer Entertainment Inc. Methods for interactive communications with real time effects and avatar environment interaction
US8277316B2 (en) 2006-09-14 2012-10-02 Nintendo Co., Ltd. Method and apparatus for using a common pointing input to control 3D viewpoint and object targeting
US7775439B2 (en) 2007-01-04 2010-08-17 Fuji Xerox Co., Ltd. Featured wands for camera calibration and as a gesture based 3D interface device
WO2008087652A2 (en) 2007-01-21 2008-07-24 Prime Sense Ltd. Depth mapping using multi-beam illumination
WO2008120217A2 (en) 2007-04-02 2008-10-09 Prime Sense Ltd. Depth mapping using projected patterns
US8494252B2 (en) 2007-06-19 2013-07-23 Primesense Ltd. Depth mapping using optical elements having non-uniform focal characteristics
RU2382408C2 (en) * 2007-09-13 2010-02-20 Институт прикладной физики РАН Method and system for identifying person from facial image
US20090128555A1 (en) 2007-11-05 2009-05-21 Benman William J System and method for creating and using live three-dimensional avatars and interworld operability
US20090140887A1 (en) * 2007-11-29 2009-06-04 Breed David S Mapping Techniques Using Probe Vehicles
US8542907B2 (en) 2007-12-17 2013-09-24 Sony Computer Entertainment America Llc Dynamic three-dimensional object mapping for user-defined control device
CA2615406A1 (en) 2007-12-19 2009-06-19 Inspeck Inc. System and method for obtaining a live performance in a video game or movie involving massively 3d digitized human face and object
US20090172606A1 (en) * 2007-12-31 2009-07-02 Motorola, Inc. Method and apparatus for two-handed computer user interface with gesture recognition
US8192285B2 (en) 2008-02-11 2012-06-05 Nintendo Co., Ltd Method and apparatus for simulating games involving a ball
US20110102570A1 (en) * 2008-04-14 2011-05-05 Saar Wilf Vision based pointing device emulation
JP2009258884A (en) * 2008-04-15 2009-11-05 Toyota Central R&D Labs Inc User interface
CN101344816B (en) * 2008-08-15 2010-08-11 华南理工大学 Human-machine interaction method and device based on sight tracing and gesture discriminating
US20100079413A1 (en) * 2008-09-29 2010-04-01 Denso Corporation Control device
JP2010086336A (en) * 2008-09-30 2010-04-15 Fujitsu Ltd Image control apparatus, image control program, and image control method
WO2010045406A2 (en) * 2008-10-15 2010-04-22 The Regents Of The University Of California Camera system with autonomous miniature camera and light source assembly and method for image enhancement
US20100195867A1 (en) 2009-01-30 2010-08-05 Microsoft Corporation Visual target tracking using model fitting and exemplar
US20100199228A1 (en) 2009-01-30 2010-08-05 Microsoft Corporation Gesture Keyboarding
US8253746B2 (en) * 2009-05-01 2012-08-28 Microsoft Corporation Determine intended motions
US9176628B2 (en) * 2009-07-23 2015-11-03 Hewlett-Packard Development Company, L.P. Display with an optical sensor
US8502864B1 (en) * 2009-07-28 2013-08-06 Robert Watkins Systems, devices, and/or methods for viewing images
KR101596890B1 (en) 2009-07-29 2016-03-07 삼성전자주식회사 Apparatus and method for navigation digital object using gaze information of user
US8565479B2 (en) 2009-08-13 2013-10-22 Primesense Ltd. Extraction of skeletons from 3D maps
GB2483168B (en) 2009-10-13 2013-06-12 Pointgrab Ltd Computer vision gesture based control of a device
US9244533B2 (en) * 2009-12-17 2016-01-26 Microsoft Technology Licensing, Llc Camera navigation for presentations
KR20110071213A (en) 2009-12-21 2011-06-29 한국전자통신연구원 Apparatus and method for 3d face avatar reconstruction using stereo vision and face detection unit
US20110216059A1 (en) 2010-03-03 2011-09-08 Raytheon Company Systems and methods for generating real-time three-dimensional graphics in an area of interest
US8351651B2 (en) 2010-04-26 2013-01-08 Microsoft Corporation Hand-location post-process refinement in a tracking system
US20110289455A1 (en) 2010-05-18 2011-11-24 Microsoft Corporation Gestures And Gesture Recognition For Manipulating A User-Interface
US20110296333A1 (en) * 2010-05-25 2011-12-01 Bateman Steven S User interaction gestures with virtual keyboard
US20110292036A1 (en) 2010-05-31 2011-12-01 Primesense Ltd. Depth sensor with application interface
US20120200600A1 (en) * 2010-06-23 2012-08-09 Kent Demaine Head and arm detection for virtual immersion systems and methods
US8593375B2 (en) * 2010-07-23 2013-11-26 Gregory A Maltz Eye gaze user interface and method
US20120056982A1 (en) * 2010-09-08 2012-03-08 Microsoft Corporation Depth camera based on structured light and stereo vision
US9349040B2 (en) 2010-11-19 2016-05-24 Microsoft Technology Licensing, Llc Bi-modal depth-image analysis
US9008904B2 (en) * 2010-12-30 2015-04-14 GM Global Technology Operations LLC Graphical vehicle command system for autonomous vehicles on full windshield head-up display
JP6126076B2 (en) * 2011-03-29 2017-05-10 クアルコム,インコーポレイテッド A system for rendering a shared digital interface for each user's perspective

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070060336A1 (en) * 2003-09-15 2007-03-15 Sony Computer Entertainment Inc. Methods and systems for enabling depth and direction detection when interfacing with a computer program
US20080316324A1 (en) * 2007-06-22 2008-12-25 Broadcom Corporation Position detection and/or movement tracking via image capture and processing

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9867549B2 (en) 2006-05-19 2018-01-16 The Queen's Medical Center Motion tracking system for real time adaptive imaging and spectroscopy
US9076212B2 (en) 2006-05-19 2015-07-07 The Queen's Medical Center Motion tracking system for real time adaptive imaging and spectroscopy
US10869611B2 (en) 2006-05-19 2020-12-22 The Queen's Medical Center Motion tracking system for real time adaptive imaging and spectroscopy
US9138175B2 (en) 2006-05-19 2015-09-22 The Queen's Medical Center Motion tracking system for real time adaptive imaging and spectroscopy
US10663553B2 (en) 2011-08-26 2020-05-26 Kineticor, Inc. Methods, systems, and devices for intra-scan motion correction
US9606209B2 (en) 2011-08-26 2017-03-28 Kineticor, Inc. Methods, systems, and devices for intra-scan motion correction
US9628843B2 (en) * 2011-11-21 2017-04-18 Microsoft Technology Licensing, Llc Methods for controlling electronic devices using gestures
US10019074B2 (en) * 2012-10-12 2018-07-10 Microsoft Technology Licensing, Llc Touchless input
US20160202770A1 (en) * 2012-10-12 2016-07-14 Microsoft Technology Licensing, Llc Touchless input
US10327708B2 (en) 2013-01-24 2019-06-25 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US9305365B2 (en) 2013-01-24 2016-04-05 Kineticor, Inc. Systems, devices, and methods for tracking moving targets
US9779502B1 (en) 2013-01-24 2017-10-03 Kineticor, Inc. Systems, devices, and methods for tracking moving targets
US9717461B2 (en) 2013-01-24 2017-08-01 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US9607377B2 (en) 2013-01-24 2017-03-28 Kineticor, Inc. Systems, devices, and methods for tracking moving targets
US10339654B2 (en) 2013-01-24 2019-07-02 Kineticor, Inc. Systems, devices, and methods for tracking moving targets
US9782141B2 (en) 2013-02-01 2017-10-10 Kineticor, Inc. Motion tracking system for real time adaptive motion compensation in biomedical imaging
US10653381B2 (en) 2013-02-01 2020-05-19 Kineticor, Inc. Motion tracking system for real time adaptive motion compensation in biomedical imaging
US20140282278A1 (en) * 2013-03-14 2014-09-18 Glen J. Anderson Depth-based user interface gesture control
US9389779B2 (en) * 2013-03-14 2016-07-12 Intel Corporation Depth-based user interface gesture control
WO2014185808A1 (en) * 2013-05-13 2014-11-20 3Divi Company System and method for controlling multiple electronic devices
US9144744B2 (en) 2013-06-10 2015-09-29 Microsoft Corporation Locating and orienting device in space
US8964128B1 (en) * 2013-08-09 2015-02-24 Beijing Lenovo Software Ltd. Image data processing method and apparatus
US20150042893A1 (en) * 2013-08-09 2015-02-12 Lenovo (Beijing) Co., Ltd. Image data processing method and apparatus
CN104349197A (en) * 2013-08-09 2015-02-11 联想(北京)有限公司 Data processing method and device
US10004462B2 (en) 2014-03-24 2018-06-26 Kineticor, Inc. Systems, methods, and devices for removing prospective motion correction from medical imaging scans
US9734589B2 (en) 2014-07-23 2017-08-15 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US10438349B2 (en) 2014-07-23 2019-10-08 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US11100636B2 (en) 2014-07-23 2021-08-24 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US20160245646A1 (en) * 2015-02-25 2016-08-25 The Boeing Company Three dimensional manufacturing positioning system
US10310080B2 (en) * 2015-02-25 2019-06-04 The Boeing Company Three dimensional manufacturing positioning system
US9696813B2 (en) * 2015-05-27 2017-07-04 Hsien-Hsiang Chiu Gesture interface robot
US20160350589A1 (en) * 2015-05-27 2016-12-01 Hsien-Hsiang Chiu Gesture Interface Robot
US9943247B2 (en) 2015-07-28 2018-04-17 The University Of Hawai'i Systems, devices, and methods for detecting false movements for motion correction during a medical imaging scan
US10660541B2 (en) 2015-07-28 2020-05-26 The University Of Hawai'i Systems, devices, and methods for detecting false movements for motion correction during a medical imaging scan
US10716515B2 (en) 2015-11-23 2020-07-21 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US11331006B2 (en) 2019-03-05 2022-05-17 Physmodo, Inc. System and method for human motion detection and tracking
US11497961B2 (en) 2019-03-05 2022-11-15 Physmodo, Inc. System and method for human motion detection and tracking
US11547324B2 (en) 2019-03-05 2023-01-10 Physmodo, Inc. System and method for human motion detection and tracking
US11771327B2 (en) 2019-03-05 2023-10-03 Physmodo, Inc. System and method for human motion detection and tracking
US11826140B2 (en) 2019-03-05 2023-11-28 Physmodo, Inc. System and method for human motion detection and tracking
CN113269075A (en) * 2021-05-19 2021-08-17 广州繁星互娱信息科技有限公司 Gesture track recognition method and device, storage medium and electronic equipment
US11556183B1 (en) * 2021-09-30 2023-01-17 Microsoft Technology Licensing, Llc Techniques for generating data for an intelligent gesture detector

Also Published As

Publication number Publication date
US20130010207A1 (en) 2013-01-10
US20130009865A1 (en) 2013-01-10
RU2455676C2 (en) 2012-07-10
US8896522B2 (en) 2014-11-25
RU2011127116A (en) 2011-10-10
US8823642B2 (en) 2014-09-02
US20130009861A1 (en) 2013-01-10

Similar Documents

Publication Title
US20130010071A1 (en) Methods and systems for mapping pointing device on depth map
US11392212B2 (en) Systems and methods of creating a realistic displacement of a virtual object in virtual reality/augmented reality environments
US10761612B2 (en) Gesture recognition techniques
US20140009384A1 (en) Methods and systems for determining location of handheld device within 3d environment
US8933931B2 (en) Distributed asynchronous localization and mapping for augmented reality
US20150070274A1 (en) Methods and systems for determining 6dof location and orientation of head-mounted display and associated user movements
US9626801B2 (en) Visualization of physical characteristics in augmented reality
CN110457414A (en) Offline map processing, virtual objects display methods, device, medium and equipment
JP5807686B2 (en) Image processing apparatus, image processing method, and program
WO2014185808A1 (en) System and method for controlling multiple electronic devices
CN110473293A (en) Virtual objects processing method and processing device, storage medium and electronic equipment
US11615506B2 (en) Dynamic over-rendering in late-warping
Vokorokos et al. Motion sensors: Gesticulation efficiency across multiple platforms
WO2013176574A1 (en) Methods and systems for mapping pointing device on depth map
WO2022246389A1 (en) Dynamic over-rendering in late-warping
WO2015030623A1 (en) Methods and systems for locating substantially planar surfaces of 3d scene
KR101558094B1 (en) Multi-modal system using for intuitive hand motion and control method thereof
US20220375026A1 (en) Late warping to minimize latency of moving objects
WO2023124113A1 (en) Interaction method and apparatus in three-dimensional space, storage medium, and electronic apparatus
US20240127006A1 (en) Sign language interpretation with collaborative agents
KR20240008370A (en) Late warping to minimize latency for moving objects
CN115317907A (en) Multi-user virtual interaction method and device in AR application and AR equipment
Babaei et al. The optimization of interface interactivity using gesture prediction engine

Legal Events

Date Code Title Description
AS Assignment
Owner name: 3DIVI, RUSSIAN FEDERATION
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:VALIK, ANDREY;ZAITSEV, PAVEL;MOROZOV, DMITRY;AND OTHERS;REEL/FRAME:028500/0443
Effective date: 20120629

AS Assignment
Owner name: 3DIVI COMPANY, RUSSIAN FEDERATION
Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE FROM 3DIVI TO 3DIVI COMPANY PREVIOUSLY RECORDED ON REEL 028500 FRAME 0443. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT OF PATENT APPLICATION NO. 13541684;ASSIGNORS:VALIK, ANDREY;ZAITSEV, PAVEL;MOROZOV, DMITRY;AND OTHERS;REEL/FRAME:030519/0577
Effective date: 20130514

STCB Information on status: application discontinuation
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION