US20130010207A1 - Gesture based interactive control of electronic equipment
- Publication number: US20130010207A1
- Application number: US 13/478,457
- Authority: US (United States)
- Prior art keywords: user, predetermined, electronic devices, recognition, qualifying action
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
Description
- This application is a Continuation-in-Part of Russian Patent Application Serial No. 2011127116, filed Jul. 4, 2011, which is incorporated herein by reference in its entirety for all purposes.
1. Technical Field
- This disclosure relates generally to computer interfaces and, more particularly, to methods for controlling electronic equipment by recognition of gestures made by an object.
2. Description of Related Art
- The approaches described in this section could be pursued but are not necessarily approaches that have previously been conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.
- Interactive gesture interface systems are commonly used to interact with various electronic devices including gaming consoles, Television (TV) sets, computers, and so forth. The general principle of such systems is to detect human gestures or motions made by users and to generate commands based thereupon that cause electronic devices to perform certain actions. Gestures can originate from any bodily motion or state, but commonly originate from the face or hand; some systems also include emotion recognition features.
- The gesture interface systems may be based on various gesture recognition approaches that involve the utilization of cameras, motion sensors, acceleration sensors, position sensors, electronic handheld controllers, and so forth. Whichever approach is used, human gestures can be captured and recognized, and a particular action can be triggered by an electronic device. Particular examples include wireless electronic handheld controllers, which enable users to control gaming consoles by detecting motions or gestures made with such controllers. While such systems have become very popular, they are still quite complex and require various handheld controllers that typically differ from application to application.
- Another approach involves the utilization of 3D-sensor devices capable of recognizing users' gestures or motions without dedicated handheld controllers or the like. Gestures are identified by processing user images obtained by such 3D-sensors and are then interpreted to generate control commands. Control commands can be used to trigger particular actions performed by electronic equipment coupled to the 3D-sensor. Such systems are now widely deployed and generally used for gaming consoles.
- One of the major drawbacks of such systems is that they are not flexible and cannot generate control commands for multiple electronic devices concurrently connected to a single 3D-sensor or any other device for capturing human motions or gestures. Thus, the conventional technology fails to provide a technique for improved detection and interpretation of human gestures associated with a particular electronic device among a plurality of devices connected to a common 3D-sensor.
Summary
- This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
- In accordance with various embodiments and the corresponding disclosure thereof, methods and systems are provided for controlling one or more electronic devices by recognition of gestures made by an object. The described methodologies enable users to interact with one or a plurality of electronic devices such as gaming consoles, computers, audio systems, video systems, and so forth. The interaction with the various electronic devices can be performed with the help of at least one 3D-sensor configured to recognize not only gestures, but also the particular electronic device, among the plurality of electronic devices, to which the gestures are directed.
- In accordance with one aspect, there is provided a computer-implemented method for controlling one or more electronic devices by recognition of gestures made by an object. The method may comprise capturing a series of successive 3D images in real time and identifying the object, which may have a predetermined elongated shape. The method may also comprise identifying that the object is oriented substantially towards a predetermined direction, determining at least one qualifying action performed by a user and/or the object, comparing the at least one qualifying action to one or more predetermined actions associated with the direction towards which the object is oriented, and, based on the comparison, selectively providing to the one or more electronic devices a command associated with the at least one qualifying action.
- In some embodiments, the predetermined direction can be associated with the one or more electronic devices. The object can be selected from a group comprising a wand, an elongated pointing device, an arm, a hand, and one or more fingers of the user. The series of successive 3D images can be captured using at least one video camera or a 3D image sensor.
- In some examples, the object can be identified by performing one or more of: processing the captured series of successive 3D images to generate a depth map, determining geometrical parameters of the object, and identifying the object by matching the geometrical parameters to a predetermined object database.
- The determination of the at least one qualifying action may comprise determining and acknowledging one or more of: a predetermined motion of the object, a predetermined gesture of the object, a gaze of the user towards the predetermined direction associated with the one or more electronic devices, a predetermined motion of the user, a predetermined gesture of the user, biometric data of the user, and a voice command provided by the user.
- Biometric data of the user can be determined based on one or more of the following: face recognition, voice recognition, user body recognition, and recognition of a user motion dynamics pattern.
- The gaze of the user can be determined based on one or more of the following: a position of the eyes of the user, a position of the pupils or a contour of the irises of the eyes of the user, a position of the head of the user, an angle of inclination of the head of the user, and a rotation of the head of the user.
- The mentioned one or more electronic devices may comprise a computer, a game console, a TV set, a TV adapter, a communication device, a Personal Digital Assistant (PDA), a lighting device, an audio system, and a video system.
- According to another aspect, there is provided a system for controlling one or more electronic devices by recognition of gestures made by an object. The system may comprise at least one 3D image sensor configured to capture a series of successive 3D images in real time and a computing unit communicatively coupled to the at least one 3D image sensor. The computing unit can be configured to: identify the object; identify that the object is oriented substantially towards a predetermined direction; determine at least one qualifying action performed by a user and/or the object; compare the at least one qualifying action to one or more predetermined actions associated with the direction towards which the object is oriented; and, based on the comparison, selectively provide to the one or more electronic devices a command associated with the at least one qualifying action.
- In some example embodiments, the at least one 3D image sensor may comprise one or more of an infrared (IR) projector to generate modulated light, an IR camera to capture 3D images associated with the object or the user, and a color video camera.
- The IR projector, color video camera, and IR camera can be installed in a common housing. The color video camera and/or IR camera can be equipped with liquid lenses.
- The mentioned predetermined direction can be associated with the one or more electronic devices. The object can be selected from a group comprising a wand, an elongated pointing device, an arm, a hand, and one or more fingers of the user.
- The computing unit can be configured to identify the object by performing the acts of: processing the captured series of successive 3D images to generate a depth map, determining geometrical parameters of the object, and identifying the object by matching the geometrical parameters to a predetermined object database. Furthermore, the computing unit can be configured to determine the at least one qualifying action by determining and acknowledging one or more of: a predetermined motion of the object, a predetermined gesture of the object, a gaze of the user towards the predetermined direction associated with the one or more electronic devices, a predetermined motion of the user, a predetermined gesture of the user, biometric data of the user, and a voice command provided by the user.
- Biometric data of the user can be determined based on one or more of the following: face recognition, voice recognition, user body recognition, and recognition of a user motion dynamics pattern.
- The gaze of the user can be determined based on one or more of the following: a position of the eyes of the user, a position of the pupils or a contour of the irises of the eyes of the user, a position of the head of the user, an angle of inclination of the head of the user, and a rotation of the head of the user.
- According to yet another aspect, there is provided a processor-readable medium storing instructions which, when executed by one or more processors, cause the one or more processors to: capture a series of successive 3D images in real time; identify the object; identify that the object is oriented substantially towards a predetermined direction; determine at least one qualifying action performed by a user and/or the object; compare the at least one qualifying action to one or more predetermined actions associated with the direction towards which the object is oriented; and, based on the comparison, selectively provide to the one or more electronic devices a command associated with the at least one qualifying action.
- To the accomplishment of the foregoing and related ends, the one or more aspects comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the drawings set forth in detail certain illustrative features of the one or more aspects. These features are indicative, however, of but a few of the various ways in which the principles of various aspects may be employed, and this description is intended to include all such aspects and their equivalents.
- Embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
- FIG. 1 is a general illustration of a scene suitable for implementing methods for controlling one or more electronic devices by recognition of gestures made by an object.
- FIG. 2 shows an example system environment suitable for implementing methods for controlling one or more electronic devices by recognition of gestures made by an object.
- FIG. 3 shows an example embodiment of the 3D-sensor.
- FIG. 4 is a diagram of the computing unit, according to an example embodiment.
- FIG. 5 is a process flow diagram showing a method for controlling one or more electronic devices by recognition of gestures made by the object, according to an example embodiment.
- FIG. 6 is a diagrammatic representation of an example machine in the form of a computer system within which a set of instructions for the machine to perform any one or more of the methodologies discussed herein is executed.
Detailed Description
- The following detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show illustrations in accordance with example embodiments. These example embodiments, which are also referred to herein as “examples,” are described in enough detail to enable those skilled in the art to practice the present subject matter. The embodiments can be combined, other embodiments can be utilized, or structural, logical, and electrical changes can be made without departing from the scope of what is claimed. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope is defined by the appended claims and their equivalents. In this document, the terms “a” and “an” are used, as is common in patent documents, to include one or more than one. In this document, the term “or” is used to refer to a nonexclusive “or,” such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated.
- The techniques of the embodiments disclosed herein may be implemented using a variety of technologies. For example, the methods described herein may be implemented in software executing on a computer system or in hardware utilizing either a combination of microprocessors or other specially designed application-specific integrated circuits (ASICs), programmable logic devices, or various combinations thereof. In particular, the methods described herein may be implemented by a series of computer-executable instructions residing on a storage medium, such as a disk drive or another computer-readable medium, or conveyed as a carrier wave. Exemplary forms of carrier waves may take the form of electrical, electromagnetic, or optical signals conveying digital data streams along a local network or a publicly accessible network such as the Internet.
- The embodiments described herein relate to computer-implemented methods and systems for controlling one or more electronic devices by recognition of gestures made by an object. The object, as used herein, may refer to any elongated object having a prolonged shape and may include, for example, a wand, an elongated handheld pointing device, an arm, a hand, or one or more fingers of the user. Thus, gestures can be made by users with either their hands (arms or fingers) or handheld, elongated objects. In some embodiments, gestures can be made by handheld objects in combination with motions of arms, fingers, or other body parts of the user.
- In general, one or more 3D-sensors or video cameras can be used to recognize gestures. Various techniques for gesture identification and recognition can be used, and accordingly, various devices can be utilized. In one example embodiment, a single 3D-sensor can be used, which may include an IR projector, an IR camera, and an optional color video camera, all embedded within a single housing.
- Image processing and interpretation can be performed by any computing device coupled to or embedding the 3D-sensor. Some examples may include a tabletop computer, laptop, tablet computer, gaming console, audio system, video system, phone, smart phone, PDA, or any other wired or wireless electronic device. Based on the image processing and interpretation, a particular control command can be generated and outputted by the computing device.
- For example, the computing device may recognize a particular gesture associated with a predetermined command and generate such a command for further input into a particular electronic device selected from a plurality of electronic devices. For instance, one command generated by the computing device and associated with a first gesture can be inputted to a gaming console, while another command, associated with a second gesture, can be inputted to an audio system. In other words, the computing device can be coupled to multiple electronic devices of the same or various types, and such electronic devices can be selectively controlled by the user.
- In some example embodiments, the computing device may be integrated with one or more controlled electronic device(s). For instance, the computing device and optional 3D-sensor can be integrated with a gaming console. This gaming console can be configured to be coupled to other electronic devices such as a lighting device, audio system, video system, TV set, and so forth.
- Those skilled in the art would appreciate that the 3D-sensor, the computing device, and various controlled electronic devices can be integrated with each other or interconnected in numerous different ways. It should also be understood that such systems may constitute at least some parts of an “intelligent house” and may be used as part of home automation systems.
- To select a particular electronic device and generate a control command for it, the user should perform two actions, either concurrently or in series. The first action includes pointing the object towards the particular device. This may include posing the elongated object such that it is substantially oriented towards the particular device to be controlled. For example, the user may indicate the device with a pointer finger. Alternatively, the user may orient an arm or hand towards the device. In some other examples, the user may orient a handheld object (e.g., a wand) towards the electronic device. In general, any elongated object can be used to designate a particular electronic device for further action, and such an elongated object may or may not include electronic components.
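- For illustration only, the “substantially oriented towards” test can be reduced to simple vector geometry. The following Python sketch (all names and tolerances are hypothetical, not taken from the patent) compares the object's axis, estimated from its two 3D endpoints, with the direction from the object to each registered device location:

```python
import math
from typing import Dict, Optional, Tuple

Vec3 = Tuple[float, float, float]

def _normalize(v: Vec3) -> Vec3:
    n = math.sqrt(sum(c * c for c in v)) or 1.0
    return (v[0] / n, v[1] / n, v[2] / n)

def pointed_device(tip: Vec3, tail: Vec3,
                   devices: Dict[str, Vec3],
                   max_angle_deg: float = 10.0) -> Optional[str]:
    """Return the id of the device the elongated object points at, if any.

    tip/tail: 3D endpoints of the object, e.g. taken from the depth map.
    devices:  device id -> known 3D location (pre-programmed or tagged).
    """
    axis = _normalize((tip[0] - tail[0], tip[1] - tail[1], tip[2] - tail[2]))
    best_id, best_angle = None, max_angle_deg
    for dev_id, pos in devices.items():
        to_dev = _normalize((pos[0] - tip[0], pos[1] - tip[1], pos[2] - tip[2]))
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(axis, to_dev))))
        angle = math.degrees(math.acos(dot))
        if angle <= best_angle:          # "substantially towards" tolerance
            best_id, best_angle = dev_id, angle
    return best_id

# Example: a wand at the origin pointing along +x; a TV two meters away on +x.
print(pointed_device((0.3, 0, 0), (0, 0, 0), {"tv": (2.0, 0.1, 0.0)}))  # -> tv
```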
- To generate a particular control command for the selected electronic device, the user should perform the second action, which is referred to herein as a “qualifying action.” Once the interface system identifies that the user has performed both the first and the second action, a predetermined control command is generated for the desired electronic device.
- The qualifying action may include one or more different actions. In some embodiments, the qualifying action may refer to a predetermined motion or gesture made by the object. For example, the user may first point to an electronic device to “select” it, and then make a certain gesture (e.g., a circle motion, a nodding motion, or any other predetermined motion or gesture) to trigger the generation and output of a control command associated with the recognized gesture and the “selected” electronic device.
- In some other embodiments, the qualifying action may include a predetermined motion or gesture of the user. For example, the user may first point to an electronic device and then make a certain gesture with a hand or the head (e.g., the user may tap a pointer finger on a handheld wand).
- In other embodiments, the qualifying action may include a gaze of the user towards the predetermined direction associated with one or more electronic devices. In this case, the user may point to an electronic device while also looking at it.
- In yet other embodiments, the qualifying action may refer to a voice command generated by the user. For example, the user may point to a TV set and say, “turn on,” to generate a turn-on command.
- The qualifying action may also include the receipt and identification of biometric data associated with the user. The biometric data may include a face, a voice, a motion dynamics pattern, and so forth. For example, face recognition or voice recognition can be used to authorize the user to control certain electronic devices.
- In some embodiments, the interface system may require the user to perform two or more qualifying actions. For example, to generate a particular control command for an electronic device, the user may first use the object to point out the electronic device, then make a predetermined gesture using the object, and then provide a voice command. In another example, the user may point towards the electronic device, make a gesture, and turn the face towards the 3D-sensor for further face recognition and authentication. It should be understood that various combinations of qualifying actions can be performed and predetermined for the generation of a particular command.
- The interface system may include a database of predetermined gestures, objects, and related information. Once a gesture is captured by the 3D-sensor, the computing device may compare the captured gesture with the list of predetermined gestures to find a match. Based on such a comparison, a predetermined command can be generated. Accordingly, the database may store and populate a list of predetermined commands, each of which is associated with a particular device and a particular qualifying action (or combination of qualifying actions). It should also be understood that the locations of various electronic devices can be pre-programmed in the system or, alternatively, identified by the 3D-sensor in real time. For this purpose, the electronic devices can be provided with tags attached to their surfaces. Those skilled in the art would appreciate that various techniques can be used to identify electronic devices for the interface system.
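- A minimal sketch of such a command database, assuming a simple in-memory mapping keyed by device id and qualifying-action combination (the schema, device ids, and command names are all invented for illustration):

```python
from typing import Dict, Optional, Tuple

# (device id, ordered tuple of qualifying actions) -> predetermined command.
COMMANDS: Dict[Tuple[str, Tuple[str, ...]], str] = {
    ("tv", ("circle_gesture",)): "power_toggle",
    ("tv", ("point", "voice:turn on")): "power_on",
    ("audio", ("nod_gesture",)): "volume_up",
    ("lamp", ("point", "gaze")): "toggle",
}

def lookup_command(device_id: str,
                   qualifying_actions: Tuple[str, ...]) -> Optional[str]:
    """Match the observed qualifying action(s) for the pointed-at device
    against the list of predetermined commands; None means no match."""
    return COMMANDS.get((device_id, qualifying_actions))

print(lookup_command("tv", ("point", "voice:turn on")))  # -> "power_on"
print(lookup_command("lamp", ("circle_gesture",)))       # -> None
```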
- FIG. 1 is a general illustration of scene 100 suitable for implementing methods for controlling one or more electronic device(s) by recognition of gestures made by an object.
- FIG. 1 shows a user 102 holding a handheld elongated object 104, which can be used for interaction with an interface system 106.
- The interface system 106 may include both a 3D-sensor 108 and a computing unit 110, which can be stand-alone devices or can be embedded within a single housing.
- The 3D-sensor 108 can be configured to capture a series of 3D images, which can be further transmitted to and processed by the computing unit 110.
- The computing unit 110 may first identify the object 104 and its relative orientation in a certain direction and, second, identify one or more “qualifying actions” as discussed above (e.g., identify a gesture made by the user 102 or the object 104).
- The interface system 106 may be operatively connected with various electronic devices 112-118.
- The electronic devices 112-118 may include any device capable of receiving electronic control commands and performing one or more certain actions upon receipt of such commands.
- The electronic devices 112-118 may include desktop computers, laptops, tabletop computers, tablet computers, cellular phones, smart phones, PDAs, gaming consoles, TV sets, TV adapters, displays, audio systems, video systems, lighting devices, home appliances, or any combination or part thereof.
- In FIG. 1, there is a TV set 112, an audio system 114, a gaming console 116, and a lighting device 118.
- The electronic devices 112-118 are all operatively coupled to the interface system 106, as further depicted in FIG. 2.
- The interface system 106 may integrate one or more electronic devices (not shown). For example, the interface system 106 may be embedded in the gaming console 116 or a desktop computer.
- Those skilled in the art should understand that various interconnections may be deployed for the devices 112-118.
- The user 102 may interact with the interface system 106 by making gestures or various motions with his or her hands, arms, fingers, legs, head, or other body parts; by making gestures or motions using the object 104; by giving voice commands; by looking in a certain direction; or by any combination thereof. All of these motions, gestures, and voice commands can be predetermined so that the interface system 106 is able to identify them, match them to the list of pre-stored user commands, and generate a particular command for the electronic devices 112-118. In other words, the interface system 106 may be “taught” to identify and differentiate one or more motions or gestures.
- The object 104 may be any device of elongated shape and design. One example of the object 104 is a wand or elongated pointing device. It is important to note that the object 104 may be free of any electronics; it can be any article of prolonged shape. In this case, the interface system 106 may be trained to identify and differentiate the object 104 as used by the user 102.
- The electronics-free object 104 may have a different design and may imitate various sporting equipment (e.g., a baseball bat, racket, machete, sword, steering wheel, and so forth). In certain embodiments, the object 104 may have a specific color design or color tags. Such color tags or colored areas may have various designs and shapes and, in general, may help facilitate better identification of the object 104 by the interface system 106.
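- As an illustration of how color tags might aid identification, the following sketch uses OpenCV (assumed to be available; the tag color range and the aspect-ratio threshold are arbitrary choices, not values from the patent) to segment a tagged region and test whether it is elongated:

```python
import cv2          # assumed dependency; any comparable vision library would do
import numpy as np

def find_color_tag(bgr_frame: np.ndarray,
                   hsv_low=(35, 80, 80), hsv_high=(85, 255, 255)):
    """Locate a colored tag (here: a green range, chosen arbitrarily) and
    report its oriented bounding box, so an elongated tagged object can be
    told apart from the background."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    # OpenCV 4.x return signature: (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(largest)
    elongated = max(w, h) > 3.0 * max(1.0, min(w, h))  # crude aspect test
    return {"center": (cx, cy), "size": (w, h), "angle": angle,
            "elongated": elongated}
```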
- FIG. 2 shows an example system environment 200 suitable for implementing methods for controlling one or more electronic device(s) by recognition of gestures made by an object.
- The system environment 200 comprises the interface system 106, one or more electronic devices 210, and a network 220.
- The interface system 106 may include at least one 3D-sensor 108, the computing unit 110, a communication unit 230, and an optional input unit 240. All of these units 108, 110, 230, and 240 can be operatively interconnected.
- The 3D-sensor 108 may be implemented in different ways and may include an image capture device. Further details of the 3D-sensor 108 are described below with reference to FIG. 3. It should also be appreciated that the interface system 106 may include two or more 3D-sensors 108 spaced apart from each other.
- The aforementioned one or more electronic device(s) 210 are, in general, any devices configured to trigger one or more predefined action(s) upon receipt of a certain control command. Some examples of electronic devices 210 include, but are not limited to, computers, displays, audio systems, video systems, gaming consoles, and lighting devices.
- In one embodiment, the system environment 200 may comprise multiple electronic devices 210 of different types, while in another embodiment, the multiple electronic devices 210 may be of the same type (e.g., two or more interconnected gaming consoles).
- The communication unit 230 may be configured to transfer data between the interface system 106 and the one or more electronic device(s) 210. The communication unit 230 may include any wireless or wired network interface controller, including, for example, a Local Area Network (LAN) adapter, a Wide Area Network (WAN) adapter, a Wireless Transmit Receiving Unit (WTRU), a WiFi adapter, a Bluetooth adapter, a GSM/CDMA adapter, and so forth.
- The input unit 240 may be configured to enable users to input data of any nature. In certain embodiments, the input unit 240 may include a keyboard or ad hoc buttons allowing the users to input commands, program an interface, customize settings, and so forth. In one embodiment, the input unit 240 includes a microphone to capture user voice commands, which can then be processed by the computing unit 110. Various other input technologies can be used in the input unit 240, including touch screen technologies, pointing devices, and so forth.
- The network 220 may couple the interface system 106 and the one or more electronic device(s) 210. The network 220 is a network of data processing nodes interconnected for the purpose of data communication and may be utilized to communicatively couple various components of the environment 200. The network 220 may include the Internet or any other network capable of communicating data between devices.
- Suitable networks may include or interface with any one or more of the following: a local intranet, PAN (Personal Area Network), LAN, WAN, MAN (Metropolitan Area Network), virtual private network (VPN), storage area network (SAN), frame relay connection, Advanced Intelligent Network (AIN) connection, synchronous optical network (SONET) connection, digital T1, T3, E1 or E3 line, Digital Data Service (DDS) connection, DSL (Digital Subscriber Line) connection, Ethernet connection, ISDN (Integrated Services Digital Network) line, dial-up port such as a V.90, V.34 or V.34bis analog modem connection, cable modem, ATM (Asynchronous Transfer Mode) connection, FDDI (Fiber Distributed Data Interface) connection, or CDDI (Copper Distributed Data Interface) connection.
- Communications may also include links to any of a variety of wireless networks, including WAP (Wireless Application Protocol), GPRS (General Packet Radio Service), GSM, CDMA or TDMA (Time Division Multiple Access), cellular phone networks, GPS, CDPD (cellular digital packet data), RIM (Research in Motion, Limited) duplex paging network, Bluetooth radio, or an IEEE 802.11-based radio frequency network.
- The network 220 can further include or interface with any one or more of the following: an RS-232 serial connection, IEEE-1394 (FireWire) connection, Fibre Channel connection, IrDA (infrared) port, SCSI (Small Computer Systems Interface) connection, USB (Universal Serial Bus) connection, or other wired or wireless, digital or analog interface or connection, mesh or Digi® networking.
- FIG. 3 shows an example embodiment of the 3D-sensor 108 .
- In one embodiment, the 3D-sensor 108 may comprise at least a color video camera 310 configured to capture images. In various embodiments, the 3D-sensor 108 may include an IR projector 320 to generate modulated light and an IR camera 330 to capture 3D images associated with the object 104 or the user 102.
- In the shown embodiment, the 3D-sensor 108 comprises the color video camera 310, the IR projector 320, and the IR camera 330, all encased within a single housing.
- The 3D-sensor 108 may also comprise a computing module 340 for image analysis, pre-processing, processing, or generation of commands for the color video camera 310, IR projector 320, or IR camera 330. In some other examples, such operations can be performed by the computing unit 110.
- The 3D-sensor 108 may also include a bus 350 interconnecting the color video camera 310, IR projector 320, and/or IR camera 330, depending on which devices are used.
- The 3D-sensor 108 may also include one or more liquid lenses 360, which can be used for the color video camera 310, the IR camera 330, or both. In general, the liquid lenses 360 can be used to adaptively focus the cameras onto a certain object or objects. The liquid lens 360 may use one or more fluids to create an infinitely variable lens without any moving parts by controlling the meniscus (the surface of the liquid). Control of the liquid lens 360 may be performed by the computing module 340 or the computing unit 110.
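- As a rough illustration of such focus control, the thin-lens equation gives the optical power needed to focus at a depth reported by the 3D-sensor; the sensor distance and the achievable power range below are placeholders, not data from any real liquid lens:

```python
def required_lens_power(object_dist_m: float,
                        image_dist_m: float = 0.005) -> float:
    """Thin-lens equation: 1/f = 1/do + 1/di, with power in diopters (1/f).
    image_dist_m is the fixed lens-to-sensor distance (value invented)."""
    return 1.0 / object_dist_m + 1.0 / image_dist_m

def power_to_control(power_d: float, p_min=150.0, p_max=250.0) -> float:
    """Map optical power to a normalized 0..1 drive signal for the meniscus;
    the achievable power range is a stand-in for a real lens datasheet."""
    clamped = min(max(power_d, p_min), p_max)
    return (clamped - p_min) / (p_max - p_min)

# Refocus on a user detected 2.0 m away by the depth sensor.
p = required_lens_power(2.0)            # 0.5 + 200 = 200.5 diopters
print(round(p, 1), round(power_to_control(p), 3))
```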
- FIG. 4 is a diagram of the computing unit 110, according to an example embodiment.
- The computing unit 110 may comprise an identification module 410, an orientation module 420, a qualifying action module 430, a comparing module 440, a command generator 450, and a database 460. In other embodiments, the computing unit 110 may include additional, fewer, or different modules for various applications. All modules can be integrated within a single system or, alternatively, can be remotely located and optionally accessed via a third party.
- The identification module 410 can be configured to identify the object 104 and/or the user 102. The identification process may include processing the series of successive 3D images captured by the 3D-sensor 108. A depth map is generated as the result of such processing, and further processing of the depth map enables the determination of geometrical parameters of the object 104 or the user 102. For example, a virtual hull or skeleton can be created. The object 104 can then be identified by matching these geometrical parameters to predetermined objects stored in the database 460.
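- A minimal sketch of this identification step, assuming the object's 3D points have already been segmented out of the depth map: the principal axis is found with an SVD, and the resulting length and elongation are matched against illustrative database entries (all names and thresholds are invented):

```python
from typing import Optional

import numpy as np

# Illustrative object database: name -> admissible length range (meters) and
# maximum elongation ratio (second principal extent / first). Real entries
# would come from training the interface system.
OBJECT_DB = {
    "wand": {"length": (0.25, 0.60), "thickness_ratio": 0.15},
    "hand": {"length": (0.15, 0.25), "thickness_ratio": 0.60},
}

def identify_object(points: np.ndarray) -> Optional[str]:
    """points: (N, 3) array of 3D samples of one segmented object,
    e.g. back-projected from the depth map. Returns a DB name or None."""
    centered = points - points.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)  # principal axes
    proj = centered @ vt[0]            # coordinates along the main axis
    length = proj.max() - proj.min()   # extent along the main axis (m)
    ratio = s[1] / s[0]                # ~0 for a perfectly elongated object
    for name, spec in OBJECT_DB.items():
        lo, hi = spec["length"]
        if lo <= length <= hi and ratio <= spec["thickness_ratio"]:
            return name
    return None

# Synthetic test: 200 points along a 0.4 m segment with 5 mm jitter.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.4, 200)
pts = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
pts += rng.normal(scale=0.005, size=pts.shape)
print(identify_object(pts))   # -> "wand"
```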
- The orientation module 420 can be configured to identify that the object 104 is oriented substantially towards a predetermined direction. More specifically, the orientation module 420 can track movements of the object 104 so as to identify that the object 104 is oriented towards a certain direction for a predetermined period of time. Such directions can be associated with the electronic devices 210 or the interface system 106. It should be understood that the positions of the electronic devices 210 can be preliminarily stored in the database 460, or the interface system 106 can be trained to identify and store locations associated with the electronic devices 210. In some embodiments, the interface system 106 can obtain and store images of various electronic devices 210 such that, in the future, they can be easily identified.
- Alternatively, the electronic devices 210 can be provided with tags (e.g., color tags, RFID tags, bar code tags, and so forth). Once they are identified, the interface system 106 can associate the tags with certain locations in 3D space. Those skilled in the art would appreciate that various approaches can be used to identify the electronic devices 210 and their associated locations, so that the orientation of the object 104 towards such locations can be easily identified by the interface system 106.
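- The dwell-based orientation check described above might look as follows; the sketch assumes a per-frame pointing result (such as the pointed_device() helper sketched earlier), and the 0.5-second dwell requirement is an invented value:

```python
import time
from typing import Dict, Optional, Tuple

Vec3 = Tuple[float, float, float]

class OrientationTracker:
    """Confirms that the object keeps pointing at one registered device for
    dwell_s seconds before the device counts as selected."""

    def __init__(self, devices: Dict[str, Vec3], dwell_s: float = 0.5):
        self.devices = devices   # id -> location learned from tags/training
        self.dwell_s = dwell_s
        self._candidate: Optional[str] = None
        self._since: float = 0.0

    def update(self, pointed: Optional[str],
               now: Optional[float] = None) -> Optional[str]:
        """Feed the per-frame pointing result; returns a device id once the
        dwell requirement is met, otherwise None."""
        now = time.monotonic() if now is None else now
        if pointed != self._candidate:
            self._candidate, self._since = pointed, now
            return None
        if pointed is not None and now - self._since >= self.dwell_s:
            return pointed
        return None

tracker = OrientationTracker({"tv": (2.0, 0.1, 0.0)})
print(tracker.update("tv", now=0.0))   # None (pointing just started)
print(tracker.update("tv", now=0.6))   # "tv" (held long enough)
```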
- The qualifying action module 430 can be configured to track motions of the user 102 or the object 104 and determine at least one qualifying action being performed by the user 102 or the object 104. The qualifying action may include one or more different actions.
- In some embodiments, the qualifying action may refer to a predetermined motion or gesture made by the object 104. For example, a nodding motion or a circle motion can be considered a qualifying action.
- In other embodiments, the qualifying action may include a predetermined motion or gesture of the user 102. For example, the user 102 may perform a gesture with the hand or head. There are no restrictions on such gestures; the only requirement is that the interface system 106 be able to differentiate and identify them.
- For this purpose, the interface system 106 may keep reference motions or reference gestures in the database 460. In other words, the interface system 106 can be trained by performing various motions and gestures, such that the sample motions and gestures are stored in the database 460 for later comparison with gestures captured in real time.
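- One simple way such training data could be used is template matching on resampled trajectories. The sketch below is a deliberately crude stand-in for real gesture recognition (no scale or rotation normalization; the point count and distance threshold are invented):

```python
from typing import Dict, List, Optional, Tuple

Point = Tuple[float, float]

def _resample(path: List[Point], n: int = 16) -> List[Point]:
    """Resample a trajectory to n points by index interpolation, so paths of
    different lengths become comparable."""
    out = []
    for i in range(n):
        t = i * (len(path) - 1) / (n - 1)
        j, frac = int(t), t - int(t)
        x0, y0 = path[j]
        x1, y1 = path[min(j + 1, len(path) - 1)]
        out.append((x0 + frac * (x1 - x0), y0 + frac * (y1 - y0)))
    return out

def match_gesture(path: List[Point],
                  templates: Dict[str, List[Point]],
                  max_mean_dist: float = 0.2) -> Optional[str]:
    """Compare a captured trajectory to trained templates from the database;
    returns the best matching gesture name, or None if nothing is close."""
    probe = _resample(path)
    best, best_d = None, max_mean_dist
    for name, tmpl in templates.items():
        ref = _resample(tmpl)
        d = sum(((px - rx) ** 2 + (py - ry) ** 2) ** 0.5
                for (px, py), (rx, ry) in zip(probe, ref)) / len(probe)
        if d < best_d:
            best, best_d = name, d
    return best

# "Training": a rough horizontal swipe; the runtime capture is slightly noisy.
templates = {"swipe_right": [(0.0, 0.5), (0.5, 0.5), (1.0, 0.5)]}
print(match_gesture([(0.02, 0.52), (0.48, 0.49), (0.97, 0.5)], templates))
```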
- In some embodiments, the qualifying action may include a gaze of the user 102 towards the predetermined direction associated with one or more electronic devices 210. The gaze of the user 102 can be determined based on one or more of the following: a position of the eyes of the user 102, a position of the pupils or a contour of the irises of the eyes of the user 102, a position of the head of the user 102, an angle of inclination of the head of the user 102, and a rotation of the head of the user 102.
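- For example, a coarse head-pose-based gaze test could be implemented as follows; the yaw/pitch convention and the tolerance are assumptions, and a production system would rely on a dedicated gaze or head-pose estimator:

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def gaze_at_device(head_pos: Vec3, yaw_deg: float, pitch_deg: float,
                   device_pos: Vec3, tol_deg: float = 12.0) -> bool:
    """True if the gaze direction derived from head yaw/pitch points within
    tol_deg of the device. Convention (invented here): yaw 0 / pitch 0 looks
    along +z, yaw turns towards +x, pitch raises towards +y."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    gaze = (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))
    to_dev = tuple(d - h for d, h in zip(device_pos, head_pos))
    norm = math.sqrt(sum(c * c for c in to_dev)) or 1.0
    dot = sum(g * t / norm for g, t in zip(gaze, to_dev))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot)))) <= tol_deg

# Head at the origin, looking nearly straight ahead; TV 2 m away.
print(gaze_at_device((0, 0, 0), yaw_deg=-3.0, pitch_deg=0.0,
                     device_pos=(-0.1, 0.0, 2.0)))   # -> True
```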
- In other embodiments, the qualifying action may include a voice command generated by the user 102. Voice commands can be captured by the input unit 240 and processed by the computing unit 110 in order to recognize the command and compare it to a predetermined list of voice commands to find a match.
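- A minimal matcher for such predetermined voice commands, assuming a speech recognizer has already produced a transcript (the phrases and command names are illustrative only):

```python
from typing import Dict, Optional

# Predetermined voice commands per device (contents invented for illustration).
VOICE_COMMANDS: Dict[str, Dict[str, str]] = {
    "tv":    {"turn on": "power_on", "turn off": "power_off"},
    "audio": {"louder": "volume_up", "quieter": "volume_down"},
}

def match_voice_command(device_id: str, transcript: str) -> Optional[str]:
    """Normalize the recognized utterance and compare it to the predetermined
    list for the currently selected device."""
    phrase = " ".join(transcript.lower().split())   # trim and collapse spaces
    return VOICE_COMMANDS.get(device_id, {}).get(phrase)

print(match_voice_command("tv", "  Turn  ON "))   # -> "power_on"
```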
- The qualifying action may also include the receipt and identification of biometric data associated with the user 102. The biometric data may include a face, a voice, motion dynamics patterns, and so forth. Accordingly, the computing unit 110 may authenticate a particular user 102 to control one or another electronic device 210. Such a feature may prevent, for example, children from operating dangerous or unwanted electronic devices 210.
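- Such authorization could be gated with a simple permission table keyed by the recognized identity; everything below (user ids, device ids) is invented for illustration:

```python
from typing import Dict, Optional, Set

# Per-user permissions, e.g. established after face or voice recognition.
PERMISSIONS: Dict[str, Set[str]] = {
    "alice": {"tv", "audio", "lamp", "oven"},
    "child": {"tv", "lamp"},            # no access to dangerous devices
}

def authorize(user_id: Optional[str], device_id: str) -> bool:
    """Gate command generation on the recognized user's permissions;
    unrecognized users are denied by default."""
    if user_id is None:
        return False
    return device_id in PERMISSIONS.get(user_id, set())

print(authorize("child", "oven"))   # -> False
print(authorize("alice", "oven"))   # -> True
```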
- In some embodiments, the interface system 106 may require the user 102 to perform two or more qualifying actions. For example, to generate a particular control command for an electronic device 210, the user 102 may first point to a certain electronic device 210 using the object 104, then make a predetermined gesture using the object 104, and then provide a voice command or perform another gesture. Given that various qualifying actions are provided, it should be understood that multiple combinations can be used to generate different control commands.
- The comparing module 440 can be configured to compare the captured qualifying action to one or more predetermined actions associated with the direction towards which the object 104 was oriented (as determined by the orientation module 420). The aforementioned one or more predetermined actions can be stored in the database 460.
- The command generator 450 can be configured to selectively provide to one or more electronic devices 210, based on the comparison performed by the comparing module 440, a command associated with the at least one qualifying action identified by the qualifying action module 430. Accordingly, each control command, among a plurality of control commands stored in the database 460, is predetermined for a certain electronic device 210 and is associated with one or more certain gestures or motions and the location of the electronic device 210.
- The database 460 can be configured to store predetermined gestures, motions, qualifying actions, voice commands, control commands, electronic device location data, visual hulls or representations of users and objects, and so forth.
- FIG. 5 is a process flow diagram showing a method 500 for controlling one or more electronic devices 210 by recognition of gestures made by the object 104, according to an example embodiment.
- The method 500 may be performed by processing logic that may comprise hardware (e.g., dedicated logic, programmable logic, and microcode), software (such as software run on a general-purpose computer system or a dedicated machine), or a combination of both. In one example embodiment, the processing logic resides at the interface system 106, and the method 500 can be performed by the various modules discussed above with reference to FIGS. 2 and 4.
- Each of these modules can comprise processing logic. It will be appreciated by one of ordinary skill in the art that the foregoing modules may be virtual, and instructions said to be executed by a module may, in fact, be retrieved and executed by a processor. The foregoing modules may also include memory cards, servers, and/or computer discs. Although various modules may be configured to perform some or all of the various steps described herein, fewer or more modules may be provided and still fall within the scope of example embodiments.
- The method 500 may commence at operation 510, with the one or more 3D-sensors 108 capturing a series of successive 3D images in real time. At operation 520, the identification module 410 identifies the object 104.
- At operation 530, the orientation module 420 identifies the orientation of the object 104 and determines that the object 104 is oriented substantially towards a predetermined direction associated with a particular electronic device 210. Accordingly, the orientation module 420 may track the motion of the object 104 in real time.
- At operation 540, the qualifying action module 430 determines that at least one qualifying action is performed by the user 102 and/or the object 104. For this purpose, the qualifying action module 430 may track motions of the user 102 and/or the object 104 in real time.
- At operation 550, the comparing module 440 compares the qualifying action, as identified at operation 540, to one or more predetermined actions associated with the direction that was identified at operation 530 as the direction towards which the object is substantially oriented.
- At operation 560, the command generator 450 selectively provides, to the one or more electronic devices 210, a control command associated with the at least one qualifying action, based on the comparison performed at operation 550.
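- Tying these operations together, here is a sensor-agnostic sketch of one pass through method 500; each stage is injected as a callable, and the stubs in the usage example stand in for the real identification, orientation, qualifying action, comparison, and command modules:

```python
from typing import Callable, Optional, Sequence, Tuple

def run_method_500(frames: Sequence[object],
                   identify: Callable[[Sequence[object]], Optional[object]],
                   orient: Callable[[object], Optional[str]],
                   qualify: Callable[[Sequence[object]], Tuple[str, ...]],
                   lookup: Callable[[str, Tuple[str, ...]], Optional[str]],
                   send: Callable[[str, str], None]) -> None:
    """Operations 510-560 as one pass over a batch of captured 3D frames
    (the frames themselves are operation 510's output)."""
    obj = identify(frames)                  # 520: identify the object
    if obj is None:
        return
    device_id = orient(obj)                 # 530: pointed-at device
    if device_id is None:
        return
    actions = qualify(frames)               # 540: qualifying action(s)
    command = lookup(device_id, actions)    # 550: compare with the database
    if command is not None:
        send(device_id, command)            # 560: selectively provide

# Toy wiring: every stage is stubbed with canned results.
run_method_500(
    frames=[object()],
    identify=lambda f: "wand",
    orient=lambda o: "tv",
    qualify=lambda f: ("circle_gesture",),
    lookup=lambda d, a: "power_toggle" if a == ("circle_gesture",) else None,
    send=lambda d, c: print(f"-> {d}: {c}"),
)   # prints "-> tv: power_toggle"
```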
- FIG. 6 shows a diagrammatic representation of a computing device for a machine in the example electronic form of a computer system 600, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein can be executed.
- The machine operates as a standalone device or can be connected (e.g., networked) to other machines. In a networked deployment, the machine can operate in the capacity of a server, a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
- The machine can be a personal computer (PC), tablet PC, set-top box (STB), PDA, cellular telephone, portable music player (e.g., a portable hard drive audio device, such as a Moving Picture Experts Group Audio Layer 3 (MP3) player), web appliance, network router, switch, bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
- The example computer system 600 includes a processor or multiple processors 602 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 604, and a static memory 606, which communicate with each other via a bus 608. The computer system 600 can further include a video display unit 610 (e.g., a liquid crystal display (LCD) or cathode ray tube (CRT)). The computer system 600 also includes at least one input device 612, such as an alphanumeric input device (e.g., a keyboard), cursor control device (e.g., a mouse), microphone, digital camera, video camera, and so forth. The computer system 600 also includes a disk drive unit 614, a signal generation device 616 (e.g., a speaker), and a network interface device 618.
- The disk drive unit 614 includes a computer-readable medium 620, which stores one or more sets of instructions and data structures (e.g., instructions 622) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 622 can also reside, completely or at least partially, within the main memory 604 and/or within the processors 602 during execution by the computer system 600. The main memory 604 and the processors 602 also constitute machine-readable media.
- The instructions 622 can further be transmitted or received over the network 220 via the network interface device 618 utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP), CAN, Serial, and Modbus).
- While the computer-readable medium 620 is shown in an example embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
- The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions.
- The term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories and optical and magnetic media. Such media can also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memory (RAM), read-only memory (ROM), and the like.
- The example embodiments described herein can be implemented in an operating environment comprising computer-executable instructions (e.g., software) installed on a computer, in hardware, or in a combination of software and hardware. The computer-executable instructions can be written in a computer programming language or can be embodied in firmware logic. If written in a programming language conforming to a recognized standard, such instructions can be executed on a variety of hardware platforms and for interfaces to a variety of operating systems.
- Computer software programs for implementing the present method can be written in any number of suitable programming languages such as, for example, C, C++, C#, Cobol, Eiffel, Haskell, Visual Basic, Java, JavaScript, or Python, or with other compilers, assemblers, interpreters, or other computer languages or platforms.
Abstract
Description
- This application is Continuation-in-Part of Russian Patent Application Serial No. 2011127116, filed Jul. 4, 2011, which is incorporated herein by reference in its entirety for all purposes.
- 1. Technical Field
- This disclosure relates generally to computer interfaces and, more particularly, to methods for controlling electronic equipment by recognition of gestures made by an object.
- 2. Description of Related Art
- The approaches described in this section could be pursued but are not necessarily approaches that have previously been conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art, merely by virtue of their inclusion in this section.
- Interactive gesture interface systems are commonly used to interact with various electronic devices including gaming consoles, Television (TV) sets, computers, and so forth. The general principle of such systems is to detect human gestures or motions made by users, and generate commands based thereupon that cause electronic devices to perform certain actions. Gestures can originate from any bodily motion or state, but commonly originate from the face or hand. However, some systems also include emotion recognition features.
- The gesture interface systems may be based on various gesture recognition approaches that involve the utilization of cameras, moving sensors, acceleration sensors, position sensors, electronic handheld controllers, and so forth. Whichever approach is used, human gestures can be captured and recognized, and a particular action can be triggered by an electronic device. Particular examples may include wireless electronic handheld controllers, which enable users to control gaming consoles by detecting motions or gestures made by such controllers. While such systems became very popular, they are still quite complex and require the utilization of various handheld controllers that are typically different for different applications.
- Another approach involves utilization of 3D sensor devices capable of recognizing users' gestures or motions without dedicated handheld controllers or the like. Gestures are identified by processing users' images obtained by such 3D-sensors, and then they are interpreted to generate a control command. Control commands can be used to trigger particular actions performed by electronic equipment coupled to the 3D-sensor. Such systems are now widely deployed and generally used for gaming consoles.
- One of the major drawbacks of such systems is that they are not flexible and cannot generate control commands for multiple electronic devices concurrently connected to a single 3D-sensor or any other device for capturing human motions or gestures. Thus, the conventional technology fails to provide a technique for improved detection and interpretation of human gestures associated with a particular electronic device among a plurality of devices connected to the common 3D-sensor.
- This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
- In accordance with various embodiments and the corresponding disclosure thereof, methods and systems for controlling one or more electronic devices by recognition of gestures made by an object. The described methodologies enable users to interact with one or a plurality of electronic devices such as gaming consoles, computers, audio systems, video systems, and so forth. The interaction with various electronic devices can be performed with the help of at least one 3D-sensor being configured to recognize not only gestures, but also a particular electronic device among the plurality of electronic devices to which the gestures are dedicated.
- In accordance with one aspect, there is provided a computer-implemented method for controlling one or more electronic devices by recognition of gestures made by an object. The method may comprise capturing a series of successive 3D images in real time and identifying the object. The object may have a predetermined elongated shape. The method may also comprise identifying that the object is oriented substantially towards a predetermined direction, determining at least one qualifying action being performed by an user and/or the object, comparing the at least one qualifying action to one or more predetermined actions associated with the direction to which the object is oriented towards, and, based on the comparison, selectively providing to the one or more electronic devices a command associated with the at least one qualifying action.
- In some embodiments, the predetermined direction can be associated with the one or more electronic devices. The object can be selected from a group comprising a wand, an elongated pointing device, an arm, a hand, and one or more fingers of the user. The series of successive 3D images can be captured using at least one video camera or a 3D image sensor. In some examples, the object can be identified by performing one or more of: processing the captured series of successive 3D images to generate a depth map, determining geometrical parameters of the object, and identifying the object by matching the geometrical parameters to a predetermined object database. The determination of at least one qualifying action may comprise the determining and acknowledging of one or more of: a predetermined motion of the object, a predetermined gesture of the object, a gaze of the user towards the predetermined direction associated with one or more electronic devices, a predetermined motion of the user, a predetermined gesture of the user, biometric data of the user, and a voice command provided by the user. Biometric data of the user can be determined based on one or more of the following: face recognition, voice recognition, user body recognition, and recognition of a user motion dynamics pattern. The gaze of the user can be determined based on one or more of the following: a position of the eyes of the user, a position of the pupils or a contour of the irises of the eyes of the user, a position of the head of the user, an angle of inclination of the head of the user, and a rotation of the head of the user. The mentioned one or more electronic devices may comprise a computer, a game console, a TV set, a TV adapter, a communication device, a Personal Digital Assistant (PDA), a lighting device, an audio system, and a video system.
- According to another aspect, there is provided a system for controlling one or more electronic devices by recognition of gestures made by an object. The system may comprise at least one 3D image sensor configured to capture a series of successive 3D images in real time and a computing unit communicatively coupled to the at least one 3D image sensor. The computing unit can be configured to: identify the object; identify that the object is oriented substantially towards a predetermined direction; determine at least one qualifying action being performed by a user and/or the object; compare the at least one qualifying action to one or more predetermined actions associated with the direction to which the object is oriented towards; and, based on the comparison, selectively provide to the one or more electronic devices a command associated with the at least one qualifying action.
- In some example embodiments, the at least one 3D image sensor may comprise one or more of an infrared (IR) projector to generate modulated light, an IR camera to capture 3D images associated with the object or the user, and a color video camera. The IR projector, color video camera, and IR camera can be installed in a common housing. The color video camera and/or IR camera can be equipped with liquid lenses. The mentioned predetermined direction can associated with the one or more electronic devices. The object can be selected from a group comprising a wand, an elongated pointing device, an arm, a hand, and one or more fingers of the user. The computing unit can be configured to identify the object by performing the acts of: processing the captured series of successive 3D images to generate a depth map, determining geometrical parameters of the object, and identifying the object by matching the geometrical parameters to a predetermined object database. Furthermore, the computing unit can be configured to determine at least one qualifying action by performing the acts of determining and acknowledging of one or more of: a predetermined motion of the object, a predetermined gesture of the object, a gaze of the user towards the predetermined direction associated with one or more electronic devices, a predetermined motion of the user, a predetermined gesture of the user, biometric data of the user, and a voice command provided by the user. Biometric data of the user can be determined based on one or more of the following: face recognition, voice recognition, user body recognition, and recognition of user motion dynamics pattern. The gaze of the user can be determined based on one or more of the following: a position of the eyes of the user, a position of the pupils or a contour of the irises of the eyes of the user, a position of the head of the user, an angle of inclination of the head of the user, and a rotation of the head of the user.
- According to yet another aspect, there is provided a processor-readable medium. The medium may store instructions, which when executed by one or more processors, cause the one or more processors to: capture a series of successive 3D images in real time; identify the object; identify that the object is oriented substantially towards a predetermined direction; determine at least one qualifying action being performed by an user and/or the object; compare the at least one qualifying action to one or more predetermined actions associated with the direction to which the object is oriented towards; and, based on the comparison, selectively provide to the one or more electronic devices a command associated with the at least one qualifying action.
- To the accomplishment of the foregoing and related ends, the one or more aspects comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the drawings set forth in detail certain illustrative features of the one or more aspects. These features are indicative, however, of but a few of the various ways in which the principles of various aspects may be employed, and this description is intended to include all such aspects and their equivalents.
- Embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
-
FIG. 1 is a general illustration of a scene suitable for implementing methods for controlling one or more electronic devices by recognition of gestures made by an object. -
FIG. 2 shows an example system environment suitable for implementing methods for controlling one or more electronic devices by recognition of gestures made by an object. -
FIG. 3 shows an example embodiment of the 3D-sensor. -
FIG. 4 is a diagram of the computing unit, according to an example embodiment. -
FIG. 5 is a process flow diagram showing a method for controlling one or more electronic devices by recognition of gestures made by the object, according to an example embodiment. -
FIG. 6 is a diagrammatic representation of an example machine in the form of a computer system within which a set of instructions for the machine to perform any one or more of the methodologies discussed herein is executed. - The following detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show illustrations in accordance with example embodiments. These example embodiments, which are also referred to herein as “examples,” are described in enough detail to enable those skilled in the art to practice the present subject matter. The embodiments can be combined, other embodiments can be utilized, or structural, logical and electrical changes can be made without departing from the scope of what is claimed. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope is defined by the appended claims and their equivalents. In this document, the terms “a” and “an” are used, as is common in patent documents, to include one or more than one. In this document, the term “or” is used to refer to a nonexclusive “or,” such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated.
- The techniques of the embodiments disclosed herein may be implemented using a variety of technologies. For example, the methods described herein may be implemented in software executing on a computer system or in hardware utilizing either a combination of microprocessors or other specially designed application-specific integrated circuits (ASICs), programmable logic devices, or various combinations thereof. In particular, the methods described herein may be implemented by a series of computer-executable instructions residing on a storage medium such as a carrier wave, disk drive, or computer-readable medium. Exemplary forms of carrier waves may take the form of electrical, electromagnetic, or optical signals conveying digital data streams along a local network or a publicly accessible network such as the Internet.
- The embodiments described herein relate to computer-implemented methods and systems for controlling one or more electronic devices by recognition of gestures made by an object. The object, as used herein, may refer to any elongated object having prolonged shape and may include, for example, a wand, an elongated handheld pointing device, an arm, a hand, or one or more fingers of the user. Thus, according to inventive approaches described herein, gestures can be made by users with either their hands (arms or fingers) or handheld, elongated objects. In some embodiments, gestures can be made by handheld objects in combination with motions of arms, fingers, or other body parts of the user.
- In general, one or more 3D-sensors or video cameras can be used to recognize gestures. In the context of this document, various techniques for gesture identification and recognition can be used, and accordingly, various devices can be utilized. In one example embodiment, a single 3D-sensor can be used and may include an IR projector, an IR camera, and an optional color video camera, all embedded within a single housing. Image processing and interpretation can be performed by any computing device coupled to or embedding the 3D-sensor. Some examples may include a tabletop computer, laptop, tablet computer, gaming console, audio system, video system, phone, smart phone, PDA, or any other wired or wireless electronic device. Based on the image processing and interpretation, a particular control command can be generated and outputted by the computing device. For example, the computing device may recognize a particular gesture associated with a predetermined command and generate such a command for further input into a particular electronic device selected from a plurality of electronic devices. For instance, one command generated by the computing device and associated with a first gesture can be inputted to a gaming console, while another command, associated with a second gesture, can be inputted to an audio system. In other words, the computing device can be coupled to multiple electronic devices of the same or different types, and such electronic devices can be selectively controlled by the user.
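As an illustration only (this sketch is not part of the original disclosure), the device-selective command generation described above reduces to a lookup keyed by the selected device and the recognized gesture. The device names, gesture names, and the print() stand-in for a real output channel are invented assumptions:

```python
# Hypothetical routing table: (device, gesture) -> command. All names here
# are illustrative assumptions, not values from the patent.
COMMAND_MAP = {
    ("gaming_console", "circle"): "START_GAME",
    ("audio_system", "swipe_up"): "VOLUME_UP",
    ("tv_set", "nod"): "POWER_TOGGLE",
}

def dispatch(selected_device, recognized_gesture):
    """Look up and emit the command bound to (device, gesture), if any."""
    command = COMMAND_MAP.get((selected_device, recognized_gesture))
    if command is not None:
        print(f"-> {selected_device}: {command}")  # stand-in for a real output channel
    return command

dispatch("audio_system", "swipe_up")  # prints "-> audio_system: VOLUME_UP"
```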
- In some example embodiments, the computing device may be integrated with one or more controlled electronic device(s). For instance, the computing device and an optional 3D-sensor can be integrated with a gaming console. This gaming console can be configured to be coupled to other electronic devices such as a lighting device, audio system, video system, TV set, and so forth. Those skilled in the art would appreciate that the 3D-sensor, the computing device, and the various controlled electronic devices can be integrated with each other or interconnected in numerous different ways. It should also be understood that such systems may constitute at least some parts of an “intelligent house” and may be used as part of home automation systems.
- To select a particular electronic device and generate a control command for it, the user should perform two actions, either concurrently or in series. The first action includes pointing the object towards the particular device. This may include positioning the elongated object such that it is substantially oriented towards the particular device to be controlled. For example, the user may point at the device with the pointer finger. Alternatively, the user may orient an arm or hand towards the device. In some other examples, the user may orient the handheld object (e.g., a wand) towards the electronic device. In general, it should be understood that any elongated object can be used to designate a particular electronic device for further action. Such an elongated object may or may not include electronic components.
- To generate a particular control command for the selected electronic device, the user should perform the second action, which is referred to herein as a “qualifying action.” Once the interface system identifies that the user has performed both the first and the second action, a predetermined control command is generated for the desired electronic device. The qualifying action may include one or more different actions. In some embodiments, the qualifying action may refer to a predetermined motion or gesture made by the object. For example, the user may first point to an electronic device to “select” it, and then make a certain gesture (e.g., a circle motion, a nodding motion, or any other predetermined motion or gesture) to trigger the generation and output of a control command associated with the recognized gesture and the “selected” electronic device. In some other embodiments, the qualifying action may include a predetermined motion or gesture of the user. For example, the user may first point to an electronic device, and then make a certain gesture with the hand or head (e.g., the user may tap a pointer finger on a wand held in the hand).
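A minimal sketch of this two-phase interaction, with simplified event callbacks standing in for real 3D-sensor processing (the class, event names, and commands below are hypothetical):

```python
# Illustrative two-phase flow: a device is first "selected" by pointing,
# then a qualifying gesture triggers the command. Not the patent's code.
class PointAndQualify:
    def __init__(self, gesture_commands):
        self.gesture_commands = gesture_commands  # gesture name -> command
        self.selected_device = None

    def on_pointing(self, device):
        """First action: the elongated object is oriented towards a device."""
        self.selected_device = device

    def on_gesture(self, gesture):
        """Second action: a qualifying gesture made after selection."""
        if self.selected_device is None:
            return None  # no device selected yet; ignore stray gestures
        command = self.gesture_commands.get(gesture)
        return (self.selected_device, command) if command else None

flow = PointAndQualify({"circle": "TURN_ON", "nod": "TURN_OFF"})
flow.on_pointing("lighting_device")
print(flow.on_gesture("circle"))  # ('lighting_device', 'TURN_ON')
```

The point of the design is that a gesture with no prior pointing action selects nothing and therefore triggers nothing.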
- Furthermore, in some other embodiments, the qualifying action may include a gaze of the user towards the predetermined direction associated with one or more electronic devices. For example, the user may point to an electronic device while also looking at it. Such a combination of actions can be unequivocally interpreted by the interface system to mean that a certain command is to be performed. In still other embodiments, the qualifying action may refer to a voice command issued by the user. For example, the user may point to a TV set and say, “turn on,” to generate a turn-on command. In some embodiments, the qualifying action may include the receipt and identification of biometric data associated with the user. The biometric data may include a face, a voice, a motion dynamics pattern, and so forth. For example, face recognition or voice recognition can be used to authorize the user to control certain electronic devices.
- In some additional embodiments, the interface system may require the user to perform at least two or more qualifying actions. For example, to generate a particular control command for an electronic device, the user shall first use the object to point at the electronic device, then make a predetermined gesture using the object, and then provide a voice command. In another example, the user may point towards the electronic device, make a gesture, and turn the face towards the 3D-sensor for further face recognition and authentication. It should be understood that various combinations of qualifying actions can be performed and predetermined for the generation of a particular command.
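Requiring several qualifying actions can be sketched as a simple set-containment test; the action labels below are invented for illustration and are not prescribed by the disclosure:

```python
# Assumed labels for observed qualifying actions; a real system would emit
# these from its gesture, gaze, and voice recognizers.
REQUIRED_ACTIONS = {"pointing", "gesture:circle", "voice:turn on"}

def all_qualifying_actions_met(observed_actions):
    """Allow command generation only when every predetermined action was observed."""
    return REQUIRED_ACTIONS.issubset(observed_actions)

print(all_qualifying_actions_met({"pointing", "gesture:circle"}))                    # False
print(all_qualifying_actions_met({"pointing", "gesture:circle", "voice:turn on"}))   # True
```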
- The interface system may include a database of predetermined gestures, objects, and related information. Once a gesture is captured by the 3D-sensor, the computing device may compare the captured gesture with the list of predetermined gestures to find a match. Based on such a comparison, a predetermined command can be generated. Accordingly, the database may store and maintain a list of predetermined commands, each of which is associated with a particular device and a particular qualifying action (or combination of qualifying actions). It should also be understood that the locations of the various electronic devices can be pre-programmed in the system, or alternatively, they can be identified by the 3D-sensor in real time. For this purpose, the electronic devices can be provided with tags to be attached to their surfaces. Those skilled in the art would appreciate that various techniques can be used to identify electronic devices for the interface system.
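One plausible, deliberately simplified way to match a captured gesture against the stored list is a nearest-template comparison over resampled trajectories. The template shapes, distance metric, and threshold here are assumptions, not the patent's prescribed method:

```python
import math

# Assumed templates: gestures resampled to fixed-length 2D trajectories.
GESTURE_TEMPLATES = {
    "circle": [(math.cos(t / 8 * 2 * math.pi), math.sin(t / 8 * 2 * math.pi))
               for t in range(8)],
    "line":   [(t / 7.0, 0.0) for t in range(8)],
}

def trajectory_distance(a, b):
    """Mean point-to-point Euclidean distance between equal-length paths."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def find_match(captured, threshold=0.5):
    """Return the best-matching template name, or None if nothing is close."""
    name, template = min(GESTURE_TEMPLATES.items(),
                         key=lambda kv: trajectory_distance(captured, kv[1]))
    return name if trajectory_distance(captured, template) < threshold else None

print(find_match(GESTURE_TEMPLATES["circle"]))  # 'circle'
```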
- Referring now to the drawings,
FIG. 1 is a general illustration of a scene 100 suitable for implementing methods for controlling one or more electronic device(s) by recognition of gestures made by an object. In particular, FIG. 1 shows a user 102 holding a handheld elongated object 104, which can be used for interaction with an interface system 106. The interface system 106 may include both a 3D-sensor 108 and a computing unit 110, which can be stand-alone devices or can be embedded within a single housing. - The 3D-
sensor 108 can be configured to capture a series of 3D images, which can be further transmitted to and processed by the computing unit 110. As a result of the image processing, the computing unit 110 may first identify the object 104 and its relative orientation in a certain direction, and second, identify one or more “qualifying actions” as discussed above (e.g., identify a gesture made by the user 102 or the object 104). - The
interface system 106 may be operatively connected with various electronic devices 112-118. The electronic devices 112-118 may include any device capable of receiving electronic control commands and performing one or more certain actions upon receipt of such commands. For example, the electronic devices 112-118 may include desktop computers, laptops, tabletop computers, tablet computers, cellular phones, smart phones, PDAs, gaming consoles, TV sets, TV adapters, displays, audio systems, video systems, lighting devices, home appliances, or any combination or part thereof. According to the example shown in FIG. 1, there are a TV set 112, an audio system 114, a gaming console 116, and a lighting device 118. The electronic devices 112-118 are all operatively coupled to the interface system 106, as further depicted in FIG. 2. In some example embodiments, the interface system 106 may integrate one or more electronic devices (not shown). For example, the interface system 106 may be embedded in a gaming console 116 or desktop computer. Those skilled in the art should understand that various interconnections may be deployed for the devices 112-118. - The
user 102 may interact with the interface system 106 by making gestures or various motions with his or her hands, arms, fingers, legs, head, or other body parts; by making gestures or motions using the object 104; by issuing voice commands; by looking in a certain direction; or by any combination thereof. All of these motions, gestures, and voice commands can be predetermined so that the interface system 106 is able to identify them, match them to the list of pre-stored user commands, and generate a particular command for the electronic devices 112-118. In other words, the interface system 106 may be “taught” to identify and differentiate one or more motions or gestures. - The
object 104 may be any device of elongated shape and design. One example of the object 104 may include a wand or elongated pointing device. It is important to note that the object 104 may be free of any electronics; it could be any article of elongated shape. Although not described in detail in this document (so as not to obscure the general principles), the interface system 106 may be trained to identify and differentiate the object 104 as used by the user 102. The electronics-free object 104 may have different designs and may imitate various pieces of sporting equipment (e.g., a baseball bat, racket, machete, sword, steering wheel, and so forth). In some embodiments, the object 104 may have a specific color design or color tags. Such color tags or colored areas may have various designs and shapes, and in general, they may help facilitate better identification of the object 104 by the interface system 106. -
FIG. 2 shows an example system environment 200 suitable for implementing methods for controlling one or more electronic device(s) by recognition of gestures made by an object. The system environment 200 comprises the interface system 106, one or more electronic devices 210, and a network 220. - The
interface system 106 may include at least one 3D-sensor 108, the computing unit 110, a communication unit 230, and an optional input unit 240. All of these units can be stand-alone devices or can be embedded within a single housing. The 3D-sensor 108 may be implemented in various ways and may include an image capture device. Further details about the 3D-sensor 108 are provided below, with reference to FIG. 3. It should also be appreciated that the interface system 106 may include two or more 3D-sensors 108 spaced apart from each other. - The aforementioned one or more electronic device(s) 210 are, in general, any device configured to trigger one or more predefined action(s) upon receipt of a certain control command. Some examples of
electronic devices 210 include, but are not limited to, computers, displays, audio systems, video systems, gaming consoles, and lighting devices. In one embodiment, the system environment 200 may comprise multiple electronic devices 210 of different types, while in another embodiment, the multiple electronic devices 210 may be of the same type (e.g., two or more interconnected gaming consoles are used). - The
communication unit 230 may be configured to transfer data between the interface system 106 and one or more electronic device(s) 210. The communication unit 230 may include any wireless or wired network interface controller, including, for example, a Local Area Network (LAN) adapter, Wide Area Network (WAN) adapter, Wireless Transmit Receiving Unit (WTRU), WiFi adapter, Bluetooth adapter, GSM/CDMA adapter, and so forth. - The
input unit 240 may be configured to enable users to input data of any nature. In one example, the input unit 240 may include a keyboard or ad hoc buttons allowing the users to input commands, program an interface, customize settings, and so forth. According to another example, the input unit 240 includes a microphone to capture user voice commands, which can then be processed by the computing unit 110. Various different input technologies can be used in the input unit 240, including touch screen technologies, pointing devices, and so forth. - The
network 220 may couple the interface system 106 and one or more electronic device(s) 210. The network 220 is a network of data processing nodes interconnected for the purpose of data communication and may be utilized to communicatively couple various components of the environment 200. The network 220 may include the Internet or any other network capable of communicating data between devices. Suitable networks may include or interface with any one or more of the following: a local intranet, PAN (Personal Area Network), LAN, WAN, MAN (Metropolitan Area Network), virtual private network (VPN), storage area network (SAN), frame relay connection, Advanced Intelligent Network (AIN) connection, synchronous optical network (SONET) connection, digital T1, T3, E1 or E3 line, Digital Data Service (DDS) connection, DSL (Digital Subscriber Line) connection, Ethernet connection, ISDN (Integrated Services Digital Network) line, dial-up port such as a V.90, V.34 or V.34bis analog modem connection, cable modem, ATM (Asynchronous Transfer Mode) connection, FDDI (Fiber Distributed Data Interface) connection, or CDDI (Copper Distributed Data Interface) connection. Furthermore, communications may also include links to any of a variety of wireless networks, including WAP (Wireless Application Protocol), GPRS (General Packet Radio Service), GSM, CDMA or TDMA (Time Division Multiple Access), cellular phone networks, GPS, CDPD (cellular digital packet data), RIM (Research in Motion, Limited) duplex paging network, Bluetooth radio, or an IEEE 802.11-based radio frequency network. The network 220 can further include or interface with any one or more of the following: an RS-232 serial connection, IEEE-1394 (FireWire) connection, Fibre Channel connection, IrDA (infrared) port, SCSI (Small Computer Systems Interface) connection, USB (Universal Serial Bus) connection, or another wired or wireless, digital or analog interface or connection, mesh or Digi® networking. -
FIG. 3 shows an example embodiment of the 3D-sensor 108. In some embodiments, the 3D-sensor 108 may comprise at least a color video camera 310 configured to capture images. In some other embodiments, the 3D-sensor 108 may include an IR projector 320 to generate modulated light and an IR camera 330 to capture 3D images associated with the object 104 or the user 102. In yet other exemplary embodiments, the 3D-sensor 108 may comprise the color video camera 310, the IR projector 320, and the IR camera 330. In an example, the color video camera 310, IR projector 320, and IR camera 330 are all encased within a single housing. - Furthermore, in some embodiments, the 3D-
sensor 108 may also comprise a computing module 340 for image analysis, pre-processing, processing, or generation of commands for the color video camera 310, IR projector 320, or IR camera 330. In some other examples, such operations can be performed by the computing unit 110. The 3D-sensor 108 may also include a bus 350 interconnecting the color video camera 310, IR projector 320, and/or IR camera 330, depending on which devices are used. - The 3D-
sensor 108 may also include one or more liquid lenses 360, which can be used for the color video camera 310, the IR camera 330, or both. In general, liquid lenses 360 can be used to adaptively focus the cameras onto a certain object or objects. The liquid lens 360 may use one or more fluids to create an infinitely variable lens without any moving parts, by controlling the meniscus (the surface of the liquid). The control of the liquid lens 360 may be performed by the computing module 340 or the computing unit 110. - Additional details of the 3D-
sensor 108 and how captured image data can be processed are disclosed in the Russian patent application serial number 2011127116, which is incorporated herein by reference in its entirety. -
FIG. 4 is a diagram of the computing unit 110, according to an example embodiment. As shown in the figure, the computing unit 110 may comprise an identification module 410, an orientation module 420, a qualifying action module 430, a comparing module 440, a command generator 450, and a database 460. In other embodiments, the computing unit 110 may include additional, fewer, or different modules for various applications. Furthermore, all modules can be integrated within a single system, or alternatively, can be remotely located and optionally accessed via a third party. - The
identification module 410 can be configured to identify the object 104 and/or the user 102. The identification process may include processing the series of successive 3D images captured by the 3D-sensor 108. A depth map is generated as the result of such processing. Further processing of the depth map enables the determination of geometrical parameters of the object 104 or the user 102. For example, a virtual hull or skeleton can be created. Once the geometrical parameters are defined, the object 104 can be identified by matching these geometrical parameters to predetermined objects as stored in the database 460.
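For illustration, the identification step might reduce a segmented depth point cloud to coarse geometrical parameters and match them against stored profiles. The profiles, tolerance, and measurement shortcut below are invented assumptions standing in for real depth-map processing:

```python
# Assumed object profiles (meters); a real database 460 would hold richer data.
OBJECT_PROFILES = {"wand": {"length": 0.35, "width": 0.02},
                   "arm":  {"length": 0.60, "width": 0.08}}

def measure(points):
    """Approximate length/width from axis-aligned extents of 3D points."""
    xs, ys, zs = zip(*points)
    extents = sorted((max(v) - min(v) for v in (xs, ys, zs)), reverse=True)
    return {"length": extents[0], "width": extents[1]}

def identify(points, tolerance=0.05):
    """Match measured parameters against each stored profile, within tolerance."""
    params = measure(points)
    for name, profile in OBJECT_PROFILES.items():
        if all(abs(params[k] - profile[k]) <= tolerance for k in profile):
            return name
    return None

# A roughly wand-shaped cloud: about 36 cm long and 2 cm thick.
cloud = [(0.0, 0.0, 0.0), (0.36, 0.02, 0.01), (0.18, 0.01, 0.0)]
print(identify(cloud))  # 'wand'
```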
- The orientation module 420 can be configured to identify that the object 104 is oriented substantially towards a predetermined direction. More specifically, the orientation module 420 can track movements of the object 104 so as to identify that the object 104 is oriented towards a certain direction for a predetermined period of time. Such certain directions can be associated with the electronic devices 210 or the interface system 106. It should be understood that the positions of the electronic devices 210 can be preliminarily stored in the database 460, or the interface system 106 can be trained to identify and store locations associated with the electronic devices 210. In some embodiments, the interface system 106 can obtain and store images of the various electronic devices 210 such that, in the future, they can be easily identified. In some further embodiments, the electronic devices 210 can be provided with tags (e.g., color tags, RFID tags, bar code tags, and so forth). Once they are identified, the interface system 106 can associate the tags with certain locations in 3D space. Those skilled in the art would appreciate that various approaches can be used to identify the electronic devices 210 and their associated locations, so that the orientation of the object 104 towards such locations can be easily identified by the interface system 106.
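A hedged sketch of the orientation test: treat the tracked object as a direction ray and require it to stay within an angular tolerance of a stored device direction for a minimum hold time. The tolerance, hold time, and sample format are assumptions:

```python
import math

def angle_between(v1, v2):
    """Angle in radians between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def pointed_long_enough(samples, device_dir, max_angle=0.15, hold_time=1.0):
    """samples: list of (timestamp, direction_vector) from object tracking."""
    start = None
    for t, direction in samples:
        if angle_between(direction, device_dir) <= max_angle:
            start = t if start is None else start
            if t - start >= hold_time:
                return True
        else:
            start = None  # the object wandered off target; restart the timer
    return False

track = [(0.0, (1, 0, 0)), (0.5, (1, 0.05, 0)), (1.1, (1, 0.02, 0))]
print(pointed_long_enough(track, (1, 0, 0)))  # True
```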
- The qualifying action module 430 can be configured to track motions of the user 102 or the object 104, and to determine at least one qualifying action being performed by the user 102 or the object 104. As mentioned, the qualifying action may include one or more different actions. In some embodiments, the qualifying action may refer to a predetermined motion or gesture made by the object 104. For example, a nodding motion or a circle motion can be considered a qualifying action. In some other embodiments, the qualifying action may include a predetermined motion or gesture of the user 102. For example, the user 102 may perform a gesture with the hand or head. There are no restrictions on such gestures; the only requirement is that the interface system 106 be able to differentiate and identify them. Accordingly, it should be understood that the interface system 106 can keep previously stored reference motions or reference gestures in the database 460. In some embodiments, the interface system 106 can be trained by performing various motions and gestures, such that the sample motions and gestures can be stored in the database 460 for further comparison with gestures captured in real time.
- In some other embodiments, the qualifying action may include a gaze of the user 102 towards the predetermined direction associated with one or more electronic devices 210. The gaze of the user 102 can be determined based on one or more of the following: the position of the eyes of the user 102, the position of the pupils or a contour of the irises of the eyes of the user 102, the position of the head of the user 102, the angle of inclination of the head of the user 102, and a rotation of the head of the user 102.
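Purely as an illustration of combining these cues, gaze angles might be approximated as head pose plus a scaled pupil offset and compared against the device direction; the angular model, gain, and tolerance are invented for this sketch and are not the patent's method:

```python
def gaze_towards(head_yaw_deg, head_pitch_deg, pupil_dx, pupil_dy,
                 device_yaw_deg, device_pitch_deg, tolerance_deg=10.0):
    """Approximate gaze angles as head pose plus a scaled pupil offset."""
    PUPIL_GAIN = 30.0  # assumed degrees of gaze per unit of pupil offset
    gaze_yaw = head_yaw_deg + PUPIL_GAIN * pupil_dx
    gaze_pitch = head_pitch_deg + PUPIL_GAIN * pupil_dy
    return (abs(gaze_yaw - device_yaw_deg) <= tolerance_deg and
            abs(gaze_pitch - device_pitch_deg) <= tolerance_deg)

# Head turned 20 degrees right, pupils slightly right: gaze ~26 degrees,
# device at 30 degrees, within the assumed tolerance.
print(gaze_towards(20.0, 0.0, 0.2, 0.0, 30.0, 0.0))  # True
```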
- In some additional embodiments, the qualifying action may include a voice command generated by the user 102. Voice commands can be captured by the input unit 240 and processed by the computing unit 110 in order to recognize the command and compare it to a predetermined list of voice commands to find a match.
- In some additional embodiments, the qualifying action may include the receipt and identification of biometric data associated with the user 102. The biometric data may include a face, a voice, motion dynamics patterns, and so forth. Based on the captured biometric data, the computing unit 110 may authenticate a particular user 102 to control one or another electronic device 210. Such a feature may prevent, for example, children from operating dangerous or unwanted electronic devices 210.
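Such biometric gating can be sketched as an allow-list check performed after the user's identity has been recognized; the identities and per-device policy below are invented examples:

```python
# Hypothetical per-device allow lists; a real system would populate these
# after enrolling users via face or voice recognition.
DEVICE_POLICY = {
    "gaming_console": {"alice", "bob"},
    "power_tools":    {"alice"},  # e.g., devices children must not operate
}

def authorized(user_id, device):
    """Forward a command only if the recognized user may control the device."""
    return user_id in DEVICE_POLICY.get(device, set())

print(authorized("bob", "gaming_console"))  # True
print(authorized("bob", "power_tools"))     # False
```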
- In some embodiments, the interface system 106 may require that the user 102 perform at least two or more qualifying actions. For example, to generate a particular control command for an electronic device 210, the user 102 shall first point to a certain electronic device 210 using the object 104, then make a predetermined gesture using the object 104, and finally provide a voice command or perform another gesture. Given that various qualifying actions are provided, it should be understood that multiple combinations can be used to generate different control commands.
- The comparing module 440 can be configured to compare the captured qualifying action to one or more predetermined actions associated with the direction towards which the object 104 was oriented (as determined by the orientation module 420). The aforementioned one or more predetermined actions can be stored in the database 460.
- The command generator 450 can be configured to selectively provide, to one or more electronic devices 210 and based on the comparison performed by the comparing module 440, a command associated with the at least one qualifying action identified by the qualifying action module 430. Accordingly, each control command, among a plurality of control commands stored in the database 460, is predetermined for a certain electronic device 210 and is associated with one or more certain gestures or motions and with the location of that electronic device 210.
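A compact sketch of this compare-and-generate path, with a table of rows mimicking the role of database 460 (the rows themselves are invented examples):

```python
# Assumed rows: each binds a device and a qualifying action to a command.
DB = [
    {"device": "tv_set", "action": "gesture:nod", "command": "POWER_TOGGLE"},
    {"device": "tv_set", "action": "voice:turn on", "command": "POWER_ON"},
    {"device": "audio_system", "action": "gesture:circle", "command": "PLAY"},
]

def generate_command(pointed_device, captured_action):
    """Return the predetermined command for this device/action pair, if any."""
    for row in DB:
        if row["device"] == pointed_device and row["action"] == captured_action:
            return row["command"]
    return None  # no match: the qualifying action is ignored

print(generate_command("tv_set", "voice:turn on"))  # POWER_ON
```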
- The database 460 can be configured to store predetermined gestures, motions, qualifying actions, voice commands, control commands, electronic device location data, visual hulls or representations of users and objects, and so forth. -
FIG. 5 is a process flow diagram showing a method 500 for controlling one or more electronic devices 210 by recognition of gestures made by the object 104, according to an example embodiment. The method 500 may be performed by processing logic that may comprise hardware (e.g., dedicated logic, programmable logic, and microcode), software (such as software run on a general-purpose computer system or a dedicated machine), or a combination of both. In one example embodiment, the processing logic resides at the interface system 106. - The
method 500 can be performed by the various modules discussed above with reference to FIGS. 2 and 4. Each of these modules can comprise processing logic. It will be appreciated by one of ordinary skill in the art that examples of the foregoing modules may be virtual, and instructions said to be executed by a module may, in fact, be retrieved and executed by a processor. The foregoing modules may also include memory cards, servers, and/or computer discs. Although various modules may be configured to perform some or all of the various steps described herein, fewer or more modules may be provided and still fall within the scope of example embodiments. - As shown in
FIG. 5, the method 500 may commence at operation 510, with the one or more 3D-sensors 108 capturing a series of successive 3D images in real time. At operation 520, the identification module 410 identifies the object 104. - At
operation 530, the orientation module 420 identifies the orientation of the object 104 and determines that the object 104 is oriented substantially towards a predetermined direction associated with a particular electronic device 210. Accordingly, the orientation module 420 may track the motion of the object 104 in real time. - At
operation 540, the qualifying action module 430 determines that at least one qualifying action is performed by the user 102 and/or the object 104. For this purpose, the qualifying action module 430 may track the motions of the user 102 and/or the object 104 in real time. - At
operation 550, the comparing module 440 compares the qualifying action, as identified at operation 540, to one or more predetermined actions associated with the direction that was identified at operation 530 as the direction towards which the object is substantially oriented. - At
operation 560, the command generator 450 selectively provides, to the one or more electronic devices 210, a control command associated with at least one qualifying action and based on the comparison performed at operation 550.
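Tying operations 510 through 560 together, the overall flow can be sketched with every sensing stage stubbed out; the stubs below stand in for real 3D-image processing and exist only to show the control flow:

```python
def capture_frames():                   # operation 510: 3D-sensor frames
    return ["frame"]

def identify_object(frames):            # operation 520: identification module
    return "wand"

def find_pointed_device(frames):        # operation 530: orientation module
    return "tv_set"

def detect_qualifying_action(frames):   # operation 540: qualifying action module
    return "gesture:nod"

# Assumed predetermined bindings; a real system would read these from database 460.
PREDETERMINED = {("tv_set", "gesture:nod"): "POWER_TOGGLE"}

def run_method_500():
    frames = capture_frames()
    if identify_object(frames) is None:
        return None  # no known object in view; nothing to do
    device = find_pointed_device(frames)
    action = detect_qualifying_action(frames)
    # operations 550-560: compare and selectively provide the command
    return PREDETERMINED.get((device, action))

print(run_method_500())  # POWER_TOGGLE
```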
FIG. 6 shows a diagrammatic representation of a computing device for a machine in the example electronic form of a computer system 600, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein can be executed. In example embodiments, the machine operates as a standalone device or can be connected (e.g., networked) to other machines. In a networked deployment, the machine can operate in the capacity of a server, a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine can be a personal computer (PC), tablet PC, set-top box (STB), PDA, cellular telephone, portable music player (e.g., a portable hard drive audio device, such as a Moving Picture Experts Group Audio Layer 3 (MP3) player), web appliance, network router, switch, bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that use or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. - The
example computer system 600 includes a processor or multiple processors 602 (e.g., a central processing unit (CPU), graphics processing unit (GPU), or both), a main memory 604, and a static memory 606, which communicate with each other via a bus 608. The computer system 600 can further include a video display unit 610 (e.g., a liquid crystal display (LCD) or cathode ray tube (CRT)). The computer system 600 also includes at least one input device 612, such as an alphanumeric input device (e.g., a keyboard), cursor control device (e.g., a mouse), microphone, digital camera, video camera, and so forth. The computer system 600 also includes a disk drive unit 614, a signal generation device 616 (e.g., a speaker), and a network interface device 618. - The
disk drive unit 614 includes a computer-readable medium 620, which stores one or more sets of instructions and data structures (e.g., instructions 622) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 622 can also reside, completely or at least partially, within the main memory 604 and/or within the processors 602 during execution by the computer system 600. The main memory 604 and the processors 602 also constitute machine-readable media. - The
instructions 622 can further be transmitted or received over the network 220 via the network interface device 618 utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP), CAN, Serial, and Modbus). - While the computer-
readable medium 620 is shown in an example embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine, and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. The term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media. Such media can also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memory (RAM), read only memory (ROM), and the like. - The example embodiments described herein can be implemented in an operating environment comprising computer-executable instructions (e.g., software) installed on a computer, in hardware, or in a combination of software and hardware. The computer-executable instructions can be written in a computer programming language or can be embodied in firmware logic. If written in a programming language conforming to a recognized standard, such instructions can be executed on a variety of hardware platforms and for interfaces to a variety of operating systems. Although not limited thereto, computer software programs for implementing the present method can be written in any number of suitable programming languages such as, for example, C, C++, C#, Cobol, Eiffel, Haskell, Visual Basic, Java, JavaScript, Python, or other compilers, assemblers, interpreters or other computer languages or platforms.
- Thus, methods and systems for controlling one or more electronic device(s) by recognition of gestures made by an object have been described. The disclosed technique provides a useful tool to enable people to interact with various electronic devices based on gestures, motions, voice commands, and gaze information.
- Although embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes can be made to these example embodiments without departing from the broader spirit and scope of the present application. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/RU2013/000188 WO2013176574A1 (en) | 2012-05-23 | 2013-03-12 | Methods and systems for mapping pointing device on depth map |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
RU2011127116 | 2011-07-04 | ||
RU2011127116/08A RU2455676C2 (en) | 2011-07-04 | 2011-07-04 | Method of controlling device using gestures and 3d sensor for realising said method |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130010207A1 true US20130010207A1 (en) | 2013-01-10 |
Family
ID=44804813
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/478,378 Active 2032-12-29 US8823642B2 (en) | 2011-07-04 | 2012-05-23 | Methods and systems for controlling devices using gestures and related 3D sensor |
US13/478,457 Abandoned US20130010207A1 (en) | 2011-07-04 | 2012-05-23 | Gesture based interactive control of electronic equipment |
US13/541,684 Abandoned US20130010071A1 (en) | 2011-07-04 | 2012-07-04 | Methods and systems for mapping pointing device on depth map |
US13/541,681 Active 2033-03-08 US8896522B2 (en) | 2011-07-04 | 2012-07-04 | User-centric three-dimensional interactive control environment |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/478,378 Active 2032-12-29 US8823642B2 (en) | 2011-07-04 | 2012-05-23 | Methods and systems for controlling devices using gestures and related 3D sensor |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/541,684 Abandoned US20130010071A1 (en) | 2011-07-04 | 2012-07-04 | Methods and systems for mapping pointing device on depth map |
US13/541,681 Active 2033-03-08 US8896522B2 (en) | 2011-07-04 | 2012-07-04 | User-centric three-dimensional interactive control environment |
Country Status (2)
Country | Link |
---|---|
US (4) | US8823642B2 (en) |
RU (1) | RU2455676C2 (en) |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130033649A1 (en) * | 2011-08-05 | 2013-02-07 | Samsung Electronics Co., Ltd. | Method for controlling electronic apparatus based on motion recognition, and electronic apparatus applying the same |
US20130324244A1 (en) * | 2012-06-04 | 2013-12-05 | Sony Computer Entertainment Inc. | Managing controller pairing in a multiplayer game |
US9002714B2 (en) | 2011-08-05 | 2015-04-07 | Samsung Electronics Co., Ltd. | Method for controlling electronic apparatus based on voice recognition and motion recognition, and electronic apparatus applying the same |
WO2015088141A1 (en) * | 2013-12-11 | 2015-06-18 | Lg Electronics Inc. | Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances |
WO2015130718A1 (en) * | 2014-02-27 | 2015-09-03 | Microsoft Technology Licensing, Llc | Tracking objects during processes |
US20150279369A1 (en) * | 2014-03-27 | 2015-10-01 | Samsung Electronics Co., Ltd. | Display apparatus and user interaction method thereof |
WO2016003100A1 (en) * | 2014-06-30 | 2016-01-07 | Alticast Corporation | Method for displaying information and displaying device thereof |
US20160029009A1 (en) * | 2014-07-24 | 2016-01-28 | Etron Technology, Inc. | Attachable three-dimensional scan module |
US20160088804A1 (en) * | 2014-09-29 | 2016-03-31 | King Abdullah University Of Science And Technology | Laser-based agriculture system |
US9408452B1 (en) | 2015-11-19 | 2016-08-09 | Khaled A. M. A. A. Al-Khulaifi | Robotic hair dryer holder system with tracking |
US20160310838A1 (en) * | 2015-04-27 | 2016-10-27 | David I. Poisner | Magic wand methods, apparatuses and systems |
US9692756B2 (en) | 2015-09-24 | 2017-06-27 | Intel Corporation | Magic wand methods, apparatuses and systems for authenticating a user of a wand |
WO2018084576A1 (en) * | 2016-11-03 | 2018-05-11 | Samsung Electronics Co., Ltd. | Electronic device and controlling method thereof |
US9984686B1 (en) * | 2015-03-17 | 2018-05-29 | Amazon Technologies, Inc. | Mapping device capabilities to a predefined set |
US10122999B2 (en) * | 2015-10-27 | 2018-11-06 | Samsung Electronics Co., Ltd. | Image generating method and image generating apparatus |
US20190019515A1 (en) * | 2016-04-29 | 2019-01-17 | VTouch Co., Ltd. | Optimum control method based on multi-mode command of operation-voice, and electronic device to which same is applied |
US10328342B2 (en) | 2015-09-24 | 2019-06-25 | Intel Corporation | Magic wand methods, apparatuses and systems for defining, initiating, and conducting quests |
US10365620B1 (en) | 2015-06-30 | 2019-07-30 | Amazon Technologies, Inc. | Interoperability of secondary-device hubs |
US20190308326A1 (en) * | 2016-09-28 | 2019-10-10 | Cognex Corporation | Simultaneous kinematic and hand-eye calibration |
EP3580692A4 (en) * | 2017-02-24 | 2020-02-26 | Samsung Electronics Co., Ltd. | Vision-based object recognition device and method for controlling the same |
US10655951B1 (en) | 2015-06-25 | 2020-05-19 | Amazon Technologies, Inc. | Determining relative positions of user devices |
KR20200130006A (en) * | 2019-05-10 | 2020-11-18 | 주식회사 엔플러그 | Health care system and method using lighting device based on IoT |
US11290518B2 (en) * | 2017-09-27 | 2022-03-29 | Qualcomm Incorporated | Wireless control of remote devices through intention codes over a wireless connection |
US11331006B2 (en) | 2019-03-05 | 2022-05-17 | Physmodo, Inc. | System and method for human motion detection and tracking |
US20220253153A1 (en) * | 2021-02-10 | 2022-08-11 | Universal City Studios Llc | Interactive pepper's ghost effect system |
US11481036B2 (en) * | 2018-04-13 | 2022-10-25 | Beijing Jingdong Shangke Information Technology Co., Ltd. | Method, system for determining electronic device, computer system and readable storage medium |
US11497961B2 (en) | 2019-03-05 | 2022-11-15 | Physmodo, Inc. | System and method for human motion detection and tracking |
US11521038B2 (en) | 2018-07-19 | 2022-12-06 | Samsung Electronics Co., Ltd. | Electronic apparatus and control method thereof |
US20230024006A1 (en) * | 2015-06-12 | 2023-01-26 | Research & Business Foundation Sungkyunkwan University | Embedded system, fast structured light based 3d camera system and method for obtaining 3d images using the same |
US11732994B1 (en) | 2020-01-21 | 2023-08-22 | Ibrahim Pasha | Laser tag mobile station apparatus system, method and computer program product |
US11792189B1 (en) * | 2017-01-09 | 2023-10-17 | United Services Automobile Association (Usaa) | Systems and methods for authenticating a user using an image capture device |
US11960648B2 (en) * | 2022-04-12 | 2024-04-16 | Robert Bosch Gmbh | Method for determining a current viewing direction of a user of data glasses with a virtual retina display and data glasses |
Families Citing this family (83)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8121361B2 (en) | 2006-05-19 | 2012-02-21 | The Queen's Medical Center | Motion tracking system for real time adaptive imaging and spectroscopy |
US9414051B2 (en) | 2010-07-20 | 2016-08-09 | Memory Engine, Incorporated | Extensible authoring and playback platform for complex virtual reality interactions and immersive applications |
US9477302B2 (en) * | 2012-08-10 | 2016-10-25 | Google Inc. | System and method for programing devices within world space volumes |
US20150153715A1 (en) * | 2010-09-29 | 2015-06-04 | Google Inc. | Rapidly programmable locations in space |
US9030425B2 (en) | 2011-04-19 | 2015-05-12 | Sony Computer Entertainment Inc. | Detection of interaction with virtual object from finger color change |
EP2747641A4 (en) | 2011-08-26 | 2015-04-01 | Kineticor Inc | Methods, systems, and devices for intra-scan motion correction |
US9628843B2 (en) * | 2011-11-21 | 2017-04-18 | Microsoft Technology Licensing, Llc | Methods for controlling electronic devices using gestures |
US9310895B2 (en) | 2012-10-12 | 2016-04-12 | Microsoft Technology Licensing, Llc | Touchless input |
RU2012145783A (en) * | 2012-10-26 | 2014-05-10 | Дисплаир, Инк. | METHOD AND DEVICE FOR RIGID CONTROL FOR MULTIMEDIA DISPLAY |
CN103019586B (en) * | 2012-11-16 | 2017-03-15 | 小米科技有限责任公司 | User interface management method and device |
US9459760B2 (en) | 2012-11-16 | 2016-10-04 | Xiaomi Inc. | Method and device for managing a user interface |
US10423214B2 (en) | 2012-11-20 | 2019-09-24 | Samsung Electronics Company, Ltd | Delegating processing from wearable electronic device |
US11157436B2 (en) | 2012-11-20 | 2021-10-26 | Samsung Electronics Company, Ltd. | Services associated with wearable electronic device |
US11372536B2 (en) | 2012-11-20 | 2022-06-28 | Samsung Electronics Company, Ltd. | Transition and interaction model for wearable electronic device |
US10551928B2 (en) | 2012-11-20 | 2020-02-04 | Samsung Electronics Company, Ltd. | GUI transitions on wearable electronic device |
US10185416B2 (en) | 2012-11-20 | 2019-01-22 | Samsung Electronics Co., Ltd. | User gesture input to wearable electronic device involving movement of device |
US8994827B2 (en) | 2012-11-20 | 2015-03-31 | Samsung Electronics Co., Ltd | Wearable electronic device |
US11237719B2 (en) | 2012-11-20 | 2022-02-01 | Samsung Electronics Company, Ltd. | Controlling remote electronic device with wearable electronic device |
US9717461B2 (en) | 2013-01-24 | 2017-08-01 | Kineticor, Inc. | Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan |
US10327708B2 (en) | 2013-01-24 | 2019-06-25 | Kineticor, Inc. | Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan |
US9305365B2 (en) | 2013-01-24 | 2016-04-05 | Kineticor, Inc. | Systems, devices, and methods for tracking moving targets |
WO2014118768A1 (en) * | 2013-01-29 | 2014-08-07 | Opgal Optronic Industries Ltd. | Universal serial bus (usb) thermal imaging camera kit |
US9083960B2 (en) | 2013-01-30 | 2015-07-14 | Qualcomm Incorporated | Real-time 3D reconstruction with power efficient depth sensor usage |
EP2950714A4 (en) | 2013-02-01 | 2017-08-16 | Kineticor, Inc. | Motion tracking system for real time adaptive motion compensation in biomedical imaging |
JP2014153663A (en) * | 2013-02-13 | 2014-08-25 | Sony Corp | Voice recognition device, voice recognition method and program |
KR101872426B1 (en) * | 2013-03-14 | 2018-06-28 | 인텔 코포레이션 | Depth-based user interface gesture control |
KR102091028B1 (en) * | 2013-03-14 | 2020-04-14 | 삼성전자 주식회사 | Method for providing user's interaction using multi hovering gesture |
WO2014149700A1 (en) * | 2013-03-15 | 2014-09-25 | Intel Corporation | System and method for assigning voice and gesture command areas |
WO2014185808A1 (en) * | 2013-05-13 | 2014-11-20 | 3Divi Company | System and method for controlling multiple electronic devices |
RU2522848C1 (en) * | 2013-05-14 | 2014-07-20 | Федеральное государственное бюджетное учреждение "Национальный исследовательский центр "Курчатовский институт" | Method of controlling device using eye gestures in response to stimuli |
US9144744B2 (en) | 2013-06-10 | 2015-09-29 | Microsoft Corporation | Locating and orienting device in space |
DE102013012285A1 (en) * | 2013-07-24 | 2015-01-29 | Giesecke & Devrient Gmbh | Method and device for value document processing |
KR102094347B1 (en) * | 2013-07-29 | 2020-03-30 | 삼성전자주식회사 | Auto-cleaning system, cleaning robot and controlling method thereof |
CN104349197B (en) * | 2013-08-09 | 2019-07-26 | 联想(北京)有限公司 | A kind of data processing method and device |
US9645651B2 (en) | 2013-09-24 | 2017-05-09 | Microsoft Technology Licensing, Llc | Presentation of a control interface on a touch-enabled device based on a motion or absence thereof |
US10021247B2 (en) | 2013-11-14 | 2018-07-10 | Wells Fargo Bank, N.A. | Call center interface |
US9864972B2 (en) | 2013-11-14 | 2018-01-09 | Wells Fargo Bank, N.A. | Vehicle interface |
US10037542B2 (en) | 2013-11-14 | 2018-07-31 | Wells Fargo Bank, N.A. | Automated teller machine (ATM) interface |
WO2015076695A1 (en) * | 2013-11-25 | 2015-05-28 | Yandex Llc | System, method and user interface for gesture-based scheduling of computer tasks |
JP2017505553A (en) * | 2013-11-29 | 2017-02-16 | インテル・コーポレーション | Camera control by face detection |
CN105993038A (en) * | 2014-02-07 | 2016-10-05 | 皇家飞利浦有限公司 | Method of operating a control system and control system therefore |
US10691332B2 (en) | 2014-02-28 | 2020-06-23 | Samsung Electronics Company, Ltd. | Text input on an interactive display |
CN106572810A (en) | 2014-03-24 | 2017-04-19 | 凯内蒂科尔股份有限公司 | Systems, methods, and devices for removing prospective motion correction from medical imaging scans |
US9684827B2 (en) * | 2014-03-26 | 2017-06-20 | Microsoft Technology Licensing, Llc | Eye gaze tracking based upon adaptive homography mapping |
US10481561B2 (en) | 2014-04-24 | 2019-11-19 | Vivint, Inc. | Managing home automation system based on behavior |
US10203665B2 (en) * | 2014-04-24 | 2019-02-12 | Vivint, Inc. | Managing home automation system based on behavior and user input |
CN104020878A (en) * | 2014-05-22 | 2014-09-03 | 小米科技有限责任公司 | Touch input control method and device |
US9696813B2 (en) * | 2015-05-27 | 2017-07-04 | Hsien-Hsiang Chiu | Gesture interface robot |
WO2016007192A1 (en) | 2014-07-10 | 2016-01-14 | Ge Intelligent Platforms, Inc. | Apparatus and method for electronic labeling of electronic equipment |
CN106714681A (en) | 2014-07-23 | 2017-05-24 | 凯内蒂科尔股份有限公司 | Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan |
US9594489B2 (en) | 2014-08-12 | 2017-03-14 | Microsoft Technology Licensing, Llc | Hover-based interaction with rendered content |
US20160085958A1 (en) * | 2014-09-22 | 2016-03-24 | Intel Corporation | Methods and apparatus for multi-factor user authentication with two dimensional cameras |
US10268277B2 (en) * | 2014-09-30 | 2019-04-23 | Hewlett-Packard Development Company, L.P. | Gesture based manipulation of three-dimensional images |
KR101556521B1 (en) * | 2014-10-06 | 2015-10-13 | 현대자동차주식회사 | Human Machine Interface apparatus, vehicle having the same and method for controlling the same |
US9946339B2 (en) * | 2014-10-08 | 2018-04-17 | Microsoft Technology Licensing, Llc | Gaze tracking through eyewear |
CA2965329C (en) * | 2014-10-23 | 2023-04-04 | Vivint, Inc. | Managing home automation system based on behavior and user input |
US10301801B2 (en) | 2014-12-18 | 2019-05-28 | Delta Faucet Company | Faucet including capacitive sensors for hands free fluid flow control |
US11078652B2 (en) | 2014-12-18 | 2021-08-03 | Delta Faucet Company | Faucet including capacitive sensors for hands free fluid flow control |
US9454235B2 (en) | 2014-12-26 | 2016-09-27 | Seungman KIM | Electronic apparatus having a sensing unit to input a user command and a method thereof |
US10310080B2 (en) * | 2015-02-25 | 2019-06-04 | The Boeing Company | Three dimensional manufacturing positioning system |
US10481696B2 (en) * | 2015-03-03 | 2019-11-19 | Nvidia Corporation | Radar based user interface |
US9594967B2 (en) | 2015-03-31 | 2017-03-14 | Google Inc. | Method and apparatus for identifying a person by measuring body part distances of the person |
CN107787497B (en) * | 2015-06-10 | 2021-06-22 | 维塔驰有限公司 | Method and apparatus for detecting gestures in a user-based spatial coordinate system |
US9943247B2 (en) | 2015-07-28 | 2018-04-17 | The University Of Hawai'i | Systems, devices, and methods for detecting false movements for motion correction during a medical imaging scan |
JP6650595B2 (en) * | 2015-09-24 | 2020-02-19 | パナソニックIpマネジメント株式会社 | Device control device, device control method, device control program, and recording medium |
US10716515B2 (en) | 2015-11-23 | 2020-07-21 | Kineticor, Inc. | Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan |
CN107924239B (en) * | 2016-02-23 | 2022-03-18 | 索尼公司 | Remote control system, remote control method, and recording medium |
US20180164895A1 (en) * | 2016-02-23 | 2018-06-14 | Sony Corporation | Remote control apparatus, remote control method, remote control system, and program |
US10845987B2 (en) * | 2016-05-03 | 2020-11-24 | Intelligent Platforms, Llc | System and method of using touch interaction based on location of touch on a touch screen |
US11079915B2 (en) | 2016-05-03 | 2021-08-03 | Intelligent Platforms, Llc | System and method of using multiple touch inputs for controller interaction in industrial control systems |
DE102016124906A1 (en) * | 2016-12-20 | 2017-11-30 | Miele & Cie. Kg | Method of controlling a floor care appliance and floor care appliance |
US11321951B1 (en) * | 2017-01-19 | 2022-05-03 | State Farm Mutual Automobile Insurance Company | Apparatuses, systems and methods for integrating vehicle operator gesture detection within geographic maps |
CN106919928A (en) * | 2017-03-08 | 2017-07-04 | 京东方科技集团股份有限公司 | gesture recognition system, method and display device |
TWI604332B (en) * | 2017-03-24 | 2017-11-01 | 緯創資通股份有限公司 | Method, system, and computer-readable recording medium for long-distance person identification |
RU2693197C2 (en) * | 2017-05-04 | 2019-07-01 | Федеральное государственное бюджетное образовательное учреждение высшего образования "Сибирский государственный университет телекоммуникаций и информатики" (СибГУТИ) | Universal operator intelligent 3-d interface |
KR102524586B1 (en) | 2018-04-30 | 2023-04-21 | 삼성전자주식회사 | Image display device and operating method for the same |
RU2717145C2 (en) * | 2018-07-23 | 2020-03-18 | Николай Дмитриевич Куликов | Method of inputting coordinates (versions), a capacitive touch screen (versions), a capacitive touch panel (versions) and an electric capacity converter for determining coordinates of a geometric center of a two-dimensional area (versions) |
RU2695053C1 (en) * | 2018-09-18 | 2019-07-18 | Общество С Ограниченной Ответственностью "Заботливый Город" | Method and device for control of three-dimensional objects in virtual space |
KR20200066962A (en) * | 2018-12-03 | 2020-06-11 | 삼성전자주식회사 | Electronic device and method for providing content based on the motion of the user |
EP3667460A1 (en) * | 2018-12-14 | 2020-06-17 | InterDigital CE Patent Holdings | Methods and apparatus for user -device interaction |
RU2737231C1 (en) * | 2020-03-27 | 2020-11-26 | Федеральное государственное бюджетное учреждение науки "Санкт-Петербургский Федеральный исследовательский центр Российской академии наук" (СПб ФИЦ РАН) | Method of multimodal contactless control of mobile information robot |
CN113269075A (en) * | 2021-05-19 | 2021-08-17 | 广州繁星互娱信息科技有限公司 | Gesture track recognition method and device, storage medium and electronic equipment |
US11556183B1 (en) * | 2021-09-30 | 2023-01-17 | Microsoft Technology Licensing, Llc | Techniques for generating data for an intelligent gesture detector |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6111580A (en) * | 1995-09-13 | 2000-08-29 | Kabushiki Kaisha Toshiba | Apparatus and method for controlling an electronic device with user action |
US20030113018A1 (en) * | 2001-07-18 | 2003-06-19 | Nefian Ara Victor | Dynamic gesture recognition from stereo sequences |
US20090140887A1 (en) * | 2007-11-29 | 2009-06-04 | Breed David S | Mapping Techniques Using Probe Vehicles |
US20090268945A1 (en) * | 2003-03-25 | 2009-10-29 | Microsoft Corporation | Architecture for controlling a computer using hand gestures |
US20110154266A1 (en) * | 2009-12-17 | 2011-06-23 | Microsoft Corporation | Camera navigation for presentations |
Family Cites Families (60)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4836670A (en) * | 1987-08-19 | 1989-06-06 | Center For Innovative Technology | Eye movement detector |
US6351273B1 (en) * | 1997-04-30 | 2002-02-26 | Jerome H. Lemelson | System and methods for controlling automatic scrolling of information on a display or screen |
KR100595922B1 (en) * | 1998-01-26 | 2006-07-05 | 웨인 웨스터만 | Method and apparatus for integrating manual input |
US7224526B2 (en) * | 1999-12-08 | 2007-05-29 | Neurok Llc | Three-dimensional free space image projection employing Fresnel lenses |
GB0004165D0 (en) | 2000-02-22 | 2000-04-12 | Digimask Limited | System for virtual three-dimensional object creation and use |
EP1311803B8 (en) * | 2000-08-24 | 2008-05-07 | VDO Automotive AG | Method and navigation device for querying target information and navigating within a map view |
US6678413B1 (en) * | 2000-11-24 | 2004-01-13 | Yiqing Liang | System and method for object identification and behavior characterization using video analysis |
US7340077B2 (en) | 2002-02-15 | 2008-03-04 | Canesta, Inc. | Gesture recognition system using depth perceptive sensors |
US7883415B2 (en) * | 2003-09-15 | 2011-02-08 | Sony Computer Entertainment Inc. | Method and apparatus for adjusting a view of a scene being displayed according to tracked head motion |
US8019121B2 (en) * | 2002-07-27 | 2011-09-13 | Sony Computer Entertainment Inc. | Method and system for processing intensity from input devices for interfacing with a computer program |
US7372977B2 (en) | 2003-05-29 | 2008-05-13 | Honda Motor Co., Ltd. | Visual tracking using depth data |
US7874917B2 (en) | 2003-09-15 | 2011-01-25 | Sony Computer Entertainment Inc. | Methods and systems for enabling depth and direction detection when interfacing with a computer program |
US7561143B1 (en) * | 2004-03-19 | 2009-07-14 | The University of the Arts | Using gaze actions to interact with a display |
US7893920B2 (en) | 2004-05-06 | 2011-02-22 | Alpine Electronics, Inc. | Operation input device and method of operation input |
JP5631535B2 (en) | 2005-02-08 | 2014-11-26 | オブロング・インダストリーズ・インコーポレーテッド | System and method for a gesture-based control system |
JP2008536196A (en) | 2005-02-14 | 2008-09-04 | ヒルクレスト・ラボラトリーズ・インコーポレイテッド | Method and system for enhancing television applications using 3D pointing |
US8313379B2 (en) | 2005-08-22 | 2012-11-20 | Nintendo Co., Ltd. | Video game system with wireless modular handheld controller |
US8094928B2 (en) | 2005-11-14 | 2012-01-10 | Microsoft Corporation | Stereo video for gaming |
US8549442B2 (en) * | 2005-12-12 | 2013-10-01 | Sony Computer Entertainment Inc. | Voice and video control of interactive electronically simulated environment |
TWI348639B (en) * | 2005-12-16 | 2011-09-11 | Ind Tech Res Inst | Motion recognition system and method for controlling electronic device |
RU2410259C2 (en) * | 2006-03-22 | 2011-01-27 | Фольксваген Аг | Interactive control device and method of operating interactive control device |
US8601379B2 (en) | 2006-05-07 | 2013-12-03 | Sony Computer Entertainment Inc. | Methods for interactive communications with real time effects and avatar environment interaction |
US8277316B2 (en) | 2006-09-14 | 2012-10-02 | Nintendo Co., Ltd. | Method and apparatus for using a common pointing input to control 3D viewpoint and object targeting |
US7775439B2 (en) | 2007-01-04 | 2010-08-17 | Fuji Xerox Co., Ltd. | Featured wands for camera calibration and as a gesture based 3D interface device |
WO2008087652A2 (en) | 2007-01-21 | 2008-07-24 | Prime Sense Ltd. | Depth mapping using multi-beam illumination |
US20090017910A1 (en) * | 2007-06-22 | 2009-01-15 | Broadcom Corporation | Position and motion tracking of an object |
WO2008120217A2 (en) | 2007-04-02 | 2008-10-09 | Prime Sense Ltd. | Depth mapping using projected patterns |
US8494252B2 (en) | 2007-06-19 | 2013-07-23 | Primesense Ltd. | Depth mapping using optical elements having non-uniform focal characteristics |
RU2382408C2 (en) * | 2007-09-13 | 2010-02-20 | Институт прикладной физики РАН | Method and system for identifying person from facial image |
US20090128555A1 (en) | 2007-11-05 | 2009-05-21 | Benman William J | System and method for creating and using live three-dimensional avatars and interworld operability |
US8542907B2 (en) | 2007-12-17 | 2013-09-24 | Sony Computer Entertainment America Llc | Dynamic three-dimensional object mapping for user-defined control device |
CA2615406A1 (en) | 2007-12-19 | 2009-06-19 | Inspeck Inc. | System and method for obtaining a live performance in a video game or movie involving massively 3d digitized human face and object |
US20090172606A1 (en) * | 2007-12-31 | 2009-07-02 | Motorola, Inc. | Method and apparatus for two-handed computer user interface with gesture recognition |
US8192285B2 (en) | 2008-02-11 | 2012-06-05 | Nintendo Co., Ltd | Method and apparatus for simulating games involving a ball |
US20110102570A1 (en) * | 2008-04-14 | 2011-05-05 | Saar Wilf | Vision based pointing device emulation |
JP2009258884A (en) * | 2008-04-15 | 2009-11-05 | Toyota Central R&D Labs Inc | User interface |
CN101344816B (en) * | 2008-08-15 | 2010-08-11 | 华南理工大学 | Human-machine interaction method and device based on sight tracing and gesture discriminating |
US20100079413A1 (en) * | 2008-09-29 | 2010-04-01 | Denso Corporation | Control device |
JP2010086336A (en) * | 2008-09-30 | 2010-04-15 | Fujitsu Ltd | Image control apparatus, image control program, and image control method |
WO2010045406A2 (en) * | 2008-10-15 | 2010-04-22 | The Regents Of The University Of California | Camera system with autonomous miniature camera and light source assembly and method for image enhancement |
US20100195867A1 (en) | 2009-01-30 | 2010-08-05 | Microsoft Corporation | Visual target tracking using model fitting and exemplar |
US20100199228A1 (en) | 2009-01-30 | 2010-08-05 | Microsoft Corporation | Gesture Keyboarding |
US8253746B2 (en) * | 2009-05-01 | 2012-08-28 | Microsoft Corporation | Determine intended motions |
US9176628B2 (en) * | 2009-07-23 | 2015-11-03 | Hewlett-Packard Development Company, L.P. | Display with an optical sensor |
US8502864B1 (en) * | 2009-07-28 | 2013-08-06 | Robert Watkins | Systems, devices, and/or methods for viewing images |
KR101596890B1 (en) | 2009-07-29 | 2016-03-07 | Samsung Electronics Co., Ltd. | Apparatus and method for navigation digital object using gaze information of user |
US8565479B2 (en) | 2009-08-13 | 2013-10-22 | Primesense Ltd. | Extraction of skeletons from 3D maps |
GB2483168B (en) | 2009-10-13 | 2013-06-12 | Pointgrab Ltd | Computer vision gesture based control of a device |
KR20110071213A (en) | 2009-12-21 | 2011-06-29 | Electronics and Telecommunications Research Institute (ETRI) | Apparatus and method for 3d face avatar reconstruction using stereo vision and face detection unit |
US20110216059A1 (en) | 2010-03-03 | 2011-09-08 | Raytheon Company | Systems and methods for generating real-time three-dimensional graphics in an area of interest |
US8351651B2 (en) | 2010-04-26 | 2013-01-08 | Microsoft Corporation | Hand-location post-process refinement in a tracking system |
US20110289455A1 (en) | 2010-05-18 | 2011-11-24 | Microsoft Corporation | Gestures And Gesture Recognition For Manipulating A User-Interface |
US20110296333A1 (en) * | 2010-05-25 | 2011-12-01 | Bateman Steven S | User interaction gestures with virtual keyboard |
US20110292036A1 (en) | 2010-05-31 | 2011-12-01 | Primesense Ltd. | Depth sensor with application interface |
US20120200600A1 (en) * | 2010-06-23 | 2012-08-09 | Kent Demaine | Head and arm detection for virtual immersion systems and methods |
US8593375B2 (en) * | 2010-07-23 | 2013-11-26 | Gregory A Maltz | Eye gaze user interface and method |
US20120056982A1 (en) * | 2010-09-08 | 2012-03-08 | Microsoft Corporation | Depth camera based on structured light and stereo vision |
US9349040B2 (en) | 2010-11-19 | 2016-05-24 | Microsoft Technology Licensing, Llc | Bi-modal depth-image analysis |
US9008904B2 (en) * | 2010-12-30 | 2015-04-14 | GM Global Technology Operations LLC | Graphical vehicle command system for autonomous vehicles on full windshield head-up display |
JP6126076B2 (en) * | 2011-03-29 | 2017-05-10 | Qualcomm Incorporated | A system for rendering a shared digital interface for each user's perspective |
2011
- 2011-07-04 RU RU2011127116/08A patent/RU2455676C2/en active
2012
- 2012-05-23 US US13/478,378 patent/US8823642B2/en active Active
- 2012-05-23 US US13/478,457 patent/US20130010207A1/en not_active Abandoned
- 2012-07-04 US US13/541,684 patent/US20130010071A1/en not_active Abandoned
- 2012-07-04 US US13/541,681 patent/US8896522B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6111580A (en) * | 1995-09-13 | 2000-08-29 | Kabushiki Kaisha Toshiba | Apparatus and method for controlling an electronic device with user action |
US20030113018A1 (en) * | 2001-07-18 | 2003-06-19 | Nefian Ara Victor | Dynamic gesture recognition from stereo sequences |
US20090268945A1 (en) * | 2003-03-25 | 2009-10-29 | Microsoft Corporation | Architecture for controlling a computer using hand gestures |
US20090140887A1 (en) * | 2007-11-29 | 2009-06-04 | Breed David S | Mapping Techniques Using Probe Vehicles |
US20110154266A1 (en) * | 2009-12-17 | 2011-06-23 | Microsoft Corporation | Camera navigation for presentations |
Cited By (60)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9002714B2 (en) | 2011-08-05 | 2015-04-07 | Samsung Electronics Co., Ltd. | Method for controlling electronic apparatus based on voice recognition and motion recognition, and electronic apparatus applying the same |
US9733895B2 (en) | 2011-08-05 | 2017-08-15 | Samsung Electronics Co., Ltd. | Method for controlling electronic apparatus based on voice recognition and motion recognition, and electronic apparatus applying the same |
US20130033649A1 (en) * | 2011-08-05 | 2013-02-07 | Samsung Electronics Co., Ltd. | Method for controlling electronic apparatus based on motion recognition, and electronic apparatus applying the same |
US11065532B2 (en) | 2012-06-04 | 2021-07-20 | Sony Interactive Entertainment Inc. | Split-screen presentation based on user location and controller location |
US20130324244A1 (en) * | 2012-06-04 | 2013-12-05 | Sony Computer Entertainment Inc. | Managing controller pairing in a multiplayer game |
US10150028B2 (en) * | 2012-06-04 | 2018-12-11 | Sony Interactive Entertainment Inc. | Managing controller pairing in a multiplayer game |
US9724597B2 (en) | 2012-06-04 | 2017-08-08 | Sony Interactive Entertainment Inc. | Multi-image interactive gaming device |
US10315105B2 (en) | 2012-06-04 | 2019-06-11 | Sony Interactive Entertainment Inc. | Multi-image interactive gaming device |
WO2015088141A1 (en) * | 2013-12-11 | 2015-06-18 | Lg Electronics Inc. | Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances |
US10269344B2 (en) | 2013-12-11 | 2019-04-23 | Lg Electronics Inc. | Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances |
WO2015130718A1 (en) * | 2014-02-27 | 2015-09-03 | Microsoft Technology Licensing, Llc | Tracking objects during processes |
US9911351B2 (en) | 2014-02-27 | 2018-03-06 | Microsoft Technology Licensing, Llc | Tracking objects during processes |
US20150279369A1 (en) * | 2014-03-27 | 2015-10-01 | Samsung Electronics Co., Ltd. | Display apparatus and user interaction method thereof |
WO2016003100A1 (en) * | 2014-06-30 | 2016-01-07 | Alticast Corporation | Method for displaying information and displaying device thereof |
US9456202B2 (en) * | 2014-07-24 | 2016-09-27 | Eys3D Microelectronics, Co. | Attachable three-dimensional scan module |
US20160029009A1 (en) * | 2014-07-24 | 2016-01-28 | Etron Technology, Inc. | Attachable three-dimensional scan module |
US20160088804A1 (en) * | 2014-09-29 | 2016-03-31 | King Abdullah University Of Science And Technology | Laser-based agriculture system |
US11429345B2 (en) * | 2015-03-17 | 2022-08-30 | Amazon Technologies, Inc. | Remote execution of secondary-device drivers |
US10453461B1 (en) * | 2015-03-17 | 2019-10-22 | Amazon Technologies, Inc. | Remote execution of secondary-device drivers |
US10031722B1 (en) * | 2015-03-17 | 2018-07-24 | Amazon Technologies, Inc. | Grouping devices for voice control |
US11422772B1 (en) * | 2015-03-17 | 2022-08-23 | Amazon Technologies, Inc. | Creating scenes from voice-controllable devices |
US20210326103A1 (en) * | 2015-03-17 | 2021-10-21 | Amazon Technologies, Inc. | Grouping Devices for Voice Control |
US9984686B1 (en) * | 2015-03-17 | 2018-05-29 | Amazon Technologies, Inc. | Mapping device capabilities to a predefined set |
US10976996B1 (en) * | 2015-03-17 | 2021-04-13 | Amazon Technologies, Inc. | Grouping devices for voice control |
US20160310838A1 (en) * | 2015-04-27 | 2016-10-27 | David I. Poisner | Magic wand methods, apparatuses and systems |
US9888090B2 (en) * | 2015-04-27 | 2018-02-06 | Intel Corporation | Magic wand methods, apparatuses and systems |
US20230024006A1 (en) * | 2015-06-12 | 2023-01-26 | Research & Business Foundation Sungkyunkwan University | Embedded system, fast structured light based 3d camera system and method for obtaining 3d images using the same |
US10655951B1 (en) | 2015-06-25 | 2020-05-19 | Amazon Technologies, Inc. | Determining relative positions of user devices |
US11703320B2 (en) | 2015-06-25 | 2023-07-18 | Amazon Technologies, Inc. | Determining relative positions of user devices |
US11340566B1 (en) | 2015-06-30 | 2022-05-24 | Amazon Technologies, Inc. | Interoperability of secondary-device hubs |
US11809150B1 (en) | 2015-06-30 | 2023-11-07 | Amazon Technologies, Inc. | Interoperability of secondary-device hubs |
US10365620B1 (en) | 2015-06-30 | 2019-07-30 | Amazon Technologies, Inc. | Interoperability of secondary-device hubs |
US9692756B2 (en) | 2015-09-24 | 2017-06-27 | Intel Corporation | Magic wand methods, apparatuses and systems for authenticating a user of a wand |
US10328342B2 (en) | 2015-09-24 | 2019-06-25 | Intel Corporation | Magic wand methods, apparatuses and systems for defining, initiating, and conducting quests |
US10122999B2 (en) * | 2015-10-27 | 2018-11-06 | Samsung Electronics Co., Ltd. | Image generating method and image generating apparatus |
US9408452B1 (en) | 2015-11-19 | 2016-08-09 | Khaled A. M. A. A. Al-Khulaifi | Robotic hair dryer holder system with tracking |
US10796694B2 (en) * | 2016-04-29 | 2020-10-06 | VTouch Co., Ltd. | Optimum control method based on multi-mode command of operation-voice, and electronic device to which same is applied |
US20190019515A1 (en) * | 2016-04-29 | 2019-01-17 | VTouch Co., Ltd. | Optimum control method based on multi-mode command of operation-voice, and electronic device to which same is applied |
US20190308326A1 (en) * | 2016-09-28 | 2019-10-10 | Cognex Corporation | Simultaneous kinematic and hand-eye calibration |
US10864639B2 (en) * | 2016-09-28 | 2020-12-15 | Cognex Corporation | Simultaneous kinematic and hand-eye calibration |
WO2018084576A1 (en) * | 2016-11-03 | 2018-05-11 | Samsung Electronics Co., Ltd. | Electronic device and controlling method thereof |
US11908465B2 (en) | 2016-11-03 | 2024-02-20 | Samsung Electronics Co., Ltd. | Electronic device and controlling method thereof |
US10679618B2 (en) | 2016-11-03 | 2020-06-09 | Samsung Electronics Co., Ltd. | Electronic device and controlling method thereof |
US11792189B1 (en) * | 2017-01-09 | 2023-10-17 | United Services Automobile Association (Usaa) | Systems and methods for authenticating a user using an image capture device |
US11095472B2 (en) | 2017-02-24 | 2021-08-17 | Samsung Electronics Co., Ltd. | Vision-based object recognition device and method for controlling the same |
US10644898B2 (en) * | 2017-02-24 | 2020-05-05 | Samsung Electronics Co., Ltd. | Vision-based object recognition device and method for controlling the same |
EP3580692A4 (en) * | 2017-02-24 | 2020-02-26 | Samsung Electronics Co., Ltd. | Vision-based object recognition device and method for controlling the same |
US11290518B2 (en) * | 2017-09-27 | 2022-03-29 | Qualcomm Incorporated | Wireless control of remote devices through intention codes over a wireless connection |
US11481036B2 (en) * | 2018-04-13 | 2022-10-25 | Beijing Jingdong Shangke Information Technology Co., Ltd. | Method, system for determining electronic device, computer system and readable storage medium |
US11521038B2 (en) | 2018-07-19 | 2022-12-06 | Samsung Electronics Co., Ltd. | Electronic apparatus and control method thereof |
US11497961B2 (en) | 2019-03-05 | 2022-11-15 | Physmodo, Inc. | System and method for human motion detection and tracking |
US11547324B2 (en) | 2019-03-05 | 2023-01-10 | Physmodo, Inc. | System and method for human motion detection and tracking |
US11771327B2 (en) | 2019-03-05 | 2023-10-03 | Physmodo, Inc. | System and method for human motion detection and tracking |
US11331006B2 (en) | 2019-03-05 | 2022-05-17 | Physmodo, Inc. | System and method for human motion detection and tracking |
US11826140B2 (en) | 2019-03-05 | 2023-11-28 | Physmodo, Inc. | System and method for human motion detection and tracking |
KR102236727B1 (en) | 2019-05-10 | 2021-04-06 | Enplug Co., Ltd. | Health care system and method using lighting device based on IoT |
KR20200130006A (en) * | 2019-05-10 | 2020-11-18 | Enplug Co., Ltd. | Health care system and method using lighting device based on IoT |
US11732994B1 (en) | 2020-01-21 | 2023-08-22 | Ibrahim Pasha | Laser tag mobile station apparatus system, method and computer program product |
US20220253153A1 (en) * | 2021-02-10 | 2022-08-11 | Universal City Studios Llc | Interactive pepper's ghost effect system |
US11960648B2 (en) * | 2022-04-12 | 2024-04-16 | Robert Bosch GmbH | Method for determining a current viewing direction of a user of data glasses with a virtual retina display and data glasses |
Also Published As
Publication number | Publication date |
---|---|
US20130009865A1 (en) | 2013-01-10 |
RU2455676C2 (en) | 2012-07-10 |
US8896522B2 (en) | 2014-11-25 |
RU2011127116A (en) | 2011-10-10 |
US8823642B2 (en) | 2014-09-02 |
US20130009861A1 (en) | 2013-01-10 |
US20130010071A1 (en) | 2013-01-10 |
Similar Documents
Publication | Title |
---|---|
US20130010207A1 (en) | Gesture based interactive control of electronic equipment |
US11392212B2 (en) | Systems and methods of creating a realistic displacement of a virtual object in virtual reality/augmented reality environments | |
US10861242B2 (en) | Transmodal input fusion for a wearable system | |
US20220334646A1 (en) | Systems and methods for extensions to alternative control of touch-based devices | |
US9921659B2 (en) | Gesture recognition for device input | |
US20130044912A1 (en) | Use of association of an object detected in an image to obtain information to display to a user | |
JP5755712B2 (en) | Improved detection of wave engagement gestures | |
US11383166B2 (en) | Interaction method of application scene, mobile terminal, and storage medium | |
US9658695B2 (en) | Systems and methods for alternative control of touch-based devices | |
WO2018098861A1 (en) | Gesture recognition method and device for virtual reality apparatus, and virtual reality apparatus | |
US20150177842A1 (en) | 3D Gesture Based User Authorization and Device Control Methods | |
US20180310171A1 (en) | Interactive challenge for accessing a resource | |
WO2019150269A1 (en) | Method and system for 3d graphical authentication on electronic devices | |
EP2727339A1 (en) | User identification by gesture recognition | |
KR20210023680A (en) | Content creation in augmented reality environment | |
US10937243B2 (en) | Real-world object interface for virtual, augmented, and mixed reality (xR) applications | |
KR20210033394A (en) | Electronic apparatus and controlling method thereof | |
Elshenaway et al. | On-air hand-drawn doodles for IoT devices authentication during COVID-19 | |
WO2013176574A1 (en) | Methods and systems for mapping pointing device on depth map |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: 3DIVI, RUSSIAN FEDERATION Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:VALIK, ANDREY;ZAITSEV, PAVEL;MOROZOV, DMITRY;REEL/FRAME:028256/0554 Effective date: 20120510 |
|
AS | Assignment |
Owner name: 3DIVI COMPANY, RUSSIAN FEDERATION Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE FROM 3DIVI TO 3DIVI COMPANY PREVIOUSLY RECORDED ON REEL 028256 FRAME 0554. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT OF PATENT APPLICATION NO. 13478457;ASSIGNORS:VALIK, ANDREY;ZAITSEV, PAVEL;MOROZOV, DMITRY;REEL/FRAME:030667/0967 Effective date: 20130514 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |