`DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
The claim is objected to because of the following informality: the claim ends with a comma instead of a period. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-15 are rejected under 35 U.S.C. 103 as being unpatentable over Gibbs et al. (U.S. Patent Application Publication 2016/0341959 A1, hereafter ‘959) in view of Holmes (U.S. Patent Application Publication, hereafter ‘287).
Regarding claim 1, Gibbs teaches a system (‘959; fig. 1; ¶ 0031, The present invention relates to methods, devices, systems and arrangements for the processing and generation of images for head mounted display (HMD) devices. Various implementations allow one user of an HMD device to view the facial expressions of another user, even when all of the users are wearing HMD devices that cover a large portion of their faces) for providing a simulation-based virtual reality (‘959; fig. 1; ¶ 0039, in various embodiments, the image of the user 105a wearing the HMD device 110a (e.g., as shown in FIG. 2A) is “patched” such that the HMD device 110a is removed. The patch image fills in the space covered by the HMD device and simulates the facial features and movements of the user (e.g., eyes, gaze, eyelid or eyebrow movements, etc.) that would otherwise be hidden by the HMD device 110a. In various embodiments, the patch image is based on a rendered 3D model of corresponding portions of the user's face and/or reflects in real time the facial movements and expressions of the user 105a. This allows a viewer to see a simulation of the entire face of the user e.g., similar to the face 210 illustrated in FIG. 2B.) to a user of the system (‘959; fig. 1, elements 105a and 105b; ¶ 0033, An example of a system 100 for users of HMD devices is illustrated in FIG. 1. The communication system 100 includes two users, user 105a and user 105b. Each user is wearing a HMD device 110a/110b and is situated in a room (rooms 130a and 130b)), comprising an illumination unit (‘959; fig. 19, element 1925, light emitter unit; ¶ 0084, The light emitter unit 1925 includes one or more light emitting devices. Any suitable type of light emitting device or light emitting technology may be used (e.g., an infrared light, an LED light) and each light emitting device may be positioned on any surface or part of the HMD device 1600.) designed to variably illuminate at least a part of a face of the user, in particular a nose of the user (‘959; figs. 16 and 17; ¶ 0084, supra; one light source on either side of the nose), and a control unit (‘959; fig. 19, element 1000, HMD device 1600 control processing unit), wherein the control unit is designed to receive information from a computing unit (‘959; fig. 20, element 2000, image processing device control module 2000; ¶ 0097, The imaging processing device 2000 includes a processor unit 2005 that includes one or more processors, a storage unit 2010, an image processing device control module 2020, an optional sensor unit 2025 and a network interface unit 2015), wherein the computing unit is designed for the purpose of creating a model of the face (‘959; ¶ 0100, The image processing device control module 2020 (a function block of computing unit 2000 of fig. 20) is any hardware or software arranged to perform any of the operations or methods (e.g., steps 305-310, steps 320-385 of FIGS. 3A and 3B) described in this application that pertain to the camera device 115a.
In various embodiments, the image processing device control module 2020 is arranged to cause the device to obtain a 3D model, a background image, HMD device and/or camera device tracking data, render a model, obtain a patch image, merge the patch image and a facial/background image, merge the resulting composite image with a background patch and transmit the composite image (e.g., steps 305-380 of FIGS. 3A and 3B.)), in particular a three-dimensional model (3D model) (‘959; ¶ 0100), from three-dimensional information relating to the face (‘959; fig. 4; ¶ 0043; The 3D model 405 may be generated in any suitable manner using any known software or hardware. In various embodiments, the 3D model 405 is generated using a camera or any other suitable scanning device (e.g., camera device 115a, a 3D scanner, etc.) Generally, the scanning device is used to capture the face of the user when the user is not wearing the HMD device 110a (e.g., before step 313, when the user wears the HMD device 110a and steps in front of the camera device 115a to communicate with another user.) In other embodiments, the 3D model 405 is predefined. That is, the model 405 may be generic and/or may not be specifically tailored to or based on the actual face of the user 105a. Such a model is suitable for applications in which the user does not wish another user to see their real face, but perhaps only a face of a virtual reality avatar or character), in particular optically recorded information (‘959; ¶ 0066, in FIG. 9A, much of the face of the user 105a is covered with the HMD device 110a. FIG. 9B represents an image of the corresponding rendered model 905, which due to the real time adjustments made in steps 325 and 340, closely resembles the actual face of the user 105a in the image 410 in terms of pose, orientation and light levels. In contrast to the actual face of the user 105a as seen by the camera device 115a in image 410 or 505 - optically recorded information, the face of the rendered model 905 is not covered with an HMD device 110a. Additionally, due in part to adjustments made to the model in step 325, the facial features (e.g., eyes, eyebrows, eyebrow position, eyelids, gaze, eye rotation, etc.) of the user 105a which are hidden in image 410 (FIG. 9A) by the HMD device 110a are simulated in the rendered model 905 of FIG. 9B.), creating the simulation (‘959; ¶ 0039, supra), wherein the model of the face is taken into account when creating the simulation (‘959; ¶ 0041, Initially, at step 305, a three dimensional (3D) model 405 of the face of the user 105a is obtained. Generally, the model 405 may be any software model, graphical model and/or mathematical/virtual representation of any part of the user 105a (e.g., the face, head, eyes, eyebrows, any part of the body, etc.) A simplified example is illustrated in FIG. 4. In this example, the 3D model indicates, shows and/or represents the head, face and various facial features of the user 105a (e.g., surface and shape of the head and face, eyes, nose, mouth, skin texture, color, light level, shading, etc.). In various implementations, the model is fully three dimensional (i.e., the model can be rotated or tilted in three dimensions) and/or may be modeled using multiple polygons, a mesh, vertices, edges, etc. The software for the 3D model 405 may also be capable of automatically adjusting features in the model 405 based on input. By way of example, when input is received indicating that the lips should smile, other features in the 3D model 405 may also automatically move in response (e.g., there may be slight movements in the nose or in the area around the lips, etc.), thus simulating the many tiny movements that naturally occur in a human face when one part of the face changes or when a particular expression is desired. The 3D model 405 may simulate the entire face, head or body, or any portion of the head or body of the user 105a.), integrating the model of the face into the simulation, calculating an illumination of the model of the face in the simulation (‘959; ¶ 0041, supra), and transmitting information to the control unit (‘959; fig. 1, network 125 connecting 110a and 115a; ¶ 0033, An example of a system 100 for users of HMD devices is illustrated in FIG. 1. The communication system 100 includes two users, user 105a and user 105b. Each user is wearing a HMD device 110a/110b and is situated in a room (rooms 130a and 130b). In each room there is also a camera device (camera devices 115a and 115b) that is positioned a short distance away from its respective user. The HMD devices and/or camera devices are coupled with one another using a network 125.
In some embodiments, there is a server 120 that helps process media and manage communications between the HMD devices.), wherein the control unit (‘959; fig. 19, element 1000, HMD device 1600 control processing unit) is designed to receive information from the computing unit (‘959; fig. 20, element 2000, image processing device control module 2000) regarding the illumination of the model of the face in the simulation (‘959; ¶ 0056, At step 330, the camera device 115a tracks various features related to the face of the user 105a or the surrounding environment. In some embodiments, for example, the camera device 115a measures or tracks the amount of ambient light (e.g, in the room 130a) using a light sensor. Additionally or alternatively, the camera device 115a detects the amount of light on a portion of the body or face of the user 105a. Any changes in this light level are recorded by the camera device 115a over time; ¶ 0086; The sensor unit 1930 includes one or more sensors that are arranged to help track facial movements of a user who is wearing the HMD device. Any suitable sensor technology may be used, including but not limited to tracking cameras, pressure sensors, temperature sensors, mechanical sensors, motion sensors, light sensors and electrical sensors…positioned on the interior of the HMD), and does not teach wherein the control unit controls the illumination unit based on the information regarding the illumination of the model of the face in the simulation such that an illumination of the face by the illumination unit corresponds to the illumination of the model of the face in the simulation.
Holmes, which addresses the same problem of controlling the illumination of a face for video capture or imaging, teaches wherein the control unit controls the illumination unit (‘287; ¶ 0024, For example an LED light ring may be provided around a display along the border of a smartphone. Such additional lighting modules may be integrated into the system such that the lighting system is able to turn on and off additional lighting modules as needed to replace or complement the lighting effects on the display. Control of the additional lighting modules may be manual or automated by the lighting system. Similar to the control of the lighting effects, the control of the additional lighting may be automated based on an analysis of the ambient lighting conditions and/or an analysis of the captured video. The integration of additional lighting modules provides a greater range of options for the system to illuminate the users. The lighting system may be manually controlled by the user or may be automatically controlled by the system to: (i) provide no lighting; (ii) provide lighting using only the device display; (iii) provide lighting using only the lighting modules; or (iv) provide lighting using both the device display and the lighting modules) based on the information regarding the illumination of the model of the face in the simulation such that an illumination of the face by the illumination unit corresponds to the illumination of the model of the face in the simulation (‘287; ¶ 0031, As described herein, in some embodiments, the lighting system will automatically suggest or automatically apply one or more lighting effects in response to the ambient environment. For example, when the device determines that the quality of the lighting in the content to be captured would be improved by providing one or more lighting effects (for example, by observing the ambient lighting conditions using the front facing camera), the lighting system may present the user with an option to activate one or more lighting elements or to authorize the lighting elements to be automatically activated by the lighting system), for the benefit of improving the quality of video images captured for the system for providing a simulation-based virtual reality to a user.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the technique of controlling the illumination unit based on the information regarding the illumination of the model of the face in the simulation, such that an illumination of the face by the illumination unit corresponds to the illumination of the model of the face in the simulation, as taught by Holmes, with the system for image processing for head mounted display devices providing a simulation-based virtual reality to a user as taught by Gibbs, for the benefit of improving the quality of the video images captured for the system.
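By way of illustration only, the combined teaching applied to claim 1 can be sketched in code: the computing unit derives per-lamp illumination parameters from the illumination of the face model in the simulation, and the control unit drives the HMD emitters to match. This is a minimal sketch; the class and function names, the lux-to-intensity mapping, and the two-lamp layout are hypothetical and appear in neither Gibbs nor Holmes.

```python
# Illustrative sketch only -- not code from either reference. It models the
# claimed control relationship: the computing unit renders the face model in
# the simulation, derives per-lamp illumination parameters, and the control
# unit drives the HMD's light emitters to match. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class LampCommand:
    lamp_id: int          # which light emitter on the HMD (e.g., left/right of nose)
    intensity: float      # 0.0-1.0, proportional to the simulated illuminance
    color_temp_k: float   # correlated color temperature of the simulated light

def compute_lamp_commands(simulated_illumination: dict[int, tuple[float, float]]) -> list[LampCommand]:
    """Map the illumination computed on the 3D face model to lamp commands."""
    return [
        LampCommand(lamp_id=i, intensity=min(max(lux / 1000.0, 0.0), 1.0), color_temp_k=cct)
        for i, (lux, cct) in simulated_illumination.items()
    ]

class ControlUnit:
    """Receives commands from the computing unit and drives the emitters."""
    def apply(self, commands: list[LampCommand]) -> None:
        for cmd in commands:
            # A real HMD driver call would go here; printed for illustration.
            print(f"lamp {cmd.lamp_id}: intensity={cmd.intensity:.2f}, cct={cmd.color_temp_k:.0f}K")

# Example: the simulation reports 600 lux at 3200 K on the left of the nose,
# 150 lux at 6500 K on the right.
ControlUnit().apply(compute_lamp_commands({0: (600.0, 3200.0), 1: (150.0, 6500.0)}))
```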
Regarding claim 2, Gibbs and Holmes teach the system according to claim 1 and further teach wherein the system comprises the computing unit (‘959; fig. 20, element 2000, image processing device control module 2000), comprises a first recording unit, in particular a camera (‘959; ¶ 0034, In this example, the camera device 115a directs a camera at the face of the user 105a in room 130a. The camera device 115a is arranged to record the facial expression and movements of the user 105a, as well as possibly the background of the room 130a behind the user 105a. In some implementations, the camera device 115a captures a live video stream of the user 105a), wherein the first recording unit is designed to record the three-dimensional information relating to the face (‘959; ¶ 0034, supra), in particular optically (‘959; ¶ 0034, supra), and/or comprises a visual output device (‘959; ¶ 0003, In the last several years, virtual reality applications have received increasing attention. Virtual reality applications involve technologies that immerse the user in a different environment. In some applications, the user wears a head mounted display (HMD) device. Examples of such devices include the Samsung Gear VR® and the Oculus Rift®. Typically, when users wear such devices, their eyes are completely covered. The HMD device includes an interior screen that displays images and animation for the user. The images give the user the impression that they have been transported into a virtual environment.), wherein the visual output device is designed to display the virtual reality to the user, and/or has a display unit (‘959; ¶ 0089, The display unit 1920 is any hardware or software used to display images, video or graphics to a user of the HMD device 1600. An example position for the display unit is illustrated in FIG. 17. In the illustrated embodiment, when the user wears the HMD device 1600, the display unit 1920 is positioned directly over the eyes of the user. The display unit 1920 can use any suitable display and/or projection technology to show images to the user. In various embodiments, the display unit 1920 is arranged to provide the user with a virtual reality experience. That is, the HMD device 1600 completely covers the eyes of the user, therefore removing the user's ability to see his physical surroundings. Once the display unit 1920 is activated, the only thing the user is able to see are the graphics and images that the display unit 1920 generates. This can give the user the impression that he or she is an entirely different, virtual environment. In some designs, when the user turns his or her head, sensors in the HMD device 1600 detect the motion and cause the images to shift to give an impression that the user is actually physically in the simulated environment and can explore it just as the user would any real, physical environment (i.e., by turning one's head, peering around a corner, looking up and down an object, etc.) The display unit 1620 is further arranged to display the face of another HMD device user in real time, with (partially) simulated facial expressions, as described in step 385 of FIG. 3B. Any known virtual reality display technology may be used in the display unit 1920.), wherein the display unit is designed as a screen, wherein the simulation can be displayed on the screen (‘959; ¶ 0089, supra).
Regarding claim 3, Gibbs and Holmes teach the system according to claim 1 and further teach wherein the computing unit uses setting parameters, in particular setting parameters such as skin color, accessories such as glasses and/or jewelry, light preferences, in particular light intensity, light temperature, light color and/or illuminance, of the illumination in the simulation, in particular of the illumination of the model of the face in the simulation (‘959; ¶ 0041, supra; note, e.g., skin texture, color, light level, shading), and/or of the illumination of the face by the illumination unit, which can be provided by a manual input of the user and/or by an input of a provider of the simulation, in particular via a programming interface, for the creation of the model of the face (‘959; fig. 20, element 2000, image processing device control module 2000) regarding the illumination of the model of the face in the simulation (‘959; ¶ 0056; ¶ 0086, supra), the creation of the simulation, the integration of the model of the face into the simulation, and/or the calculation of the illumination of the model of the face in the simulation (‘959; ¶ 0041, supra).
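The setting parameters recited in claim 3 can be pictured as a simple configuration record consumed when the face model is created and when its illumination in the simulation is calculated. The sketch below assumes hypothetical field names and defaults that are not found in ‘959.

```python
# Hypothetical sketch of the claim 3 "setting parameters" as a plain config
# record consumed when the face model is built and when its illumination in
# the simulation is calculated. Field names are illustrative, not from '959.
from dataclasses import dataclass, field

@dataclass
class IlluminationSettings:
    skin_tone: str = "medium"           # used when building/shading the face model
    accessories: list[str] = field(default_factory=list)  # e.g., ["glasses", "jewelry"]
    light_intensity: float = 1.0        # relative intensity preference
    color_temp_k: float = 5000.0        # light temperature preference
    light_color: tuple[int, int, int] = (255, 255, 255)   # RGB light color
    illuminance_lux: float = 500.0      # target illuminance on the face

# Either the user (manual input) or the simulation provider (e.g., via a
# programming interface) supplies the values:
user_prefs = IlluminationSettings(skin_tone="light", accessories=["glasses"], color_temp_k=3200.0)
print(user_prefs)
```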
Regarding claim 4, Gibbs and Holmes teach the system according to claim 2 and further teach wherein the first recording unit is arranged on the visual output device (‘959; fig. 17, elements 1605), and/or records movements of the face as updated three-dimensional information relating to the face (‘959; ¶ 0051-0052; records 3D movements of the face) and the computing unit adapts the model of the face based on the updated three-dimensional information relating to the face (‘959; ¶ 0039, supra).
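The update loop recited in claim 4 amounts to blending newly tracked facial measurements into the current model state. The sketch below is a hypothetical illustration of that idea, not code from ‘959; the feature names and smoothing factor are assumptions.

```python
# Hedged sketch of the claim 4 update loop: the recording unit reports updated
# 3D facial measurements and the computing unit adjusts the face model so the
# simulation reflects the movements in (near) real time. Names are hypothetical.
def update_face_model(model: dict[str, float], tracked: dict[str, float], smoothing: float = 0.5) -> dict[str, float]:
    """Blend newly tracked feature values into the current model state."""
    return {
        feature: (1.0 - smoothing) * model.get(feature, 0.0) + smoothing * value
        for feature, value in tracked.items()
    }

model = {"brow_raise": 0.1, "eyelid_open": 0.9}
model = update_face_model(model, {"brow_raise": 0.6, "eyelid_open": 0.8})
print(model)  # {'brow_raise': 0.35, 'eyelid_open': 0.85}
```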
Regarding claim 5, Gibbs and Holmes teach the system according to claim 2 and further teach wherein the visual output device comprises a first motion sensor adapted to detect a head movement of the user (‘959; ¶ 0086, supra; ¶ 0046, The image 505 thus portrays the face 410 of the user, the HMD device 110a, parts of the user's body as well as portions of the background.).
Regarding claim 6, Gibbs and Holmes teach the system according to claim 1 and further teach wherein the system comprises at least one movement unit, wherein the at least one movement unit is arranged on a part of the user's body, in particular on a hand (‘959; ¶ 0086, supra), and has a second motion sensor which is designed to detect a movement of said body part (‘959; ¶ 0086; ¶ 0046, supra).
Regarding claim 7, Gibbs and Holmes teach the system according to claim 5 and further teach wherein the visual output device and the at least one movement unit each have at least one position sensor, wherein the computing unit determines a position of the visual output device and, in this way, of the user's face relative to a position of the at least one movement unit (‘959; ¶ 0086, The sensor unit 1930 includes one or more sensors that are arranged to help track facial movements of a user who is wearing the HMD device. Any suitable sensor technology may be used, including but not limited to tracking cameras, pressure sensors, temperature sensors, mechanical sensors, motion sensors, light sensors and electrical sensors. In various embodiments, for example, there are one or more electromyography (EMG), and electrooculography (EOG) sensors positioned on the interior/back side of the HMD device. An example set of EMG sensors 1605 and EOG sensors is illustrated in FIG. 17. In this example, the EMG sensors 1605 and the EOG sensors 1605a are situated on or within a cushion on the HMD device 1600. When the HMD device 1600 is worn, the cushion and the sensors 1605 are pressed flush against the eyebrows of the user. When the eyebrows move, the sensors 1605 detect the electrical impulses generated by the muscle movement. The HMD device 1600 then transmits this sensor data to the camera device 115a for processing, as described in steps 315, 320 and 325 of FIG. 3A. Such EMG sensors may be arranged to detect motion in other parts of the face of a user as well).
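The geometry recited in claim 7 reduces to subtracting two reported positions. The following minimal sketch, with hypothetical coordinates, illustrates how a computing unit could derive the position of the movement unit relative to the face; neither reference discloses this exact computation.

```python
# Minimal sketch of the claim 7 geometry: each device reports a position, and
# the computing unit derives the movement unit's (e.g., a hand's) position
# relative to the face. Purely illustrative; coordinates are hypothetical.
import math

def relative_position(face_pos: tuple[float, float, float],
                      hand_pos: tuple[float, float, float]):
    """Return the face->hand vector and its length (distance in meters)."""
    vec = tuple(h - f for f, h in zip(face_pos, hand_pos))
    dist = math.sqrt(sum(c * c for c in vec))
    return vec, dist

vec, dist = relative_position((0.0, 1.7, 0.0), (0.3, 1.2, 0.4))
print(f"movement unit is {dist:.2f} m from the face, offset {vec}")
```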
Regarding claim 8, Gibbs and Holmes teach the system according to claim 5 and further teach wherein the computing unit uses the data provided by the first motion sensor, the second motion sensor and/or the respective position sensors for creating the simulation (‘959; ¶ 0086, supra), and/or calculating the illumination of the model of the face in the simulation (‘959; ¶ 0058, Of course, if the camera device 115a is the only device that generates tracking/light sensing data, the camera device 115a can obtain the data without requiring any transfer of the data through a network. In still other embodiments, the camera device 115a is replaced by an image processing device (e.g., the image processing device 2000 of FIG. 20, the server 120 of FIG. 1, etc.), which does not itself capture images using a camera and/or sense ambient light levels, but instead obtains tracking/lighting/sensor data from one or more camera/light sensing devices/sensors positioned in the room 130a and processes the obtained data (e.g., as described in steps 320, 325, 335-380 of FIGS. 3A and 3B.)).
Regarding claim 9, Gibbs and Holmes teach the system according to claim 3 and further teach wherein, for creating the simulation, and/or calculating the illumination of the model of the face in the simulation, in particular taking into account the provided setting parameters, the computing unit calculates the direction from which light from a light source provided in the simulation shines on the model of the face in the simulation, how the light, in particular light intensity, light temperature, light color and/or illuminance, of the light source illuminates the model of the face in the simulation, how a facial geometry derived from the three-dimensional information relating to the face recorded by the first recording unit influences the illumination of the model of the face in the simulation, where the model of the face is located in the simulation relative to the light source, whether other body parts, in particular the body part on which the at least one movement unit is arranged, cast shadows on the model of the face in the simulation, whereby in particular the illuminance is reduced in areas in which shadows are cast on the model of the face in the simulation, how weather influences in the simulation affect the illumination of the model of the face in the simulation, in particular with regard to light intensity, light temperature, illuminance and/or reflections, how far away the light source is from the model of the face in the simulation, and/or how the light from the light source is reflected by objects in the simulation, in particular wherein the computing unit uses light sources of the simulation that are in the user's field of vision and/or light sources of the simulation that are not in the user's field of vision for the calculation.
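The calculation recited in claim 9 combines several standard lighting terms. As a worked illustration, a diffuse (Lambertian) term with inverse-square distance falloff and a shadow attenuation factor can be written as follows; the simplifications and constants are assumptions, not the applicant's or the references' algorithm.

```python
# Worked sketch of the kind of calculation claim 9 recites: the direction from
# which the simulated light source shines on the face model, Lambertian (N.L)
# shading, inverse-square falloff with distance, and reduced illuminance where
# another body part (e.g., a hand) casts a shadow. Assumed simplifications only.
import math

def illuminance_at_point(point, normal, light_pos, light_lux_at_1m, shadowed=False):
    """Diffuse illuminance at a point on the face model from one light source."""
    to_light = [l - p for p, l in zip(point, light_pos)]
    dist = math.sqrt(sum(c * c for c in to_light))
    direction = [c / dist for c in to_light]               # where the light shines from
    cos_incidence = max(0.0, sum(n * d for n, d in zip(normal, direction)))
    lux = light_lux_at_1m * cos_incidence / (dist * dist)  # inverse-square falloff
    if shadowed:                                           # body part between light and face
        lux *= 0.2                                         # illustrative shadow attenuation
    return lux

# Tip of the nose, facing +z, lit by a source 0.5 m away and slightly above:
print(illuminance_at_point((0, 0, 0), (0, 0, 1), (0, 0.2, 0.5), light_lux_at_1m=400.0))
```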
Regarding claim 10, Gibbs and Holmes teach the system according to claim 2 and further teach wherein the first recording unit detects the user's line of vision (‘959; ¶ 0034, supra), the computing unit derives a field of view of the user from the direction of gaze detected by the first recording unit, and the control unit controls the illumination unit in such a way that the illumination is dimmed and/or is switched off on the parts of the face that are not in the user's field of vision (‘287; ¶ 0031, supra), and/or the illumination is intensified on the parts of the face that are in the user's field of vision (‘287; ¶ 0031, supra).
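The claim 10 behavior can be sketched as a gaze-dependent gain applied per emitter: lamps aimed inside the derived field of view are intensified, the rest dimmed or switched off. The angles, gains, and function below are hypothetical illustrations, not disclosures of ‘287.

```python
# Hypothetical sketch of the claim 10 behavior: dim or switch off emitters
# aimed at parts of the face outside the user's derived field of view, and
# intensify those inside it. Angles and gains are illustrative assumptions.
def adjust_for_field_of_view(lamp_angles_deg: dict[int, float],
                             gaze_angle_deg: float,
                             half_fov_deg: float = 45.0) -> dict[int, float]:
    """Return an intensity scale per lamp given the gaze direction."""
    scales = {}
    for lamp_id, angle in lamp_angles_deg.items():
        if abs(angle - gaze_angle_deg) <= half_fov_deg:
            scales[lamp_id] = 1.2   # intensify inside the field of view
        else:
            scales[lamp_id] = 0.0   # switch off (or dim) outside it
    return scales

# Gaze straight ahead (0 deg); lamp 0 at -20 deg is in view, lamp 1 at 90 deg is not.
print(adjust_for_field_of_view({0: -20.0, 1: 90.0}, gaze_angle_deg=0.0))
```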
Regarding claim 11, Gibbs and Holmes teach the system according to claim 1 and further teach wherein the illumination unit has a light source, in particular a light-emitting diode (LED) (‘959; ¶ 0084, supra).
Regarding claim 12, Gibbs and Holmes teach the system according to claim 1 and further teach wherein the illumination unit has two light sources, in particular LEDs (‘959; ¶ 0084, supra), wherein a first light source, in particular a first LED, is arranged and aligned in such a way that an upper part of the user's nose is illuminated (‘959; ¶ 0084, supra), and a second light source, in particular a second LED, is arranged and aligned in such a way that a lower part of the user's nose is illuminated (‘959; ¶ 0084, supra).
Regarding claim 13, Gibbs and Holmes teach the system according to claim 1 and further teach wherein the illumination unit comprises a light directing element, in particular a lens, and/or an aperture (‘959; ¶ 0051; These light sources are positioned on the interior of the HMD device and are arranged to project light at the eyes, eye region or other portions of the face covered by the HMD device), each designed to focus light from the illumination unit onto a part of the user's face, in particular the user's nose (‘959; figs. 16 and 17; ¶ 0084, supra; one light source on either side of the nose).
Regarding claim 14, Gibbs and Holmes teach the system according to claim 13 and further teach wherein an arrangement of the light directing element and/or the aperture on the illumination unit can be changed in order to make the illumination unit adaptable to faces of different users (‘959; ¶ 0051, In the illustrated embodiment, the HMD device 110a is arranged to track gaze or eye rotation, as well as one or more facial movements (e.g., the movements of the upper and/or lower eyelids.) Such tracking may be performed using any known technology or technique (e.g., optical tracking, electrooculography etc.) In some approaches, the HMD device 110a includes one or more light sources and sensors (e.g., tracking cameras.) These devices are generally positioned on the interior of the HMD device 110a i.e., once the HMD device 110a is worn by the user, the sensors/cameras are hidden from view, have access to and are in close proximity to the portion of the face that is covered by the HMD device 110a. In this example, there are one or more tracking cameras that capture images of one or more facial features (e.g., eyelid, gaze, eye rotation, etc.). The images are later processed to determine the movement of the facial feature over time. In various embodiments, the HMD device also includes one or more light sources. These light sources are positioned on the interior of the HMD device and are arranged to project light at the eyes, eye region or other portions of the face covered by the HMD device. Each light source emits light that facilitates the tracking of the facial features (e.g., infrared light.) The light source(s) and camera(s) may be positioned in any suitable configuration on the HMD device 110a. (An example arrangement of IR light sources 1610 and tracking cameras 1615 in the HMD device 110a are illustrated in FIG. 17.)).
Regarding claim 15, Gibbs and Holmes teach the system according to claim 2 and further teach wherein the system has a second recording unit, in particular a laser tracking system (‘959; ¶ 0051, supra), wherein the second recording unit is arranged at a distance, in particular a predefined distance, from the user, so that three-dimensional information relating to an entire body of the user or at least the entire face can be recorded by the second recording unit (‘959; ¶ 0051, supra), in particular wherein three-dimensional information relating to the parts of the face which are obscured by the visual output device cannot be recorded by the second recording unit, the second recording unit is designed to record the head movement of the user, movements of parts of the user's body and/or a position of the face relative to the body, in particular to moving parts of the user's body (‘959; ¶ 0056; ¶ 0086, supra), and/or the computing unit processes the three-dimensional information recorded by the second recording unit combined with the three-dimensional information relating to the face captured by the first recording unit (‘959; ¶ 0039, supra), and/or for the creation of the model of the face, the creation of the simulation and/or the calculation of the illumination of the model of the face in the simulation (‘959; ¶ 0041, supra).
Conclusion
The following prior art, made of record, was not relied upon but is considered pertinent to applicant's disclosure:
US 2021/0319621 A1, Face Modeling Method And Apparatus, Electronic Device And Computer-Readable Medium – A face modeling method and apparatus, an electronic device and a computer-readable medium. The method comprises: acquiring multiple depth images, the multiple depth images being obtained by photographing a target face at different irradiation angles; performing alignment processing on the multiple depth images to obtain a target point cloud image; and using the target point cloud image to construct a three-dimensional model of the target face. The disclosure alleviates the technical problems of poor robustness and low precision in constructing a three-dimensional model of a face.
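For context, the pipeline described in the cited abstract can be sketched as back-projecting each depth image into 3D points and merging the clouds under known camera poses. Pose estimation (the actual alignment step, e.g., ICP) is omitted, and all names and intrinsics below are hypothetical, not taken from the cited publication.

```python
# Rough sketch of the cited pipeline: depth images captured at different
# angles are aligned into one target point cloud from which a 3D face model
# is built. Alignment is reduced here to applying known 4x4 camera poses; a
# real system would estimate them (e.g., with ICP). Names are hypothetical.
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project a depth image (meters) into camera-space 3D points."""
    v, u = np.nonzero(depth > 0)         # pixel coordinates with valid depth
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)   # N x 3 point cloud

def merge_aligned(clouds: list[np.ndarray], poses: list[np.ndarray]) -> np.ndarray:
    """Apply each 4x4 pose and concatenate into the target point cloud."""
    out = []
    for pts, pose in zip(clouds, poses):
        homog = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
        out.append((homog @ pose.T)[:, :3])
    return np.vstack(out)
```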
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD MARTELLO whose telephone number is (571)270-1883. The examiner can normally be reached on M-F from 9AM to 5PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached at telephone number (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EDWARD MARTELLO/
Primary Examiner, Art Unit 2611