Prosecution Insights
Last updated: April 19, 2026
Application No. 18/458,708

Generating Realistic Avatars for Extended Reality

Non-Final OA: §103, §112
Filed: Aug 30, 2023
Examiner: LE, SARAH
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Meta Platforms Inc.
OA Round: 3 (Non-Final)

Grant Probability: 67% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67%, above average (172 granted / 258 resolved; +4.7% vs TC avg)
Interview Lift: +33.4%, strong (allowance rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 3y 1m avg prosecution; 22 currently pending
Career History: 280 total applications across all art units
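To make the arithmetic behind these headline figures easy to check, the short Python sketch below recomputes the career allow rate, its spread against the Tech Center average, and the interview lift from raw counts. Only the 172 granted / 258 resolved split is taken from this page; the Tech Center baseline and the with/without-interview splits are hypothetical placeholders, and this is not the scoring model behind the estimates above.

def pct(n: int, d: int) -> float:
    """Return n/d as a percentage, rounded to one decimal place."""
    return round(100.0 * n / d, 1)

# Career allow rate: the only inputs taken from this page.
granted, resolved = 172, 258
allow_rate = pct(granted, resolved)                # 66.7, displayed as 67%

# Spread vs. the Tech Center average (the baseline is an assumed input).
tc_avg_rate = 62.0                                 # hypothetical TC 2600 average
delta_vs_tc = round(allow_rate - tc_avg_rate, 1)   # +4.7 with this baseline

# Interview lift: allowance rate with an interview minus the rate without.
# The split is not reported on the page, so these counts are invented; they
# sum to the 172/258 career totals and give a lift of roughly +33 points.
with_granted, with_resolved = 78, 88
without_granted, without_resolved = 94, 170
interview_lift = round(pct(with_granted, with_resolved)
                       - pct(without_granted, without_resolved), 1)

print(f"Career allow rate: {allow_rate}% ({granted}/{resolved})")
print(f"vs TC avg: {delta_vs_tc:+}%")
print(f"Interview lift: {interview_lift:+} pts")

Swapping in the examiner's actual with/without-interview counts would reproduce the +33.4% lift reported above; the structure of the calculation stays the same.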

Statute-Specific Performance

Statute   Rate     vs TC avg
§101      11.8%    -28.2%
§103      59.2%    +19.2%
§102       9.4%    -30.6%
§112      14.3%    -25.7%

Tech Center averages are estimates • Based on career data from 258 resolved cases
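The statute-level figures read most naturally as the share of this examiner's rejections that cite each statute, compared against a Tech Center baseline; the page does not define the metric explicitly, so that reading is an assumption (the reported rates and deltas are all consistent with a baseline near 40% per statute). The Python sketch below shows the arithmetic on an invented sample of rejection grounds; none of the inputs are vendor data.

from collections import Counter

# One entry per rejection ground raised across the examiner's resolved cases.
# Hypothetical sample that only roughly mirrors the mix shown above; a real
# pipeline would pull these grounds from the file wrappers of the 258 cases.
rejections = ["103"] * 148 + ["112"] * 36 + ["101"] * 29 + ["102"] * 24

counts = Counter(rejections)
total = sum(counts.values())

TC_BASELINE = 40.0  # assumed flat Tech Center average estimate, in percent

for statute in ("101", "102", "103", "112"):
    share = round(100.0 * counts[statute] / total, 1)
    delta = round(share - TC_BASELINE, 1)
    print(f"§{statute}: {share}%  ({delta:+}% vs TC avg)")

Feeding in the real per-case rejection history instead of the sample list would reproduce the exact percentages in the table above.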

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/07/2026 has been entered.

Response to Arguments

Applicant's arguments filed 01/07/2026 have been fully considered but they are not persuasive. Claims 1, 12 and 20 have been amended. In summary, claims 1-20 are pending in this application.

Claim Rejections - 35 USC § 103

Applicant's arguments filed 1/7/2026 have been fully considered but they are not persuasive. Regarding the independent claims, Applicant argues that "the cited references fail to teach or suggest a configuration in which an image based on a reflection of each color component is captured and thereafter combined when generating a realistic avatar of a user." The Examiner found that Bardagjy teaches monochrome colors comprising at least three colors (red, green, and blue) at col.2, lines 46-54: "the electronic display may illuminate the face using structured light having a pattern of monochrome colors (e.g., red, green, blue, etc.). The light may be emitted simultaneously or separately from the display of other by the electronic display. A controller processes images of the face captured by a camera assembly to determine facial data (e.g., depth information or color information), which may be used to update a virtual avatar of the user". Bardagjy also teaches capturing images of the face of a user from light reflected off the user at col.3, lines 65-67-col.4, lines 1-14: "FIG. 2 is a wire diagram of another view of the HMD 100 shown in FIG. 1, in accordance with one or more embodiments. In the embodiment shown in FIG. 2, the front rigid body 130 include an electronic display 200, camera assembly 210, and camera assembly 220. Camera assemblies 210 and 220 each include one or more sensors located outside the direct line of sight of a user wearing the HMD 100. To capture images of the face of a user, the sensors detect light reflected off the user, and at least some of the detected light may have originated from the electronic display 200." Further, Bardagjy teaches at col.5, lines 34-57: "In some embodiments, the data capture module 330 transmits instructions to the electronic display 200 to sequentially illuminate the face with illumination light having monochromatic light of different colors, e.g., red, green, blue, etc"; and at col.6, lines 42-57: "The data processing module 350 determines facial data by processing images from the data capture module 330. The data processing module 350 may process an image using corresponding attributes of light (emitted at least in part by the electronic display 200) that illuminated a face of a user captured in the image. Facial data may include, e.g., color representations, facial depth data, or other types of information describing faces of users. In some embodiments, the data processing module 350 determines a color representation of the face by aggregating captured images of a user's face illuminated by monochromatic light having different colors, e.g., where each image corresponds to a different one of the colors. The color representation may describe a skin tone color of the user, and the color representation may vary between different portions of the face to provide a more accurate representation of the face of the user in real life." Aggregating captured images of a user's face, where each image corresponds to a different one of the colors, is considered to be combining the first, second, and third images. The Examiner also found that a new reference, Imagawa, teaches capturing three images (red, green, and blue) based on a reflection of a color component; see at least [0209]: "In the above-described examples, a visible light image and a far-infrared light image which are captured substantially at the same time are combined to conduct learning and recognition processes. The number of combined images is not limited to two. A color image may be used as a visible light image instead of a luminance image. In this case, when a color image is represented by red (R), green (G), and blue (B) images (representing intensities in three different wavelength bands emitted or reflected from a target object)". Therefore, the combination of Bardagjy, Massoubre, Xu and Imagawa teaches all limitations of the independent claims.

Claim Objections

Claims 1, 12, and 20 are objected to because of the following informalities: claim 1 recites "second image, third image" in line 17, claim 12 recites "second image, third image" in line 23, and claim 20 recites "second image, third image" in line 18. It should be "the second image, the third image". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 9 and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 9 recites the limitations "a first" in line 3, "a second" in line 5, and "a third" in line 7. It is unclear whether "a first image", "a second image", and "a third image" refer to "a first image", "a second image", and "a third image" in claim 1 or to something else. Claim 19 recites the limitations "a first" in line 3, "a second" in line 5, and "a third" in line 7. It is unclear whether "a first image", "a second image", and "a third image" refer to "a first image", "a second image", and "a third image" in claim 12 or to something else.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. 1. Claims 1-6, 8-9, 12-17, 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bardagjy et al, U.S Patent No.10,248842 (“Bardagjy”) in view of Massoubre et al, U.S Patent No. 10,840418 (“Massoubre”) further in view of Xu, U.S Patent Application Publication No.2019/0065845 (“Xu”) further in view of Imagawa et al, U.S Patent Application Publication No.20020051578 (“Imagawa”) Regarding independent claim 1, Bardagjy teaches a method for generating a realistic avatar of a user, comprising, by an electronic device (see at least Fig.5, col.7, lines 20-27 “The facial model module 360 generates facial models of users of the HMD 100 using facial data determined by the data processing module 350. In some embodiments, the facial model module 360 uses color representation and/or depth map of a user's face to generate a virtual face of a user wearing the HMD 100. Furthermore, the facial model module 360 may use the virtual face to generate or update an avatar of the user.”): displaying, by at least one display of the electronic device, a sequence of frames to a user of the electronic device, wherein the sequence of frames comprises a first frame including a first color component, a second frame including a second color component, and a third frame including a third color component (see at least col.5, lines 54-67-col.6,lines 1-20 “In some embodiments, the data capture module 330 transmits instructions to the electronic display 200 to sequentially illuminate the face with illumination light having monochromatic light of different colors, e.g., red, green, blue, etc. The sequence may be based on a predetermined order or pattern of repeating multiple colors. To avoid distracting the user, the electronic display 200 may cycle through a sequence of colors at a higher frame rate that is not perceivable by the human eye or difficult to perceive by the human eye. The higher frame rate may be greater than a lower frame rate at which the electronic display 200 displays other content to the user. 
In some embodiments, the electronic display 200 may emit illumination light having monochromatic light and/or structured light patterns at a different light intensity than that of other content displayed to the user. The electronic display 200 may emit illumination light in a different frame (e.g., period of time) than when other light is emitted for displaying content to the user. For example, the electronic display 200 displays content during a content frame and emits the illumination light during a projection frame. The projection frame may be between content frames, and different frames may vary in duration of time. In an embodiment, the data capture module 330 sends instructions for the electronic display 200 to emit the illumination light embedded with other light for displaying content. In particular, the illumination light may be embedded in a video that is presented before, during, or after certain content of an application of the HMD 100, e.g., during an initial period while the application is loading. Additionally, the illumination light may be embedded in a video such that the illumination light is periodically emitted at a given time interval, e.g., for updating facial data. In some embodiments, the illumination light is emitted for a period of time that is too short for a human eye to perceive.”; col.8, lines 53-67-col.9,lines 1-3 “The facial tracking system 300 instructs 510 a display element (e.g., pixels of the electronic display 200 of FIG. 2 or electronic display 400 of FIG. 4) to display content to a user and to illuminate a portion of a face of the user. The display element may be part of a HMD (e.g., HMD 100 of FIG. 1) worn by the user, where the portion of the face is inside the HMD. In some embodiments, responsive to one or more instructions from the facial tracking system 300, the display element illuminates the face with monochromatic light (and/or structured light) between different content frames. For example, the display element displays the content to the user for a content frame having a first time period. The display element emits monochromatic light for a second time period after the first time period has elapsed, and prior to display of additional content for a subsequent content frame. In other embodiments, the display element illuminates the face with monochromatic light simultaneously with displaying the content for a content frame (e.g., embedded into an image or video).” ); while displaying the sequence of frames to the user, capturing, by one or more cameras of the electronic device, a plurality of images of the user (see at least col.2, lines 46-54 “the electronic display may illuminate the face using structured light having a pattern of monochrome colors (e.g., red, green, blue, etc.). The light may be emitted simultaneously or separately from the display of other by the electronic display. A controller processes images of the face captured by a camera assembly to determine facial data (e.g., depth information or color information), which may be used to update a virtual avatar of the user” where monochrome color at least three colors: red, green and blue”; col.5, lines 1-26 “The data capture module 330 receives images captured by the camera assemblies 310. The data capture module 330 transmits instructions to the electronic display 200 to illuminate portions of a face of the user inside the HMD 100. The data capture module 330 may generate an instruction based on a particular type of illumination. 
For example, an instruction for flood illumination may indicate that all pixels emit a certain color(s). As another example, an instruction may indicate a given type of structured light pattern, which may or may not be interleaved with content frames. In some embodiments, the data capture module 330 generates an instruction for one or more pixels to illuminate portions of the face for a period of time that is too short for the user to perceive (e.g., by a human eye). The data capture module 330 also transmits instructions to one or more camera assemblies 310 to capture one or more image frames of the illuminated portions of the face. The facial tracking system 300 may use the captured images for a calibration process to determine features of the user's face or to update a facial model of the user. Accordingly, light emitted by the electronic display 200 to illuminate the face for image capture (in contrast to light for displaying other content by the HMD 100) may also be referred to as “illumination light.” The data capture module 330 may store captured images in the facial data store 340 and/or any other database on or off of the HMD 100 that the facial tracking system 300 can access.”; col.10,lines 34-49 “The facial tracking system 400 captures images of an illuminated portion of a face of a user wearing the HMD 605. The facial tracking system 400 illuminates the face using illumination light, which may include monochrome light and/or a pattern of structured light. In addition, the facial tracking system 400 may emit the illumination light simultaneously or separately from presentation of other content using the electronic display assembly 635 of the HMD 605. For example, a subset of subpixels of a display element of the electronic display assembly 635 emits the illumination light, and the remaining subpixels emit light for content frames. By the processing the captured images, the facial tracking system 400 can determine facial data such as color representations (e.g., skin tone) or depth data of the face of the user, which may be used to generate a virtual model and/or animations of the face for an avatar.”), the plurality of images comprising at least a first image based on the first color component on the user, a second image based on the second color component on the user, and a third image based on the third color component on the user (see at least col.3, lines 65-67-col.4, lines 1-14“ FIG. 2 is a wire diagram of another view of the HMD 100 shown in FIG. 1, in accordance with one or more embodiments. In the embodiment shown in FIG. 2, the front rigid body 130 include an electronic display 200, camera assembly 210, and camera assembly 220. Camera assemblies 210 and 220 each include one or more sensors located outside the direct line of sight of a user wearing the HMD 100. To capture images of the face of a user, the sensors detect light reflected off the user, and at least some of the detected light may have originated from the electronic display 200. The camera assembly 210 is located on the left side of the front rigid body 130, and the camera assembly 220 is located on the right side of the front rigid body 130 from the perspective of the user. 
In other embodiments, the HMD 100 may include any number of camera assemblies, which may be positioned at different locations within the HMD 100.”); determining, for each of the plurality of images of the user, a visible wavelength for each of the first color component, the second color component, and the third color component (see at least col.6,lines 42-67-col.7, lines 1-19 “The data processing module 350 determines facial data by processing images from the data capture module 330. The data processing module 350 may process an image using corresponding attributes of light (emitted at least in part by the electronic display 200) that illuminated a face of a user captured in the image. Facial data may include, e.g., color representations, facial depth data, or other types of information describing faces of users. In some embodiments, the data processing module 350 determines a color representation of the face by aggregating captured images of a user's face illuminated by monochromatic light having different colors, e.g., where each image corresponds to a different one of the colors. The color representation may describe a skin tone color of the user, and the color representation may vary between different portions of the face to provide a more accurate representation of the face of the user in real life. In some embodiments, the data processing module 350 determines facial depth data of the face of the user by using captured images of the face illuminated by structured light or monochromatic light having different colors. The data processing module 350 may generate a depth map of the face by processing the images with a known pattern of the structured light, or a known pattern of the monochromatic light. In an example use case where the structured light pattern includes parallel lines in 2D, the structured light emitted by the electronic display 200 becomes distorted on the face because the face has 3D features (e.g., the noise protrudes from the surface of the face). The camera assemblies 310 may capture these distortions from multiple angles (e.g., the left and right sides of the HMD 100 as shown in FIG. 2). Thus, the data processing module 350 may use triangulation or other mapping techniques to determine distances (e.g., depths) between the camera assemblies 310 and particular points on the face in a 3D coordinate system. By aggregating the distances, the data processing module 350 generates the depth map that describes the user's facial features, e.g., a contour of the user's noise, mouth, eyes, cheek, edge of the face, etc., in 3D space. The resolution of the depth map may be based on the resolution of the corresponding structured light pattern emitted by the electronic display 200. In some embodiments, the data processing module 350 aggregates images captured over a duration of time to generate a depth map that is a spatiotemporal model. The spatiotemporal model describes changes in the captured facial features, e.g., indicating facial expressions.” Where aggregating captured images of a user's face illuminated by monochromatic light having different colors, e.g., ); and generating a realistic avatar of the user based on the visible wavelength for each of the first color component, the second color component, and the third color component by combining the first image, second image, and third image (see at least col.6,lines 42-67-col.7, lines 1-38 “The data processing module 350 determines facial data by processing images from the data capture module 330. 
The data processing module 350 may process an image using corresponding attributes of light (emitted at least in part by the electronic display 200) that illuminated a face of a user captured in the image. Facial data may include, e.g., color representations, facial depth data, or other types of information describing faces of users. In some embodiments, the data processing module 350 determines a color representation of the face by aggregating captured images of a user's face illuminated by monochromatic light having different colors, e.g., where each image corresponds to a different one of the colors. The color representation may describe a skin tone color of the user, and the color representation may vary between different portions of the face to provide a more accurate representation of the face of the user in real life.The data processing module 350 may generate a depth map of the face by processing the images with a known pattern of the structured light, or a known pattern of the monochromatic light. In an example use case where the structured light pattern includes parallel lines in 2D, the structured light emitted by the electronic display 200 becomes distorted on the face because the face has 3D features (e.g., the noise protrudes from the surface of the face). The camera assemblies 310 may capture these distortions from multiple angles (e.g., the left and right sides of the HMD 100 as shown in FIG. 2). Thus, the data processing module 350 may use triangulation or other mapping techniques to determine distances (e.g., depths) between the camera assemblies 310 and particular points on the face in a 3D coordinate system. By aggregating the distances, the data processing module 350 generates the depth map that describes the user's facial features, e.g., a contour of the user's noise, mouth, eyes, cheek, edge of the face, etc., in 3D space. The resolution of the depth map may be based on the resolution of the corresponding structured light pattern emitted by the electronic display 200. In some embodiments, the data processing module 350 aggregates images captured over a duration of time to generate a depth map that is a spatiotemporal model. The spatiotemporal model describes changes in the captured facial features, e.g., indicating facial expressions.The facial model module 360 generates facial models of users of the HMD 100 using facial data determined by the data processing module 350. In some embodiments, the facial model module 360 uses color representation and/or depth map of a user's face to generate a virtual face of a user wearing the HMD 100. Furthermore, the facial model module 360 may use the virtual face to generate or update an avatar of the user. By customizing the avatar to mirror the user's facial features and skin tone color, the avatar helps provide an immersive VR/AR/MR experience for the user. In addition, the facial model module 360 may determine facial expressions (e.g., smiling, frowning, winking, talking, etc.) and update the facial model or avatar to including animations that reflect the facial expressions. The facial model module 360 may provide the avatar or other content generated based on the facial model to the electronic display 200 of the HMD 100 for presentation to the user. The facial tracking system 300 may also store the facial model in the facial data store 340 for future use.”; col.9, lines 8-18 “The facial tracking system 300 updates 530 a facial model that describes the portion of the face based at least in part on the captured images. 
In some embodiments, the facial tracking system 300 determines a color representation of the portion of the face using the captured images, and updates the facial model with the color representation. In some embodiments, the facial tracking system 300 determines facial depth data using a captured image of the face illuminated by a structured light pattern. Furthermore, the facial tracking system 300 may update a virtual face of an avatar of the user using the facial model of the user.” aggregating captured images of a user's face illuminated by monochromatic light having different colors, e.g., where each image corresponds to a different one of the colors where each image corresponds to a different one of the colors is considered as red, blue and green color), wherein generating the realistic avatar includes temporarily removing an infrared bandpass filter (col.8,lines 17-38 (In an embodiment, the optics block 430 includes one or more optical elements and/or combinations of different optical elements. For example, an optical element is an aperture, a Fresnel lens, a convex lens, a concave lens, a filter, or any other suitable optical element that affects the image light emitted from the electronic display 200. In some embodiments, one or more of the optical elements in the optics block 430 may have one or more coatings, such as anti-reflective coatings. Magnification of the light by the optics block 430 allows the electronic display 200 to be physically smaller, weigh less, and consume less power than larger displays. Additionally, magnification may increase a field of view of the displayed content. For example, the field of view of the displayed content is such that the displayed content is presented using almost all (e.g., 110 degrees diagonal), and in some cases all, of the user's field of view. In some embodiments, the optics block 430 is designed so its effective focal length is larger than the spacing to the electronic display 200, which magnifies the image light projected by the electronic display 200. Additionally, in some embodiments, the amount of magnification is adjusted by adding or removing optical elements”). Bardagjy is understood to be silent on the remaining limitations of claim 1. In the same field of endeavor, Massoubre teaches displaying, by at least one display of the electronic device, a sequence of frames to a user of the electronic device, wherein the sequence of frames comprises a first frame including a first color component, a second frame including a second color component, and a third frame including a third color component (see at least col.20,lines 57-67-col.21, lines 1-23 “One or more chromatic filters, which may facilitate a simplified projection lens design with reduced achromatic performance requirements, may be employed to further narrow the wavelength range of an emitter array. In some embodiments, the emitter array 254A may include only red light-emitting components, the emitter array 254B may include only green light-emitting components, and the emitter array 254C may include only blue light-emitting components. Under the direction of controller 202, each of the emitter arrays 254A-254C may produce a monochromatic 2D image according to the color produced by its respective emitters. Accordingly, the three monochromatic emitter arrays 254A-254C may simultaneously emit three monochromatic images (e.g., a red image, a green image, and a blue image composed of image light) towards optics system 234. 
One or more chromatic filters, which may facilitate a simplified projection lens design with reduced achromatic performance requirements, may be employed to further narrow the wavelength range of an emitter array. In some embodiments, the emitter array 254A may include only red light-emitting components, the emitter array 254B may include only green light-emitting components, and the emitter array 254C may include only blue light-emitting components. Under the direction of controller 202, each of the emitter arrays 254A-254C may produce a monochromatic 2D image according to the color produced by its respective emitters. Accordingly, the three monochromatic emitter arrays 254A-254C may simultaneously emit three monochromatic images (e.g., a red image, a green image, and a blue image composed of image light) towards optics system 234. As discussed elsewhere, the three monochromatic images may be interposed, composited, or otherwise combined to generate a full color image. For example, the controller 202 may receive a full-color image to be displayed to a user and then decompose the full-color image into multiple monochromatic images, such as a red image, a green image, and a blue image. That is, the full-color image may be separated, or otherwise decomposed into three monochromatic images of primary colors. As described herein, the waveguide configuration 106 of FIG. 1B and FIGS. 2A-2B may combine (or recombine) the three monochromatic images to produce a full-color image or a poly-chromatic (or multi-chromatic) image, via post-waveguide image light 204 and directed toward the eye 110 of FIG. 1B and FIGS. 2A-2B. In yet other examples, one or more emitter arrays 254A-254C may produce light of multiple wavelengths, ranges of wavelengths, or other forms of light other than monochromatic light.”); determining a visible wavelength for each of the first color component, the second color component, and the third color component (see at least col.20, lines 26-39 “Each of the emitter arrays 254 may be a monochromatic emitter array having a 1D or 2D configuration of individual emitters (e.g., LEDs) of a single color. As described herein, a green colored light may be understood as light composed of photons with a range of wavelengths between about 500 nanometers (nm) to about 555 nm. Furthermore, as described herein, red colored light may be understood as light composed of photons with a range of wavelengths between about 622 nm to about 780 nm. Blue colored light may be understood as light composed of photons with a range of wavelengths between about 440 nm to about 492 nm. A monochromatic emitter array 254 may emit light within a narrow wavelength range, rather than a single wavelength, in some embodiments. For example, a monochromatic emitter array 254 may emit colored light (e.g., red, green, or blue photons) within a narrow wavelength range of 5-10 nm in width.” where Massoubre teaches three colors, red, blue, and green, each color having a range of wavelengths). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of updating the facial model with the color representation of the portion of the face using the captured images of Bardagjy with determining each color’s wavelengths as seen in Massoubre because this modification would achieve the expected benefits of providing colored light with a range of wavelengths. Both Bardagjy and Massoubre are silent on the remaining limitations of claim 1.
In the same field of endeavor, Xu teaches temporarily removing an infrared bandpass filter of one or more of the cameras (see at least [0049] Incident light from the multispectral light source passes through the lens assembly to arrive at the optical filter assembly 120. The optical filter assembly is a design of a group of bandpass optical filters for different regions and different wavebands, and it includes a visible light bandpass region that only allows passage of light of the visible light waveband and an infrared light bandpass region that only allows passage of light of the infrared light waveband. Preferably, as shown in FIG. 2b, the visible light bandpass region and the infrared light bandpass region can be a visible light bandpass filter 121 and an infrared light bandpass filter 122, respectively. Then, the incident multispectral light source is split into light in two wavebands and is received by the image sensor. Preferably, the visible light bandpass filter 121 in the optical filter assembly 120 has a coating that can facilitate reflecting of infrared light wavebands and transmitting of visible light wavebands, and the infrared light bandpass filter 122 has a coating that can facilitate transmitting of infrared light wavebands and reflecting of visible light wavebands [0066] Specifically, the user switches the camera to enter into the visible light imaging mode or the infrared light imaging mode through software control. The image sensor (CMOS/CCD) chip includes the visible light imaging region and the infrared light imaging region according to the corresponding design specifications and area sizes of the visible light bandpass filter 121 and infrared light bandpass filter 122 in the optical filter assembly. Under the visible light imaging mode, the software controls the image signal processor (ISP) to select the corresponding visible light imaging region for operation and call the corresponding ISP parameter settings for visible light imaging so as to optimize the effect of visible light imaging. In particular, with respect to iris recognition, since there is an active infrared illumination and the illumination light source is stable, the ISP parameters need to be modified to reduce the gain of the image sensor CMOS, increase the contrast of the image sensor CMOS, reduce the noise of the image sensor CMOS, and increase the signal-to-noise ratio of the image sensor CMOS, thereby facilitating improving of the iris imaging quality. If the module has a zoom function, the micro-motor actuator can be used to control the motion component to move the lens assembly to enter into the visible light focus mode. Autofocus is achieved by a conventional focusing method (e.g. contrast focusing) that is based on image quality evaluation, and images with a resolution size of positions corresponding to the visible light imaging region and the output format thereof are output. If it is under the infrared light imaging mode, the ISP selects the corresponding infrared light imaging region for operation, and calls the corresponding ISP parameter settings for infrared light imaging so as to optimize the effect of infrared light imaging. 
If the module has a zoom function, the micro-motor actuator can be used to control the motion component to move the lens assembly to enter into the infrared light focus mode, meanwhile, images with a resolution size of positions corresponding to the infrared light imaging region and the output format thereof are output.”) Therefore, in the combination of Bardagjy and Massoubre, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of updating the facial model with the color representation of the portion of the face using the captured images of Bardagjy with switching between the visible light imaging mode and the infrared light imaging mode as seen in Xu because this modification would either only allow passage of light of the visible light waveband or only allow passage of light of the infrared light waveband ([0049] of Xu). Bardagjy, Massoubre, and Xu are understood to be silent on the remaining limitations of claim 1. In the same field of endeavor, Imagawa teaches capturing, by one or more cameras of the electronic device, a plurality of images of the user, the plurality of images comprising at least a first image based on a reflection of the first color component on the user, a second image based on a reflection of the second color component on the user, and a third image based on a reflection of the third color component on the user (see at least [0209] In the above-described examples, a visible light image and a far-infrared light image which are captured substantially at the same time are combined to conduct learning and recognition processes. The number of combined images is not limited to two. A color image may be used as a visible light image instead of a luminance image. In this case, when a color image is represented by red (R), green (G), and blue (B) images (representing intensities in three different wavelength bands emitted or reflected from a target object), four images, i.e., the three R, G, and B images and one far-infrared light image, are input to the object recognition apparatus 1 as an image set (an image set to be learned and an image set to be recognized).
When four images are input, the learning and recognition processes are similar to those when two images, i.e., a visible light image and a far-infrared light image are input.”) Therefore, in the combination of Bardagjy, Massoubre and Xu, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of updating the facial model with the color representation of the portion of the face using the captured images of Bardagjy with captured red, green, blue images as seen in Imagawa because this modification would represent intensities in three different wavelength bands reflected from a target object ([0209] of Imagawa). Thus, the combination of Bardagjy, Massoubre, Xu and Imagawa teaches a method for generating a realistic avatar of a user, comprising, by an electronic device: displaying, by at least one display of the electronic device, a sequence of frames to a user of the electronic device, wherein the sequence of frames comprises a first frame including a first color component, a second frame including a second color component, and a third frame including a third color component; while displaying the sequence of frames to the user, capturing, by one or more cameras of the electronic device, a plurality of images of the user, the plurality of images comprising at least a first image based on a reflection of the first color component on the user, a second image based on a reflection of the second color component on the user, and a third image based on a reflection of the third color component on the user; determining, for each of the plurality of images of the user, a visible wavelength for each of the first color component, the second color component, and the third color component; and generating a realistic avatar of the user based on the visible wavelength for each of the first color component, the second color component, and the third color component by combining the first image, second image, and third image, wherein generating the realistic avatar includes temporarily removing an infrared bandpass filter of one or more of the cameras. Regarding claim 2, Bardagjy, Massoubre, Xu and Imagawa teach the method of Claim 1, wherein: the first color component comprises a red color component; the second color component comprises a green color component; and the third color component comprises a blue color component (see at least col.5, lines 54-67-col.6,lines 1-20 of Bardagjy “In some embodiments, the data capture module 330 transmits instructions to the electronic display 200 to sequentially illuminate the face with illumination light having monochromatic light of different colors, e.g., red, green, blue”; col.20,lines 57-67-col.21, lines 1-23 of Massoubre “One or more chromatic filters, which may facilitate a simplified projection lens design with reduced achromatic performance requirements, may be employed to further narrow the wavelength range of an emitter array. In some embodiments, the emitter array 254A may include only red light-emitting components, the emitter array 254B may include only green light-emitting components, and the emitter array 254C may include only blue light-emitting components. Under the direction of controller 202, each of the emitter arrays 254A-254C may produce a monochromatic 2D image according to the color produced by its respective emitters.
Accordingly, the three monochromatic emitter arrays 254A-254C may simultaneously emit three monochromatic images (e.g., a red image, a green image, and a blue image composed of image light) towards optics system 234. One or more chromatic filters, which may facilitate a simplified projection lens design with reduced achromatic performance requirements, may be employed to further narrow the wavelength range of an emitter array. In some embodiments, the emitter array 254A may include only red light-emitting components, the emitter array 254B may include only green light-emitting components, and the emitter array 254C may include only blue light-emitting components. Under the direction of controller 202, each of the emitter arrays 254A-254C may produce a monochromatic 2D image according to the color produced by its respective emitters. Accordingly, the three monochromatic emitter arrays 254A-254C may simultaneously emit three monochromatic images (e.g., a red image, a green image, and a blue image composed of image light) towards optics system 234.”; [0209] of Imagawa “In the above-described examples, a visible light image and a far-infrared light image which are captured substantially at the same time are combined to conduct learning and recognition processes. The number of combined images is not limited to two. A color image may be used as a visible light image instead of a luminance image. In this case, when a color image is represented by red (R), green (G), and blue (B) images (representing intensities in three different wavelength bands emitted or reflected from a target object), four images, i.e., the three R, G, and B images and one far-infrared light image, are input to the object recognition apparatus 1 as an image set (an image set to be learned and an image set to be recognized). When four images are input, the learning and recognition processes are similar to those when two images, i.e., a visible light image and a far-infrared light image are input.”) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 3, Bardagjy, Massoubre , Xu and Imagawa teach the method of Claim 2, wherein: the first frame includes only the red color component; the second frame includes only the green color component; and third frame includes only the blue color component(see at least col.5, lines 54-67-col.6,lines 1-20 of Bardagjy “In some embodiments, the data capture module 330 transmits instructions to the electronic display 200 to sequentially illuminate the face with illumination light having monochromatic light of different colors, e.g., red, green, blue”; col.20,lines 57-67-col.21, lines 1-23 of Massoubre “One or more chromatic filters, which may facilitate a simplified projection lens design with reduced achromatic performance requirements, may be employed to further narrow the wavelength range of an emitter array. In some embodiments, the emitter array 254A may include only red light-emitting components, the emitter array 254B may include only green light-emitting components, and the emitter array 254C may include only blue light-emitting components. Under the direction of controller 202, each of the emitter arrays 254A-254C may produce a monochromatic 2D image according to the color produced by its respective emitters. Accordingly, the three monochromatic emitter arrays 254A-254C may simultaneously emit three monochromatic images (e.g., a red image, a green image, and a blue image composed of image light) towards optics system 234. 
One or more chromatic filters, which may facilitate a simplified projection lens design with reduced achromatic performance requirements, may be employed to further narrow the wavelength range of an emitter array. In some embodiments, the emitter array 254A may include only red light-emitting components, the emitter array 254B may include only green light-emitting components, and the emitter array 254C may include only blue light-emitting components. Under the direction of controller 202, each of the emitter arrays 254A-254C may produce a monochromatic 2D image according to the color produced by its respective emitters. Accordingly, the three monochromatic emitter arrays 254A-254C may simultaneously emit three monochromatic images (e.g., a red image, a green image, and a blue image composed of image light) towards optics system 234.”) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 4, Bardagjy, Massoubre , Xu and Imagawa teach the method of Claim 1, wherein determining, for each of the plurality of images of the user, the visible wavelengths comprises determining a color for each of a plurality of characteristics of the user (see at least col.6,lines 42-67-col.7, lines 1-19 of Bardagjy “The data processing module 350 determines facial data by processing images from the data capture module 330. The data processing module 350 may process an image using corresponding attributes of light (emitted at least in part by the electronic display 200) that illuminated a face of a user captured in the image. Facial data may include, e.g., color representations, facial depth data, or other types of information describing faces of users. In some embodiments, the data processing module 350 determines a color representation of the face by aggregating captured images of a user's face illuminated by monochromatic light having different colors, e.g., where each image corresponds to a different one of the colors. The color representation may describe a skin tone color of the user, and the color representation may vary between different portions of the face to provide a more accurate representation of the face of the user in real life. In some embodiments, the data processing module 350 determines facial depth data of the face of the user by using captured images of the face illuminated by structured light or monochromatic light having different colors. The data processing module 350 may generate a depth map of the face by processing the images with a known pattern of the structured light, or a known pattern of the monochromatic light. In an example use case where the structured light pattern includes parallel lines in 2D, the structured light emitted by the electronic display 200 becomes distorted on the face because the face has 3D features (e.g., the noise protrudes from the surface of the face). The camera assemblies 310 may capture these distortions from multiple angles (e.g., the left and right sides of the HMD 100 as shown in FIG. 2). Thus, the data processing module 350 may use triangulation or other mapping techniques to determine distances (e.g., depths) between the camera assemblies 310 and particular points on the face in a 3D coordinate system. By aggregating the distances, the data processing module 350 generates the depth map that describes the user's facial features, e.g., a contour of the user's noise, mouth, eyes, cheek, edge of the face, etc., in 3D space. 
The resolution of the depth map may be based on the resolution of the corresponding structured light pattern emitted by the electronic display 200. In some embodiments, the data processing module 350 aggregates images captured over a duration of time to generate a depth map that is a spatiotemporal model. The spatiotemporal model describes changes in the captured facial features, e.g., indicating facial expressions.”) Regarding claim 5, Bardagjy, Massoubre , Xu and Imagawa teach the method of Claim 4, wherein the plurality of characteristics of the user comprises one or more of an eye color of the user, a head hair color of the user, a skin complexion of the user, a facial hair color of the user, or a cheek complexion of the user. (see at least col.6,lines 42-67-col.7, lines 1-19 of Bardagjy “The data processing module 350 determines facial data by processing images from the data capture module 330. The data processing module 350 may process an image using corresponding attributes of light (emitted at least in part by the electronic display 200) that illuminated a face of a user captured in the image. Facial data may include, e.g., color representations, facial depth data, or other types of information describing faces of users. In some embodiments, the data processing module 350 determines a color representation of the face by aggregating captured images of a user's face illuminated by monochromatic light having different colors, e.g., where each image corresponds to a different one of the colors. The color representation may describe a skin tone color of the user, and the color representation may vary between different portions of the face to provide a more accurate representation of the face of the user in real life. In some embodiments, the data processing module 350 determines facial depth data of the face of the user by using captured images of the face illuminated by structured light or monochromatic light having different colors. The data processing module 350 may generate a depth map of the face by processing the images with a known pattern of the structured light, or a known pattern of the monochromatic light. In an example use case where the structured light pattern includes parallel lines in 2D, the structured light emitted by the electronic display 200 becomes distorted on the face because the face has 3D features (e.g., the noise protrudes from the surface of the face). The camera assemblies 310 may capture these distortions from multiple angles (e.g., the left and right sides of the HMD 100 as shown in FIG. 2). Thus, the data processing module 350 may use triangulation or other mapping techniques to determine distances (e.g., depths) between the camera assemblies 310 and particular points on the face in a 3D coordinate system. By aggregating the distances, the data processing module 350 generates the depth map that describes the user's facial features, e.g., a contour of the user's noise, mouth, eyes, cheek, edge of the face, etc., in 3D space. The resolution of the depth map may be based on the resolution of the corresponding structured light pattern emitted by the electronic display 200. In some embodiments, the data processing module 350 aggregates images captured over a duration of time to generate a depth map that is a spatiotemporal model. 
The spatiotemporal model describes changes in the captured facial features, e.g., indicating facial expressions.”) Regarding claim 6, Bardagjy, Massoubre , Xu and Imagawa teach the method of Claim 1, wherein generating the realistic avatar of the user comprises combining the plurality of images of the user based on the visible wavelength for each of the first color component, the second color component, and the third color component (see at least col. 5, lines 26-67-col.6, lines 1-20 of Bardagjy “In an embodiment, the data capture module 330 may coordinate the instructions to the electronic display 200 and camera assemblies 310. For instance, responsive to an instruction, the electronic display 200 emits illumination light for a period of time. The illumination light is for illuminating a portion of a face of a user for image capture. The illumination light may include one or more types of a broad range of light, for example, light having a certain monochrome color, a pattern of structured light, some other type of light, or some combination thereof. Responsive to another instruction, the data capture module 330 captures an image the user's face illuminated by the illumination light during the same period of time. Thus, the data capture module 330 may associate captured images with attributes of the illumination light (e.g., the certain monochrome color and/or pattern of structured light) that illuminated the face. Structured light patterns may be in color, in grayscale, or monochromatic, and include, e.g., strips, checkerboards, circles/ellipses, binary codes, dot arrays, speckle, among other types of patterns. In some embodiments, the illumination light is non-visible to a human eye (e.g., infrared light). Thus, the electronic display 200 may emit illumination light for longer periods of time without disrupting a user's perception of other content displayed by the electronic display 200. For instance, the electronic display 200 may emit illumination light whenever the HMD 100 is turned on or executing an application. In some embodiments, the data capture module 330 transmits instructions to the electronic display 200 to sequentially illuminate the face with illumination light having monochromatic light of different colors, e.g., red, green, blue, etc. The sequence may be based on a predetermined order or pattern of repeating multiple colors. To avoid distracting the user, the electronic display 200 may cycle through a sequence of colors at a higher frame rate that is not perceivable by the human eye or difficult to perceive by the human eye. The higher frame rate may be greater than a lower frame rate at which the electronic display 200 displays other content to the user. In some embodiments, the electronic display 200 may emit illumination light having monochromatic light and/or structured light patterns at a different light intensity than that of other content displayed to the user. The electronic display 200 may emit illumination light in a different frame (e.g., period of time) than when other light is emitted for displaying content to the user. For example, the electronic display 200 displays content during a content frame and emits the illumination light during a projection frame. The projection frame may be between content frames, and different frames may vary in duration of time. In an embodiment, the data capture module 330 sends instructions for the electronic display 200 to emit the illumination light embedded with other light for displaying content. 
In particular, the illumination light may be embedded in a video that is presented before, during, or after certain content of an application of the HMD 100, e.g., during an initial period while the application is loading. Additionally, the illumination light may be embedded in a video such that the illumination light is periodically emitted at a given time interval, e.g., for updating facial data. In some embodiments, the illumination light is emitted for a period of time that is too short for a human eye to perceive.”; col. 6, lines 58-67-col.7, lines 1-29” In some embodiments, the data processing module 350 determines facial depth data of the face of the user by using captured images of the face illuminated by structured light or monochromatic light having different colors. The data processing module 350 may generate a depth map of the face by processing the images with a known pattern of the structured light, or a known pattern of the monochromatic light. In an example use case where the structured light pattern includes parallel lines in 2D, the structured light emitted by the electronic display 200 becomes distorted on the face because the face has 3D features (e.g., the noise protrudes from the surface of the face). The camera assemblies 310 may capture these distortions from multiple angles (e.g., the left and right sides of the HMD 100 as shown in FIG. 2). Thus, the data processing module 350 may use triangulation or other mapping techniques to determine distances (e.g., depths) between the camera assemblies 310 and particular points on the face in a 3D coordinate system. By aggregating the distances, the data processing module 350 generates the depth map that describes the user's facial features, e.g., a contour of the user's noise, mouth, eyes, cheek, edge of the face, etc., in 3D space. The resolution of the depth map may be based on the resolution of the corresponding structured light pattern emitted by the electronic display 200. In some embodiments, the data processing module 350 aggregates images captured over a duration of time to generate a depth map that is a spatiotemporal model. The spatiotemporal model describes changes in the captured facial features, e.g., indicating facial expressions.” Where aggregates images is considered as combining the plurality of images; see at least col.20,lines 57-67-col.21, lines 1-23 of Massourbe “One or more chromatic filters, which may facilitate a simplified projection lens design with reduced achromatic performance requirements, may be employed to further narrow the wavelength range of an emitter array. In some embodiments, the emitter array 254A may include only red light-emitting components, the emitter array 254B may include only green light-emitting components, and the emitter array 254C may include only blue light-emitting components. Under the direction of controller 202, each of the emitter arrays 254A-254C may produce a monochromatic 2D image according to the color produced by its respective emitters. Accordingly, the three monochromatic emitter arrays 254A-254C may simultaneously emit three monochromatic images (e.g., a red image, a green image, and a blue image composed of image light) towards optics system 234. One or more chromatic filters, which may facilitate a simplified projection lens design with reduced achromatic performance requirements, may be employed to further narrow the wavelength range of an emitter array. 
In some embodiments, the emitter array 254A may include only red light-emitting components, the emitter array 254B may include only green light-emitting components, and the emitter array 254C may include only blue light-emitting components. Under the direction of controller 202, each of the emitter arrays 254A-254C may produce a monochromatic 2D image according to the color produced by its respective emitters. Accordingly, the three monochromatic emitter arrays 254A-254C may simultaneously emit three monochromatic images (e.g., a red image, a green image, and a blue image composed of image light) towards optics system 234. As discussed elsewhere, the three monochromatic images may be interposed, composited, or otherwise combined to generate a full color image. For example, the controller 202 may receive a full-color image to be displayed to a user and then decompose the full-color image into multiple monochromatic images, such as a red image, a green image, and a blue image. That is, the full-color image may be separated, or otherwise decomposed into three monochromatic images of primary colors. As described herein, the waveguide configuration 106 of FIG. 1B and FIGS. 2A-2B may combine (or recombine) the three monochromatic images to produce a full-color image or a poly-chromatic (or multi-chromatic) image, via post-waveguide image light 204 and directed toward the eye 110 of FIG. 1B and FIGS. 2A-2B. In yet other examples, one or more emitter arrays 254A-254C may produce light of multiple wavelengths, ranges of wavelengths, or other forms of light other than monochromatic light.”; [0209] of Imagawa “In the above-described examples, a visible light image and a far-infrared light image which are captured substantially at the same time are combined to conduct learning and recognition processes. The number of combined images is not limited to two. A color image may be used as a visible light image instead of a luminance image. In this case, when a color image is represented by red (R), green (G), and blue (B) images (representing intensities in three different wavelength bands emitted or reflected from a target object), four images, i.e., the three R, G, and B images and one far-infrared light image, are input to the object recognition apparatus 1 as an image set (an image set to be learned and an image set to be recognized). When four images are input, the learning and recognition processes are similar to those when two images, i.e., a visible light image and a far-infrared light image are input.”) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 8, Bardagjy, Massoubre , Xu and Imagawa teach the method of Claim 1, wherein the plurality of light sources comprises a plurality of light-emitting diodes (LEDs) (see at least col.3, lines 37-52 of Bardagjy “The locators 155 are located in fixed positions on the front rigid body 130 relative to one another and relative to the reference point 150. Thus, the locators 155 can be used to determine positions of the reference point 150 and the HMD 100. As shown in FIG. 1, the locators 155, or portions of the locators 155, are located on a front side 120A, a top side 120B, a bottom side 120C, a right side 120D, and a left side 120E of the front rigid body 130. A locator 155 may be a light emitting diode (LED), a corner cube reflector, a reflective marker, a type of light source that contrasts with an environment in which the HMD 100 operates, or some combination thereof. 
In embodiments where the locators 155 are active (e.g., an LED or other type of light emitting device), the locators 155 may emit light in the visible band (˜380 nanometer (nm) to 750 nm), in the infrared (IR) band (˜750 nm to 1700 nm), in the ultraviolet band (10 nm to 380 nm), some other portion of the electromagnetic spectrum, or some combination thereof.”; col.16, lines 41-52 of Massoubre “Light source 232 includes a plurality of source elements, shown schematically as source elements 254A-254F. Source elements may include an array of light-emitting components (LECs), i.e., a source element may include and/or be an embodiment of an emitter array. Various embodiments of emitter arrays are discussed in conjunction with FIGS. 3-4. However, briefly here, an emitter array may be a 2D arrays of LECs, such as but not limited to light-emitting diodes (LEDs)”col.17, lines 51- 64 of “The individual source elements 254 of an emitter array may include one or more compact, efficient and/or powerful sources of lights, e.g., LECs with at least ultra-high brightness, low power consumption, and a low footprint. The source elements 254 may include one or more arrays of light-emitting components (LECs), such as but not limited to light-emitting diodes (LEDs), e.g., μLEDs, organic LEDs (OLEDs), a superluminescent LED (SLED), and organic μLEDs. AμLED may be an LED with features sizes ranging between sub-microns to a hundreds of microns. Various embodiments of μLEDs are discussed in conjunction with FIGS. 6A-6B. In some embodiments, GaN-based inorganic LEDs can be made orders of magnitude brighter than OLEDs with a light emission area of few microns.”) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 9, Bardagjy, Massoubre , Xu and Imagawa teach the method of Claim 1, wherein capturing, by the one or more cameras, the plurality of images of the user further comprises: capturing a first image of the plurality of images of the user while displaying the first frame including the first color component; capturing a second image of the plurality of images of the user while displaying the second frame including the second color component; and capturing a third image of the plurality of images of the user while displaying the third frame including the third color component (see at least col. 5, lines 26-67-col.6, lines 1-20 of Bardagjy “In an embodiment, the data capture module 330 may coordinate the instructions to the electronic display 200 and camera assemblies 310. For instance, responsive to an instruction, the electronic display 200 emits illumination light for a period of time. The illumination light is for illuminating a portion of a face of a user for image capture. The illumination light may include one or more types of a broad range of light, for example, light having a certain monochrome color, a pattern of structured light, some other type of light, or some combination thereof. Responsive to another instruction, the data capture module 330 captures an image the user's face illuminated by the illumination light during the same period of time. Thus, the data capture module 330 may associate captured images with attributes of the illumination light (e.g., the certain monochrome color and/or pattern of structured light) that illuminated the face. Structured light patterns may be in color, in grayscale, or monochromatic, and include, e.g., strips, checkerboards, circles/ellipses, binary codes, dot arrays, speckle, among other types of patterns. 
In some embodiments, the illumination light is non-visible to a human eye (e.g., infrared light). Thus, the electronic display 200 may emit illumination light for longer periods of time without disrupting a user's perception of other content displayed by the electronic display 200. For instance, the electronic display 200 may emit illumination light whenever the HMD 100 is turned on or executing an application. In some embodiments, the data capture module 330 transmits instructions to the electronic display 200 to sequentially illuminate the face with illumination light having monochromatic light of different colors, e.g., red, green, blue, etc. The sequence may be based on a predetermined order or pattern of repeating multiple colors. To avoid distracting the user, the electronic display 200 may cycle through a sequence of colors at a higher frame rate that is not perceivable by the human eye or difficult to perceive by the human eye. The higher frame rate may be greater than a lower frame rate at which the electronic display 200 displays other content to the user. In some embodiments, the electronic display 200 may emit illumination light having monochromatic light and/or structured light patterns at a different light intensity than that of other content displayed to the user. The electronic display 200 may emit illumination light in a different frame (e.g., period of time) than when other light is emitted for displaying content to the user. For example, the electronic display 200 displays content during a content frame and emits the illumination light during a projection frame. The projection frame may be between content frames, and different frames may vary in duration of time. In an embodiment, the data capture module 330 sends instructions for the electronic display 200 to emit the illumination light embedded with other light for displaying content. In particular, the illumination light may be embedded in a video that is presented before, during, or after certain content of an application of the HMD 100, e.g., during an initial period while the application is loading. Additionally, the illumination light may be embedded in a video such that the illumination light is periodically emitted at a given time interval, e.g., for updating facial data. In some embodiments, the illumination light is emitted for a period of time that is too short for a human eye to perceive.”; col. 6, lines 58-67-col.7, lines 1-29” In some embodiments, the data processing module 350 determines facial depth data of the face of the user by using captured images of the face illuminated by structured light or monochromatic light having different colors. The data processing module 350 may generate a depth map of the face by processing the images with a known pattern of the structured light, or a known pattern of the monochromatic light. In an example use case where the structured light pattern includes parallel lines in 2D, the structured light emitted by the electronic display 200 becomes distorted on the face because the face has 3D features (e.g., the noise protrudes from the surface of the face). The camera assemblies 310 may capture these distortions from multiple angles (e.g., the left and right sides of the HMD 100 as shown in FIG. 2). Thus, the data processing module 350 may use triangulation or other mapping techniques to determine distances (e.g., depths) between the camera assemblies 310 and particular points on the face in a 3D coordinate system. 
By aggregating the distances, the data processing module 350 generates the depth map that describes the user's facial features, e.g., a contour of the user's noise, mouth, eyes, cheek, edge of the face, etc., in 3D space. The resolution of the depth map may be based on the resolution of the corresponding structured light pattern emitted by the electronic display 200. In some embodiments, the data processing module 350 aggregates images captured over a duration of time to generate a depth map that is a spatiotemporal model. The spatiotemporal model describes changes in the captured facial features, e.g., indicating facial expressions.”; see at least col.20,lines 57-67-col.21, lines 1-23 of Massourbe “One or more chromatic filters, which may facilitate a simplified projection lens design with reduced achromatic performance requirements, may be employed to further narrow the wavelength range of an emitter array. In some embodiments, the emitter array 254A may include only red light-emitting components, the emitter array 254B may include only green light-emitting components, and the emitter array 254C may include only blue light-emitting components. Under the direction of controller 202, each of the emitter arrays 254A-254C may produce a monochromatic 2D image according to the color produced by its respective emitters. Accordingly, the three monochromatic emitter arrays 254A-254C may simultaneously emit three monochromatic images (e.g., a red image, a green image, and a blue image composed of image light) towards optics system 234. One or more chromatic filters, which may facilitate a simplified projection lens design with reduced achromatic performance requirements, may be employed to further narrow the wavelength range of an emitter array. In some embodiments, the emitter array 254A may include only red light-emitting components, the emitter array 254B may include only green light-emitting components, and the emitter array 254C may include only blue light-emitting components. Under the direction of controller 202, each of the emitter arrays 254A-254C may produce a monochromatic 2D image according to the color produced by its respective emitters. Accordingly, the three monochromatic emitter arrays 254A-254C may simultaneously emit three monochromatic images (e.g., a red image, a green image, and a blue image composed of image light) towards optics system 234. As discussed elsewhere, the three monochromatic images may be interposed, composited, or otherwise combined to generate a full color image. For example, the controller 202 may receive a full-color image to be displayed to a user and then decompose the full-color image into multiple monochromatic images, such as a red image, a green image, and a blue image. That is, the full-color image may be separated, or otherwise decomposed into three monochromatic images of primary colors. As described herein, the waveguide configuration 106 of FIG. 1B and FIGS. 2A-2B may combine (or recombine) the three monochromatic images to produce a full-color image or a poly-chromatic (or multi-chromatic) image, via post-waveguide image light 204 and directed toward the eye 110 of FIG. 1B and FIGS. 2A-2B. 
In yet other examples, one or more emitter arrays 254A-254C may produce light of multiple wavelengths, ranges of wavelengths, or other forms of light other than monochromatic light.”; [0209] of Imagawa “In the above-described examples, a visible light image and a far-infrared light image which are captured substantially at the same time are combined to conduct learning and recognition processes. The number of combined images is not limited to two. A color image may be used as a visible light image instead of a luminance image. In this case, when a color image is represented by red (R), green (G), and blue (B) images (representing intensities in three different wavelength bands emitted or reflected from a target object), four images, i.e., the three R, G, and B images and one far-infrared light image, are input to the object recognition apparatus 1 as an image set (an image set to be learned and an image set to be recognized). When four images are input, the learning and recognition processes are similar to those when two images, i.e., a visible light image and a far-infrared light image are input.”) In addition, the same motivation is used as the rejection for claim 1. Regarding independent claim 12, Bardagjy teaches an electronic device (see at least Fig.6), comprising: at least one display; one or more cameras; one or more non-transitory computer-readable storage media including instructions; and one or more processors coupled to the storage media and the camera, the one or more processors configured to execute the instructions to (col.9, lines 20-49, col.11, lines 37-40, “FIG. 6 is a HMD system 600 in accordance with one or more embodiments. The HMD system 600 may operate in an artificial reality environment. The HMD system 600 shown by FIG. 6 comprises a console 610 coupled to a HMD 605, an imaging device 620, and an input/output (I/O) interface 630. While FIG. 6 shows an example HMD system 600 including one HMD 605 and one input interface 630, in other embodiments any number of these components may be included in the HMD system 600. For example, there may be multiple HMDs, each having an associated input interface 630 and communicating with the HMD console 610. In alternative configurations, different and/or additional components may be included in the HMD system 600…The HMD 605 includes an electronic display assembly 635, the optics block 430, one or more locators 155, the position sensors 145, the internal measurement unit (IMU) 140, and the facial tracking system 400…The application store 640 stores one or more applications for execution by the console 610. An application is a group of instructions, that when executed by a processor, generates content for presentation to the user.”): The remaining limitations of claim 12 are of similar scope to claim 1 and are therefore rejected under the same rationale. Regarding claim 13, Bardagjy, Massoubre, Xu and Imagawa teach the electronic device of Claim 12, wherein: The remaining limitations of claim 13 are of similar scope to claim 2 and are therefore rejected under the same rationale. Regarding claim 14, Bardagjy, Massoubre, Xu and Imagawa teach the electronic device of Claim 13, wherein: The remaining limitations of claim 14 are of similar scope to claim 3 and are therefore rejected under the same rationale. Regarding claim 15, Bardagjy, Massoubre, Xu and Imagawa teach the electronic device of Claim 12, The remaining limitations of claim 15 are of similar scope to claim 4 and are therefore rejected under the same rationale. 
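For orientation only, the sketch below illustrates the kind of per-component combining step that the rejection reads onto claims 6 and 9 above (one grayscale capture taken while each of the red, green, and blue frames is displayed, then combined into a single color image). It is not code from the application or from Bardagjy, Massoubre, Xu, or Imagawa; the function name, the array shapes, and the simple channel-stacking approach are assumptions made for this illustration.

# Hypothetical sketch only: stack three per-color captures into one RGB image.
# Assumes each capture is a grayscale frame (H x W, values in [0, 1]) taken while
# the display showed only that color component.
import numpy as np

def combine_component_captures(red_capture: np.ndarray,
                               green_capture: np.ndarray,
                               blue_capture: np.ndarray) -> np.ndarray:
    """Combine per-component captures into an H x W x 3 color image."""
    if not (red_capture.shape == green_capture.shape == blue_capture.shape):
        raise ValueError("per-component captures must have the same resolution")
    return np.stack([red_capture, green_capture, blue_capture], axis=-1)

# Usage with three hypothetical 480 x 640 captures:
h, w = 480, 640
rgb = combine_component_captures(np.random.rand(h, w),
                                 np.random.rand(h, w),
                                 np.random.rand(h, w))
assert rgb.shape == (h, w, 3)

A real pipeline would also need to register the three captures against one another, since the face can move between color frames; the sketch assumes perfectly aligned captures.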
Regarding claim 16, Bardagjy, Massoubre, Xu and Imagawa teach the electronic device of Claim 15, The remaining limitations of claim 16 are of similar scope to claim 5 and are therefore rejected under the same rationale. Regarding claim 17, Bardagjy, Massoubre, Xu and Imagawa teach the electronic device of Claim 12, The remaining limitations of claim 17 are of similar scope to claim 6 and are therefore rejected under the same rationale. Regarding claim 19, Bardagjy, Massoubre, Xu and Imagawa teach the electronic device of Claim 12, The remaining limitations of claim 19 are of similar scope to claim 9 and are therefore rejected under the same rationale. Regarding independent claim 20, Bardagjy teaches a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors of an electronic device (col.12, lines 59-67, col.13, lines 1-13 “Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described. Embodiments of the disclosure may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.”), cause the electronic device to: The remaining limitations of claim 20 are of similar scope to claim 1 and are therefore rejected under the same rationale. 2. Claims 7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Bardagjy et al., U.S. Patent No. 10,248,842 (“Bardagjy”) in view of Massoubre et al., U.S. Patent No. 10,840,418 (“Massoubre”) further in view of Xu, U.S. Patent Application Publication No. 2019/0065845 (“Xu”) further in view of Imagawa et al., U.S. Patent Application Publication No. 2002/0051578 (“Imagawa”) further in view of TRYTHALL, U.S. Patent Application Publication No. 2018/0301076 (“TRYTHALL”). Regarding claim 7, Bardagjy, Massoubre, Xu and Imagawa teach the method of Claim 1, further comprising: in lieu of displaying the sequence of frames to the user, projecting, by a plurality of light-sources of the electronic device, a sequence of optical pulses onto the user, wherein the sequence of optical pulses comprises a first optical pulse including only the first color component, a second optical pulse including only the second color component, and a third optical pulse frame including only the third color component (see at least col.5, lines 54-67-col.6, lines 1-20 of Bardagjy “In some embodiments, the data capture module 330 transmits instructions to the electronic display 200 to sequentially illuminate the face with illumination light having monochromatic light of different colors, e.g., red, green, blue, etc. 
The sequence may be based on a predetermined order or pattern of repeating multiple colors. To avoid distracting the user, the electronic display 200 may cycle through a sequence of colors at a higher frame rate that is not perceivable by the human eye or difficult to perceive by the human eye. The higher frame rate may be greater than a lower frame rate at which the electronic display 200 displays other content to the user. In some embodiments, the electronic display 200 may emit illumination light having monochromatic light and/or structured light patterns at a different light intensity than that of other content displayed to the user. The electronic display 200 may emit illumination light in a different frame (e.g., period of time) than when other light is emitted for displaying content to the user. For example, the electronic display 200 displays content during a content frame and emits the illumination light during a projection frame. The projection frame may be between content frames, and different frames may vary in duration of time. In an embodiment, the data capture module 330 sends instructions for the electronic display 200 to emit the illumination light embedded with other light for displaying content. In particular, the illumination light may be embedded in a video that is presented before, during, or after certain content of an application of the HMD 100, e.g., during an initial period while the application is loading. Additionally, the illumination light may be embedded in a video such that the illumination light is periodically emitted at a given time interval, e.g., for updating facial data. In some embodiments, the illumination light is emitted for a period of time that is too short for a human eye to perceive.”; col.8, lines 53-67-col.9,lines 1-3 “The facial tracking system 300 instructs 510 a display element (e.g., pixels of the electronic display 200 of FIG. 2 or electronic display 400 of FIG. 4) to display content to a user and to illuminate a portion of a face of the user. The display element may be part of a HMD (e.g., HMD 100 of FIG. 1) worn by the user, where the portion of the face is inside the HMD. In some embodiments, responsive to one or more instructions from the facial tracking system 300, the display element illuminates the face with monochromatic light (and/or structured light) between different content frames. For example, the display element displays the content to the user for a content frame having a first time period. The display element emits monochromatic light for a second time period after the first time period has elapsed, and prior to display of additional content for a subsequent content frame. In other embodiments, the display element illuminates the face with monochromatic light simultaneously with displaying the content for a content frame (e.g., embedded into an image or video).”; see col.12, lines 48-61 of Massoubre “The first decoupling element 214A may redirect internally reflected image light from the waveguide 220. The second de-coupling element 214B may decouple the image light from waveguide 220 and direct the image light towards eye 110. In some embodiments, the internally-reflected image light may be totally, or at least near totally, internally reflected. The first decoupling element 214A may be part of, affixed to, or formed in the top surface 216 of the waveguide 220. 
The second decoupling element 214B may be part of, affixed to, or formed in the bottom surface 218 of the waveguide 220, such that the first decoupling element 214A is opposed to the second decoupling element 214B. A light propagation area may extend between decoupling elements 214A-214B.”;see at least col.20, lines 26-39, col.20,lines 57-67-col.21, lines 1-23 of Massoubre “Each of the emitter arrays 254 may be a monochromatic emitter array having a 1D or 2D configuration of individual emitters (e.g., LEDs) of a single color. As described herein, a green colored light may be understood as light composed of photons with a range of wavelengths between about 500 nanometers (nm) to about 555 nm. Furthermore, as described herein, red colored light may be understood as light composed of photons with a range of wavelengths between about 622 nm to about 780 nm. Blue colored light may be understood as light composed of photons with a range of wavelengths between about 440 nm to about 492 nm. A monochromatic emitter array 254 may emit light within a narrow wavelength range, rather than a single wavelength, in some embodiments. For example, a monochromatic emitter array 254 may emit colored light (e.g., red, green, or blue photons) within a narrow wavelength range of 5-10 nm in width.”; [0209] of Imagawa “ In the above-described examples, a visible light image and a far-infrared light image which are captured substantially at the same time are combined to conduct learning and recognition processes. The number of combined images is not limited to two. A color image may be used as a visible light image instead of a luminance image. In this case, when a color image is represented by red (R), green (G), and blue (B) images (representing intensities in three different wavelength bands emitted or reflected from a target object), four images, i.e., the three R, G, and B images and one far-infrared light image, are input to the object recognition apparatus 1 as an image set (an image set to be learned and an image set to be recognized). When four images are input, the learning and recognition processes are similar to those when two images, i.e., a visible light image and a far-infrared light image are input.) In addition, the same motivation is used as the rejection for claim 1. Bardagjy, Massoubre , Xu and Imagawa teach wavelengths but not pulses. In the same field of endeavor, TRYTHALL teaches projecting, by a plurality of light-sources of the electronic device, a sequence of optical pulses onto the user ([0007] In an example embodiment, the two or more different colours comprise three colours to be displayed sequentially and the selected reference colour is the second colour to be displayed of the three colours.”; [0038] FIG. 1a provides a representation of a known method for generating a colour pixel using a sequence of red (R), green (G) and blue (B) light pulses of an appropriate duration and brightness within a given frame period or image refresh period; 45] Known head or helmet-mounted display (HMD) systems include a display device, under the control of an image processor, and a transparent combiner in the form of a helmet visor or a waveguide positioned in front of one or other eye of a user in the user's line of sight to an external scene. 
Images generated by the display device may be projected onto the interior surface of the visor and reflected towards the viewer's eye or conveyed through the waveguide and output along that line of sight to appear overlain on the user's view of the external scene. The display device may comprise a digital micro-mirror device (DMD) or Liquid Crystal on Silicon (LCoS) display device, for example, having pixel-sized elements each separately controllable to reflect, emit or transmit light from one or more illuminating light sources, according to the type of display device. Light is output from the display device at each pixel position in the form of discrete pulses of light which, in a colour display, comprise one or more illuminating colours, e.g. primary colours red (R), green (G) and blue (B). The eye integrates the discrete light pulses output at each pixel position over an image refresh period—16.667 ms in the case of an example 60 Hz display refresh rate—and perceives a pixel of a brightness and colour determined by the total duration of pulses of each illuminating colour during the image refresh period.”), wherein the sequence of optical pulses comprises a first optical pulse including only the first color component, a second optical pulse including only the second color component, and a third optical pulse frame including only the third color component (see at least [0047] Referring initially to FIG. 1a, there is shown a representation of the method for generating a colour pixels using a sequence of red (R), green (G) and blue (B) light pulses of an appropriate duration and brightness within a given image refresh period, in this example for generating a white Pixel 2 in a group of three adjacent pixels (1, 2 and 3) within the image area of a display. It is assumed in this representation that there is no eye movement relative to the display during the image refresh period and the viewer perceives Pixel 2 to be a white pixel, as intended. The ordering of the illuminating colours may be varied from the R, G, B ordering shown in FIG. 1a without altering the viewer's perception of the pixel colour. [0061-0068] Conveniently, in this example embodiment of the present invention, the tracker system may include a prediction capability so that the rotation matrix [HW] may be synchronised to the time at which the selected reference component, in this example the green light pulse, in a sequence of R, G and B pulses, is expected to be visible at the display. This provides for a more accurate alignment of the symbol with the intended line of sight. However, a lack of synchronisation would not in itself be expected to affect the achievement of a perceived alignment of the colour components of the symbol. More importantly, if the rate data are synchronised to the expected time of displaying at least one of the colour components, for example the selected reference component, then it may be expected that the colour components will appear more accurately aligned in the display, in particular where the rate of relative movement happens to be changing rapidly at that time….068] Δt is the required time difference in seconds of the correction, in practice the time difference between a reference time point to which the tracker system is synchronised, e.g. 
the time of displaying the green component, as above, and the time of displaying pulses of each of the other components”) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of updating the facial model with the color representation of the portion of the face using the captured images of Bardagjy, Massoubre, Xu and Imagawa by including an optical pulse for each color component as seen in TRYTHALL because this modification would generate colour pixels using a sequence of red (R), green (G) and blue (B) light pulses of an appropriate duration and brightness within a given image refresh period ([0047] of TRYTHALL). Thus, the combination of Bardagjy, Massoubre, Xu, Imagawa and TRYTHALL teaches further comprising: in lieu of displaying the sequence of frames to the user, projecting, by a plurality of light-sources of the electronic device, a sequence of optical pulses onto the user, wherein the sequence of optical pulses comprises a first optical pulse including only the first color component, a second optical pulse including only the second color component, and a third optical pulse frame including only the third color component. Regarding claim 18, Bardagjy, Massoubre, Xu and Imagawa teach the electronic device of Claim 12, wherein the instructions further comprise instructions to: The remaining limitations of claim 18 are of similar scope to claim 7 and are therefore rejected under the same rationale. 3. Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Bardagjy et al., U.S. Patent No. 10,248,842 (“Bardagjy”) in view of Massoubre et al., U.S. Patent No. 10,840,418 (“Massoubre”) further in view of Xu, U.S. Patent Application Publication No. 2019/0065845 (“Xu”) further in view of Imagawa et al., U.S. Patent Application Publication No. 2002/0051578 (“Imagawa”) further in view of OGAWA et al., U.S. Patent Application Publication No. 2021/0068678 (“OGAWA”). Regarding claim 10, Bardagjy, Massoubre, Xu and Imagawa teach the method of Claim 1, further comprising determining a heart rate of the user based on the visible wavelength for each of the first color component, the second color component, and the third color component (see at least col.6, lines 42-67-col.7, lines 1-19 “The data processing module 350 determines facial data by processing images from the data capture module 330. The data processing module 350 may process an image using corresponding attributes of light (emitted at least in part by the electronic display 200) that illuminated a face of a user captured in the image. Facial data may include, e.g., color representations, facial depth data, or other types of information describing faces of users. In some embodiments, the data processing module 350 determines a color representation of the face by aggregating captured images of a user's face illuminated by monochromatic light having different colors, e.g., where each image corresponds to a different one of the colors. The color representation may describe a skin tone color of the user, and the color representation may vary between different portions of the face to provide a more accurate representation of the face of the user in real life. In some embodiments, the data processing module 350 determines facial depth data of the face of the user by using captured images of the face illuminated by structured light or monochromatic light having different colors. 
The data processing module 350 may generate a depth map of the face by processing the images with a known pattern of the structured light, or a known pattern of the monochromatic light. In an example use case where the structured light pattern includes parallel lines in 2D, the structured light emitted by the electronic display 200 becomes distorted on the face because the face has 3D features (e.g., the noise protrudes from the surface of the face). The camera assemblies 310 may capture these distortions from multiple angles (e.g., the left and right sides of the HMD 100 as shown in FIG. 2). Thus, the data processing module 350 may use triangulation or other mapping techniques to determine distances (e.g., depths) between the camera assemblies 310 and particular points on the face in a 3D coordinate system. By aggregating the distances, the data processing module 350 generates the depth map that describes the user's facial features, e.g., a contour of the user's noise, mouth, eyes, cheek, edge of the face, etc., in 3D space. The resolution of the depth map may be based on the resolution of the corresponding structured light pattern emitted by the electronic display 200. In some embodiments, the data processing module 350 aggregates images captured over a duration of time to generate a depth map that is a spatiotemporal model. The spatiotemporal model describes changes in the captured facial features, e.g., indicating facial expressions.”); [0209] of Imagawa “ In the above-described examples, a visible light image and a far-infrared light image which are captured substantially at the same time are combined to conduct learning and recognition processes. The number of combined images is not limited to two. A color image may be used as a visible light image instead of a luminance image. In this case, when a color image is represented by red (R), green (G), and blue (B) images (representing intensities in three different wavelength bands emitted or reflected from a target object), four images, i.e., the three R, G, and B images and one far-infrared light image, are input to the object recognition apparatus 1 as an image set (an image set to be learned and an image set to be recognized). When four images are input, the learning and recognition processes are similar to those when two images, i.e., a visible light image and a far-infrared light image are input.) In addition, the same motivation is used as the rejection for claim 1. Bardagjy, Massoubre , Xu and Imagawa are understood to be silent on a heart rate. In the same field of endeavor, OGAWA teaches determining a heart rate of the user based on the visible wavelength for each of the first color component, the second color component, and the third color component ([0030] First, the pulse wave calculation section 123 acquires a signal of a change in time of the pixel average of luminance values of each of colors (R, G, and B when captured by an RGB camera) of each region. The pulse wave calculation section 123 performs independent component analysis on the acquired signal, and extracts the same number of independent components as the number of colors. The pulse wave calculation section 123 uses a digital band-pass filter from 0.75 to 4.0 Hz, for example, for these independent components to remove both a low frequency component and a high frequency component from the signal. 
The pulse wave calculation section 123 performs the fast Fourier transform on the signal having passed through the band-pass filter, and calculates a power spectrum of the frequency. The pulse wave calculation section 123 calculates a peak (Pulse Rate (PR)) of the power spectrum at from 0.75 to 4.0 Hz, and detects the independent component having the highest peak value as a pulse wave signal by comparing with peak values of the independent components.; [0031] As described above, the segmented region pulse wave acquisition unit 120 functions as a pulse wave acquisition unit configured to acquire pulse waves from each of the plurality of regions where the pulse waves can be detected on the body surface of the subject. In the present embodiment, the segmented region pulse wave acquisition unit 120 acquires pulse waves from a captured image of the subject, with reference to a change in time of the pixel average of the luminance values of each color in each of the regions on the subject's body surface. The segmented region pulse wave acquisition unit 120 is not limited to the above-discussed operations, and may acquire the pulse waves by using contact sensors being mounted in contact with each of the plurality of regions where the pulse waves can be detected on the subject's body surface. [0034] FIG. 3 is a power spectrum of the frequency of a pulse wave signal. Since the pulse wave is a wave that is transmitted to an artery by the pumping action of the heart, the pulse wave signal has a fixed period in accordance with the heartbeat, and the peak can be seen at about 1 Hz when frequency analysis is performed on the pulse wave signal. Using this, as illustrated in FIG. 3, the pulse wave signal quality detecting section 131 calculates the SNR in which a power sum of PR±0.05 Hz in the frequency power spectrum of the pulse wave signal is taken as a signal, and a power sum from 0.75 to 4.0 Hz excluding the signal band as noise. Note that the bandwidths of the signal and noise are not limited thereto. The method for detecting the pulse wave signal quality is a known method (for reference: DistancePPG: Robust non-contact vital signs monitoring using a camera), and other methods may be used as appropriate.”) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of updating the facial model with the color representation of the portion of the face using the captured images of Bardagjy, Massoubre, Xu and Imagawa by acquiring pulse waves from a captured image of the subject, with reference to a change in time of the pixel average of the luminance values of each color in each of the regions on the subject's body surface as seen in OGAWA because this modification would calculate blood pressure ([0006] of OGAWA). Thus, the combination of Bardagjy, Massoubre, Xu, Imagawa and OGAWA teaches further comprising determining a heart rate of the user based on the visible wavelength for each of the first color component, the second color component, and the third color component. Regarding claim 11, Bardagjy, Massoubre, Xu and Imagawa teach the method of Claim 1, further comprising determining a blood oxygen level of the user based on the visible wavelength for each of the first color component, the second color component, and the third color component (see at least col.6, lines 42-67-col.7, lines 1-19 “The data processing module 350 determines facial data by processing images from the data capture module 330. 
The data processing module 350 may process an image using corresponding attributes of light (emitted at least in part by the electronic display 200) that illuminated a face of a user captured in the image. Facial data may include, e.g., color representations, facial depth data, or other types of information describing faces of users. In some embodiments, the data processing module 350 determines a color representation of the face by aggregating captured images of a user's face illuminated by monochromatic light having different colors, e.g., where each image corresponds to a different one of the colors. The color representation may describe a skin tone color of the user, and the color representation may vary between different portions of the face to provide a more accurate representation of the face of the user in real life. In some embodiments, the data processing module 350 determines facial depth data of the face of the user by using captured images of the face illuminated by structured light or monochromatic light having different colors. The data processing module 350 may generate a depth map of the face by processing the images with a known pattern of the structured light, or a known pattern of the monochromatic light. In an example use case where the structured light pattern includes parallel lines in 2D, the structured light emitted by the electronic display 200 becomes distorted on the face because the face has 3D features (e.g., the noise protrudes from the surface of the face). The camera assemblies 310 may capture these distortions from multiple angles (e.g., the left and right sides of the HMD 100 as shown in FIG. 2). Thus, the data processing module 350 may use triangulation or other mapping techniques to determine distances (e.g., depths) between the camera assemblies 310 and particular points on the face in a 3D coordinate system. By aggregating the distances, the data processing module 350 generates the depth map that describes the user's facial features, e.g., a contour of the user's noise, mouth, eyes, cheek, edge of the face, etc., in 3D space. The resolution of the depth map may be based on the resolution of the corresponding structured light pattern emitted by the electronic display 200. In some embodiments, the data processing module 350 aggregates images captured over a duration of time to generate a depth map that is a spatiotemporal model. The spatiotemporal model describes changes in the captured facial features, e.g., indicating facial expressions.” [0209] of Imagawa “ In the above-described examples, a visible light image and a far-infrared light image which are captured substantially at the same time are combined to conduct learning and recognition processes. The number of combined images is not limited to two. A color image may be used as a visible light image instead of a luminance image. In this case, when a color image is represented by red (R), green (G), and blue (B) images (representing intensities in three different wavelength bands emitted or reflected from a target object), four images, i.e., the three R, G, and B images and one far-infrared light image, are input to the object recognition apparatus 1 as an image set (an image set to be learned and an image set to be recognized). When four images are input, the learning and recognition processes are similar to those when two images, i.e., a visible light image and a far-infrared light image are input.) In addition, the same motivation is used as the rejection for claim 1. 
Bardagjy, Massoubre , Xu and Imagawa are understood to be silent on a blood oxygen level. In the same field of endeavor, OGAWA teaches determining a blood oxygen level of the user based on the visible wavelength for each of the first color component, the second color component, and the third color component (see at least [0004] In order to estimate blood pressure using an image captured by a single camera, it is desirable to measure the blood pressure by calculating a pulse wave propagation time only from an image of the face. 030] First, the pulse wave calculation section 123 acquires a signal of a change in time of the pixel average of luminance values of each of colors (R, G, and B when captured by an RGB camera) of each region. The pulse wave calculation section 123 performs independent component analysis on the acquired signal, and extracts the same number of independent components as the number of colors. The pulse wave calculation section 123 uses a digital band-pass filter from 0.75 to 4.0 Hz, for example, for these independent components to remove both a low frequency component and a high frequency component from the signal. The pulse wave calculation section 123 performs the fast Fourier transform on the signal having passed through the band-pass filter, and calculates a power spectrum of the frequency. The pulse wave calculation section 123 calculates a peak (Pulse Rate (PR)) of the power spectrum at from 0.75 to 4.0 Hz, and detects the independent component having the highest peak value as a pulse wave signal by comparing with peak values of the independent components.; [0031] As described above, the segmented region pulse wave acquisition unit 120 functions as a pulse wave acquisition unit configured to acquire pulse waves from each of the plurality of regions where the pulse waves can be detected on the body surface of the subject. In the present embodiment, the segmented region pulse wave acquisition unit 120 acquires pulse waves from a captured image of the subject, with reference to a change in time of the pixel average of the luminance values of each color in each of the regions on the subject's body surface. The segmented region pulse wave acquisition unit 120 is not limited to the above-discussed operations, and may acquire the pulse waves by using contact sensors being mounted in contact with each of the plurality of regions where the pulse waves can be detected on the subject's body surface”; [0047] The imaging unit 110 is a camera in which an image sensor of CMOS, CCD, or the like is combined with a lens, and includes a color filter suitable for observing an increase and decrease in the amount of blood such as a color filter of Bayer arrangement of RGB, which is generally used, a color filter of RGBCy in which a color filter of Cy (Cyan) is added to a color filter of RGB, a color filter of RGBIR in which a color filter of IR (near infrared) is added to a color filter of RGB, or the like. The color filter is not limited to the filter described above, and only an IR color filter may be included, for example. [0044] The blood pressure estimating section 142 calculates blood pressure information, which is estimated blood pressure of the subject, by using Equation (4) discussed above with reference to the pulse wave propagation information having been calculated by the pulse wave propagation information calculation section 141. 
The blood pressure estimating section 142 selects a blood pressure estimation formula corresponding to the measurement subject biometric parameter input via the measurement subject biometric parameter input unit 150. The blood pressure estimating section 142 uses the selected blood pressure estimation formula to calculate estimated blood pressure based on the pulse wave propagation time calculated in the pulse wave propagation information calculation section 141. It is sufficient that the blood pressure estimating section 142 is configured to use the pulse wave propagation time obtained above as at least one of explanatory variables in estimating the blood pressure in which the blood pressure is taken as an objective variable, and in addition, an amount of characteristics or the like obtained from a pulse wave form having been obtained from a screen image of a face may also be used in combination therewith.”) In addition, the same motivation is used as the rejection for claim 10. Thus, the combination of Bardagjy, Massoubre , Xu, Imagawa and OGAWA teaches further comprising determining a blood oxygen level of the user based on the visible wavelength for each of the first color component, the second color component, and the third color component. Contact Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARAH LE whose telephone number is (571)270-7842. The examiner can normally be reached Monday: 8AM-4:30PM EST, Tuesday: 8 AM-3:30PM EST, Wednesday: 8AM-2:30PM EST, Thursday and Friday off. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SARAH LE/Primary Examiner, Art Unit 2614
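As background for the claim 10 and claim 11 mapping above, the following is a minimal sketch of the remote photoplethysmography step the examiner cites from OGAWA [0030] (band-pass filtering a per-frame color average to 0.75-4.0 Hz, then taking the spectral peak as the pulse rate). It is illustrative only: OGAWA also applies independent component analysis across the R, G, and B traces, which this sketch omits, and the use of the green-channel mean alone is an assumption for illustration, not the reference's full method or anything disclosed in the application.

# Hypothetical sketch only: estimate heart rate from a per-frame green-channel mean.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate_bpm(green_means: np.ndarray, fps: float) -> float:
    """green_means: per-frame mean green pixel value over a facial region."""
    x = green_means - green_means.mean()
    b, a = butter(3, [0.75, 4.0], btype="bandpass", fs=fps)  # pass 0.75-4.0 Hz
    filtered = filtfilt(b, a, x)
    spectrum = np.abs(np.fft.rfft(filtered)) ** 2
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    band = (freqs >= 0.75) & (freqs <= 4.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0  # beats per minute

# Usage with a synthetic 1.2 Hz (72 bpm) signal sampled at 30 fps for 20 seconds:
fps = 30.0
t = np.arange(0, 20, 1.0 / fps)
trace = 0.01 * np.sin(2 * np.pi * 1.2 * t) + 0.001 * np.random.randn(t.size)
print(round(estimate_heart_rate_bpm(trace, fps)))  # ~72

A blood oxygen estimate, as recited in claim 11, would additionally require comparing pulsatile absorption at two or more wavelengths rather than locating a single spectral peak; this sketch does not address that.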

Prosecution Timeline

Aug 30, 2023
Application Filed
May 16, 2025
Non-Final Rejection — §103, §112
Aug 13, 2025
Interview Requested
Aug 20, 2025
Response Filed
Aug 20, 2025
Examiner Interview Summary
Aug 20, 2025
Applicant Interview (Telephonic)
Oct 03, 2025
Final Rejection — §103, §112
Dec 03, 2025
Applicant Interview (Telephonic)
Dec 04, 2025
Examiner Interview Summary
Jan 07, 2026
Request for Continued Examination
Jan 09, 2026
Response after Non-Final Action
Jan 13, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12569321
PROPOSING DENTAL RESTORATION MATERIAL PARAMETERS
2y 5m to grant Granted Mar 10, 2026
Patent 12573128
Progressive Compression of Geometry for Graphics Processing
2y 5m to grant Granted Mar 10, 2026
Patent 12536715
GENERATION OF STYLIZED DRAWING OF THREE-DIMENSIONAL SHAPES USING NEURAL NETWORKS
2y 5m to grant Granted Jan 27, 2026
Patent 12505585
SYSTEMS AND METHODS FOR OVERLAY OF VIRTUAL OBJECT ON PROXY OBJECT
2y 5m to grant Granted Dec 23, 2025
Patent 12505590
NODE LIGHTING
2y 5m to grant Granted Dec 23, 2025
Compare these cases to see what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
67%
Grant Probability
99%
With Interview (+33.4%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 258 resolved cases by this examiner. Grant probability derived from career allow rate.
