Prosecution Insights
Last updated: April 19, 2026
Application No. 18/751,403

HEAD-MOUNTED DEVICE AND EYE TRACKING METHOD FOR TRACKING EYEBALLS

Non-Final OA: §103, §DP
Filed: Jun 24, 2024
Examiner: ROSARIO, NELSON M
Art Unit: 2624
Tech Center: 2600 — Communications
Assignee: Asti Global Inc., Taiwan
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 0m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 86%, above average (704 granted / 818 resolved; +24.1% vs TC avg)
Interview Lift: +5.8%, a moderate lift (resolved cases with interview vs. without)
Avg Prosecution: 2y 0m (fast prosecutor); 27 applications currently pending
Career History: 845 total applications across all art units
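The headline figures in this block reduce to simple ratios. A minimal sketch of the arithmetic (variable names are illustrative, not from the underlying dataset):

```python
# Career allow rate: applications granted out of applications resolved.
granted, resolved = 704, 818
allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.1%}")  # 86.1%, displayed as 86%

# The +24.1% delta vs. the Tech Center average implies the TC baseline.
tc_average = allow_rate - 0.241
print(f"implied TC average: {tc_average:.1%}")  # 62.0%
```

The interview-lift figure (+5.8%) is the analogous delta between allow rates for resolved cases with and without an examiner interview; the underlying with/without counts are not shown here.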

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 70.9% (+30.9% vs TC avg)
§102: 2.3% (-37.7% vs TC avg)
§112: 8.1% (-31.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 818 resolved cases.
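The four deltas are internally consistent: each statute's rate minus its reported delta backs out the same Tech Center baseline. A quick sanity check, using only the figures shown above:

```python
# Examiner's per-statute rejection rates (%) and reported deltas vs. TC average.
rates  = {"§101": 4.5, "§103": 70.9, "§102": 2.3, "§112": 8.1}
deltas = {"§101": -35.5, "§103": 30.9, "§102": -37.7, "§112": -31.9}

# Implied TC average for each statute: examiner rate minus reported delta.
implied = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied)  # every statute backs out the same 40.0% baseline
```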

Office Action

§103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is responsive to the application filed June 24, 2024. Claims 1-10 are presented for examination. Claims 1, 4 and 8 are independent claims.

Priority

Examiner acknowledges the claim for domestic priority under 35 U.S.C. 119(e) to provisional patent application 63/526,449, which was filed July 12, 2023.

Drawings

The drawings filed June 24, 2024 are accepted by the examiner.

Oath/Declaration

The Office acknowledges receipt of a properly signed Oath/Declaration submitted June 24, 2024.

Abstract

The abstract filed June 24, 2024 is accepted by the examiner.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).

Claims 1-10 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-10 of application No. 18/963,775. Although the conflicting claims are not identical, they are not patentably distinct from each other because the claims recite "A head-mounted device for tracking eyeballs, comprising: a device casing module including a first eyeglass frame structure, a second eyeglass frame structure cooperating with the first eyeglass frame structure, a first eyeglass lens structure carried by the first eyeglass frame structure, and a second eyeglass lens structure carried by the second eyeglass frame structure; a signal control module disposed in the device casing module; a first image capturing module configured to cooperate with the device casing module and electrically connect to the signal control module; and a second image capturing module configured to cooperate with the device casing module and electrically connect to the signal control module; wherein the first image capturing module includes a plurality of first image sensors disposed around the first eyeglass frame structure, and the second image capturing module includes a plurality of second image sensors disposed around the second eyeglass frame structure; wherein, when the head-mounted device is optionally configured to be worn by a user, the first image sensors are allowed to be configured through the signal control module to capture a first eyeball image of a first eye of the user through a first optical waveguide channel provided by the first eyeglass lens structure; wherein, when the head-mounted device is optionally configured to be worn by the user, the second image sensors are allowed to be configured through the signal control module to capture a second eyeball image of a second eye of the user through a second optical waveguide channel provided by the second eyeglass lens structure," and therefore recite the same limitations as claimed in application No. 18/963,775. This is an obviousness-type double patenting rejection.

Application 18/751,403, claim 1 (reproduced above; similar to claims 4 and 8).

Application 18/963,775, claim 1 (similar to claims 6 and 10): "1. A multifunctional head-mounted device, comprising: a device casing module including a first eyeglass frame structure, a second eyeglass frame structure cooperating with the first eyeglass frame structure, a first eyeglass lens structure carried by the first eyeglass frame structure, and a second eyeglass lens structure carried by the second eyeglass frame structure; a signal control module disposed in the device casing module; a first image generating module cooperating with the device casing module and electrically connected to the signal control module; and a second image generating module cooperating with the device casing module and electrically connected to the signal control module; wherein the first image generating module includes a plurality of first image generating chips, and the plurality of first image generating chips are configured to surround the first eyeglass lens structure and be surrounded by the first eyeglass frame structure; wherein the second image generating module includes a plurality of second image generating chips, and the plurality of second image generating chips are configured to surround the second eyeglass lens structure and be surrounded by the second eyeglass frame structure; wherein, when the multifunctional head-mounted device is optionally configured to be worn by a user, the plurality of first image generating chips are allowed to be configured through the signal control module to project a first predetermined image beam onto a first eye of the user through a first optical waveguide channel provided by the first eyeglass lens structure; and wherein, when the multifunctional head-mounted device is optionally configured to be worn by the user, the plurality of second image generating chips are allowed to be configured through the signal control module to project a second predetermined image beam onto a second eye of the user through a second optical waveguide channel provided by the second eyeglass lens structure."

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Kubo et al. (US 20250028172 A1) in view of Tzvieli et al. (US 20220155860 A1).

As to Claim 1:

Kubo et al. discloses a head-mounted device (Kubo, see Abstract, where Kubo discloses that a head mountable display device can include a frame defining an aperture, an optical assembly disposed in the aperture, and a curtain assembly extending between the frame and the optical assembly and occluding the aperture.
The curtain assembly can include an elastic layer and an air-impermeable layer) for tracking eyeballs (Kubo, see paragraph [0842], where Kubo discloses that input-output circuitry 2.2-22 may include sensors 2.2-16. Sensors 2.2-16 may include, for example, three-dimensional sensors (e.g., three-dimensional image sensors such as structured light sensors that emit beams of light and that use two-dimensional digital image sensors to gather image data for three-dimensional images from dots or other light spots that are produced when a target is illuminated by the beams of light, binocular three-dimensional image sensors that gather three-dimensional images using two or more cameras in a binocular imaging arrangement, three-dimensional LIDAR (light detection and ranging) sensors, sometimes referred to as time-of-flight cameras or three-dimensional time-of-flight cameras, three-dimensional radiofrequency sensors, or other sensors that gather three-dimensional image data), cameras (e.g., two-dimensional infrared and/or visible digital image sensors), gaze tracking sensors (e.g., a gaze tracking system based on an image sensor and, if desired, a light source that emits one or more beams of light that are tracked using the image sensor after reflecting from a user's eyes)), comprising: a device casing module including a first eyeglass frame (Kubo, see 9.1-121a in figure 9.1-2 and paragraph [1430], where Kubo discloses that as seen in FIGS. 9.1-2, the display 9.1-120 (FIGS. 9.1-1) can include a first display assembly 9.1-121a and a second display assembly 9.1-121b, which collectively form a pair of display assemblies corresponding to the two eyes of a user.
Each of the display assemblies may include any appropriate combination of electronic and optical elements to present graphical information to the user) structure (Kubo, see figure 9.1-1 and paragraph [1419], where Kubo discloses that the securement element 9.1-150 can include a band, a strap, a rim, temples of a glasses frame, or any other suitable mechanism that serves to secure and retain the housing 9.1-110 on the head 9.1-20 of the user 9.1-10. The securement element 9.1-150 can be an integral part of the housing 9.1-110 or be implemented as a separate component attached thereto. The housing 9.1-110 can further include or be coupled to one or more nose pads that serve to rest the housing 9.1-110 on the nose of the user 9.1-10), a second eyeglass frame structure (Kubo, see 9.1-240 in figure 9.1-2) cooperating with the first eyeglass frame structure (Kubo, see 9.1-121a and 9.1-121b in figure 9.1-2 and paragraph [1430], where Kubo discloses that as seen in FIGS. 9.1-2, the display 9.1-120 (FIGS. 9.1-1) can include a first display assembly 9.1-121a and a second display assembly 9.1-121b, which collectively form a pair of display assemblies corresponding to the two eyes of a user. Each of the display assemblies may include any appropriate combination of electronic and optical elements to present graphical information to the user; collectively forming a pair of display assemblies between first display assembly 9.1-121a and second display assembly 9.1-121b teaches or suggests cooperation), a first eyeglass lens structure carried by the first eyeglass frame structure (Kubo, see paragraph [1430], where Kubo discloses that FIGS. 9.1-2 shows an example of the head mounted device 9.1-100 in front view. As seen in FIGS. 9.1-2, the display 9.1-120 (FIGS. 9.1-1) can include a first display assembly 9.1-121a and a second display assembly 9.1-121b, which collectively form a pair of display assemblies corresponding to the two eyes of a user. Each of the display assemblies may include any appropriate combination of electronic and optical elements to present graphical information to the user. For example, each display assembly may include a display layer having an array of electronically controlled pixels that can provide a visual output. The display assembly may further include optical elements, such as lenses, mirrors, etc., and/or a gaze tracking device, to facilitate generation of an enhanced computer generated reality that is responsive to a gaze and/or pose of the user), and a second eyeglass lens structure carried by the second eyeglass frame structure (Kubo, see paragraph [1430], where Kubo discloses that FIGS. 9.1-2 shows an example of the head mounted device 9.1-100 in front view. As seen in FIGS. 9.1-2, the display 9.1-120 (FIGS. 9.1-1) can include a first display assembly 9.1-121a and a second display assembly 9.1-121b, which collectively form a pair of display assemblies corresponding to the two eyes of a user. Each of the display assemblies may include any appropriate combination of electronic and optical elements to present graphical information to the user. For example, each display assembly may include a display layer having an array of electronically controlled pixels that can provide a visual output. The display assembly may further include optical elements, such as lenses, mirrors, etc., and/or a gaze tracking device, to facilitate generation of an enhanced computer generated reality that is responsive to a gaze and/or pose of the user); a signal control module (Kubo, see controller 9.1-130 in figure 9.1-1 and paragraph [1449], where Kubo discloses that as shown in FIGS. 9.1-9, the head-mounted device 9.1-100 can include a controller 9.1-130 with one or more processing units that include or are configured to access a memory 9.1-918 having instructions stored thereon.
The instructions or computer programs may be configured to perform one or more of the operations or functions described with respect to the head-mounted device 9.1-100) disposed in the device casing module (Kubo, see controller 9.1-130 in figure 9.1-1 and paragraph [1449]); a first image capturing module configured to cooperate with the device casing module and electrically connect to the signal control module (Kubo, see paragraphs [0852] and [1981], where Kubo discloses that forward-facing cameras for pass-through video may be mounted on the left and right sides of the front of device 2.2-10 in a configuration in which the cameras diverge slightly along the horizontal dimension so that the fields of view of these cameras overlap somewhat while capturing a wide-angle image of the environment in front of device 2.2-10. The captured image may, if desired, include portions of the user's surroundings that are below, above, and to the sides of the area directly in front of device 2.2-10. Kubo also discloses rear-facing sensors such as sensor 11.1.3.2.3-66 on main housing 11.1.3.2.3-12M, head-facing sensors mounted on strap 11.1.3.2.3-12T such as sensor 11.1.3.2.3-64, and/or other head presence sensors; these sensors may include cameras); and a second image capturing module configured to cooperate with the device casing module and electrically connect to the signal control module (Kubo, see paragraphs [0852] and [1981], where Kubo discloses that forward-facing cameras for pass-through video may be mounted on the left and right sides of the front of device 2.2-10 in a configuration in which the cameras diverge slightly along the horizontal dimension so that the fields of view of these cameras overlap somewhat while capturing a wide-angle image of the environment in front of device 2.2-10. The captured image may, if desired, include portions of the user's surroundings that are below, above, and to the sides of the area directly in front of device 2.2-10. Kubo also discloses rear-facing sensors such as sensor 11.1.3.2.3-66 on main housing 11.1.3.2.3-12M, head-facing sensors mounted on strap 11.1.3.2.3-12T such as sensor 11.1.3.2.3-64, and/or other head presence sensors; these sensors may include cameras); wherein the first image capturing module includes a plurality of first image sensors disposed around the first eyeglass frame structure (Kubo, see 11.1.3-104a in figure 11.1.3-1 and paragraph [1764], where Kubo discloses that the first and second optical modules 11.1.3-104a-b can include respective display screens configured to project light toward the user's eyes when donning the HMD 11.1.3-100. In at least one example, the user can manipulate (i.e., depress and/or rotate) the button 11.1.3-114 to activate a positional adjustment of the optical modules 11.1.3-104a-b to match the inter-pupillary distance of the user's eyes. The optical modules 11.1.3-104a-b can also include one or more cameras or other sensors/sensor systems for imaging and measuring the IPD of the user such that the optical modules 11.1.3-104a-b can be adjusted to match the IPD), and the second image capturing module includes a plurality of second image sensors disposed around the second eyeglass frame structure (Kubo, see 11.1.3-104b in figure 11.1.3-1 and paragraph [1764], where Kubo discloses that the first and second optical modules 11.1.3-104a-b can include respective display screens configured to project light toward the user's eyes when donning the HMD 11.1.3-100. In at least one example, the user can manipulate (i.e., depress and/or rotate) the button 11.1.3-114 to activate a positional adjustment of the optical modules 11.1.3-104a-b to match the inter-pupillary distance of the user's eyes.
The optical modules 11.1.3-104a-b can also include one or more cameras or other sensors/sensor systems for imaging and measuring the IPD of the user such that the optical modules 11.1.3-104a-b can be adjusted to match the IPD); wherein, when the head-mounted device is optionally configured to be worn by a user (Kubo, see paragraph [0809], where Kubo discloses that FIG. 1-2 illustrates a view of an example of an HMD 1-200 including various portions or parts thereof separated according to the modularity and selective coupling of those parts. For example, the HMD 1-200 can include a band 1-216 which can be selectively coupled to first and second electronic straps 1-205a, 1-205b. The first securement strap 1-205a can include a first electronic component 1-212a and the second securement strap 1-205b can include a second electronic component 1-212b. In at least one example, the first and second straps 1-205a-b can be removably coupled to the display unit 1-202), the first image sensors are allowed to be configured through the signal control module to capture a first eyeball image of a first eye of the user (Kubo, see 13.5-108 in figure 13.5-2A, and paragraphs [2591] and [2592], where Kubo discloses that the head-mountable device 13.5-100 also includes a facial interface 13.5-103 and a sensor 13.5-108 positioned (e.g., attached to or embedded within) on the facial interface 13.5-103. As used herein, the terms "facial interface" or "engagement interface" refer to a portion of the head mountable device 13.5-100 that engages a user face via direct contact. In particular, a facial interface includes portions of the head-mountable device 13.5-100 that conform to (e.g., compress against) regions of a user face. To illustrate, a facial interface can include a pliant (or semi-pliant) facetrack that spans the forehead, wraps around the eyes, contacts the zygoma and maxilla regions of the face, and bridges the nose. In addition, a facial interface can include various components forming a structure, webbing, cover, fabric, or frame of a head-mountable device disposed between the display 13.5-102 and the user skin. In particular implementations, a facial interface can include a seal (e.g., a light seal, environment seal, dust seal, air seal, etc.). It will be appreciated that the term "seal" can include partial seals or inhibitors, in addition to complete seals (e.g., a partial light seal where some ambient light is blocked and a complete light seal where all ambient light is blocked when the head-mountable device is donned). In addition, the term "sensor" refers to one or more different sensing devices, such as a camera or imaging device); wherein, when the head-mounted device is optionally configured to be worn by the user (Kubo, see paragraph [0809], where Kubo discloses that FIG. 1-2 illustrates a view of an example of an HMD 1-200 including various portions or parts thereof separated according to the modularity and selective coupling of those parts. For example, the HMD 1-200 can include a band 1-216 which can be selectively coupled to first and second electronic straps 1-205a, 1-205b. The first securement strap 1-205a can include a first electronic component 1-212a and the second securement strap 1-205b can include a second electronic component 1-212b. In at least one example, the first and second straps 1-205a-b can be removably coupled to the display unit 1-202), the second image sensors are allowed to be configured through the signal control module to capture a second eyeball image of a second eye of the user (Kubo, see 13.5-108 in figure 13.5-2A, and paragraphs [2591] and [2592], where Kubo discloses that the head-mountable device 13.5-100 also includes a facial interface 13.5-103 and a sensor 13.5-108 positioned (e.g., attached to or embedded within) on the facial interface 13.5-103.
As used herein, the terms "facial interface" or "engagement interface" refer to a portion of the head mountable device 13.5-100 that engages a user face via direct contact. In particular, a facial interface includes portions of the head-mountable device 13.5-100 that conform to (e.g., compress against) regions of a user face. To illustrate, a facial interface can include a pliant (or semi-pliant) facetrack that spans the forehead, wraps around the eyes, contacts the zygoma and maxilla regions of the face, and bridges the nose. In addition, a facial interface can include various components forming a structure, webbing, cover, fabric, or frame of a head-mountable device disposed between the display 13.5-102 and the user skin. In particular implementations, a facial interface can include a seal (e.g., a light seal, environment seal, dust seal, air seal, etc.). It will be appreciated that the term "seal" can include partial seals or inhibitors, in addition to complete seals (e.g., a partial light seal where some ambient light is blocked and a complete light seal where all ambient light is blocked when the head-mountable device is donned). In addition, the term "sensor" refers to one or more different sensing devices, such as a camera or imaging device). 
[Figures: media_image1.png, media_image2.png, media_image3.png (greyscale)]

Kubo differs from the claimed subject matter in that Kubo does not explicitly disclose that the first image sensors are allowed to be configured through the signal control module to capture a first eyeball image of a first eye of the user through a first optical waveguide channel provided by the first eyeglass lens structure and the second image sensors are allowed to be configured through the signal control module to capture a second eyeball image of a second eye of the user through a second optical waveguide channel provided by the second eyeglass lens structure. However, in an analogous art, Tzvieli discloses that the first image sensors are allowed to be configured through the signal control module to capture a first eyeball image of a first eye of the user (Tzvieli, see 233a, 232a, 231a, 232b, 231b, 232c and 230 in figure 2B and paragraph [0096], where Tzvieli discloses that FIG. 2B illustrates an embodiment of an eye tracking system on smartglasses that tracks both eyes, which utilizes multiple light sources and detectors to track each eye. The illustrated system includes the smartglasses 230 that have PSOG and VOG that may be used together to track movements of both eyes. Tracking of the left eye is done utilizing a PSOG that includes multiple light sources (emitters 231a and 231b in the figure) as well as multiple detectors (discrete photosensors 232a, 232b, and 232c).
Additionally, video camera 233a may be utilized to capture images of the left eye, which can be used to determine positions and/or movements of the left eye) through a first optical waveguide channel provided by the first eyeglass lens structure (Tzvieli, see paragraph [0054], where Tzvieli discloses that some examples of the one or more sensing components that may be utilized by systems that include VOG and/or PSOG (such as the aforementioned HMS) include various types of photo sensors (e.g., discrete photo sensors or imaging sensors of cameras). In one example, photo sensors may be embedded in a head-mounted frame. In another example, photosensors may be embedded in smartglasses' temples. In yet another example, photosensors may be embedded in a display located in front of an eye. In still another example, photo sensors may be configured to receive the reflected light from a waveguide (e.g., photosensors coupled to an augmented reality display module waveguide located in front of the eyes)) and the second image sensors are allowed to be configured through the signal control module to capture a second eyeball image of a second eye of the user (Tzvieli, see paragraph [0096], where Tzvieli discloses that FIG. 2B illustrates an embodiment of an eye tracking system on smartglasses that tracks both eyes, which utilizes multiple light sources and detectors to track each eye. The illustrated system includes the smartglasses 230 that have PSOG and VOG that may be used together to track movements of both eyes. Tracking of the left eye is done utilizing a PSOG that includes multiple light sources (emitters 231a and 231b in the figure) as well as multiple detectors (discrete photosensors 232a, 232b, and 232c). Additionally, video camera 233a may be utilized to capture images of the left eye, which can be used to determine positions and/or movements of the left eye. In a similar fashion, tracking the right eye is done in this embodiment utilizing another PSOG that includes additional light sources (emitters 231c and 231d in the figure) as well as additional multiple detectors (discrete photosensors 232d, 232e, and 232f) and an additional video camera 233b that may be utilized to capture images of the right eye) through a second optical waveguide channel provided by the second eyeglass lens structure (Tzvieli, see paragraph [0054], where Tzvieli discloses that some examples of the one or more sensing components that may be utilized by systems that include VOG and/or PSOG (such as the aforementioned HMS) include various types of photo sensors (e.g., discrete photo sensors or imaging sensors of cameras). In one example, photo sensors may be embedded in a head-mounted frame. In another example, photosensors may be embedded in smartglasses' temples. In yet another example, photosensors may be embedded in a display located in front of an eye. In still another example, photo sensors may be configured to receive the reflected light from a waveguide (e.g., photosensors coupled to an augmented reality display module waveguide located in front of the eyes)).

[Figure: media_image4.png (greyscale)]

It would have been obvious to one of ordinary skill in the art to modify the invention of Kubo with Tzvieli.
One would be motivated to modify Kubo by disclosing that the first image sensors are allowed to be configured through the signal control module to capture a first eyeball image of a first eye of the user through a first optical waveguide channel provided by the first eyeglass lens structure and the second image sensors are allowed to be configured through the signal control module to capture a second eyeball image of a second eye of the user through a second optical waveguide channel provided by the second eyeglass lens structure as taught by Tzvieli, and thereby providing power efficient ways to obtain eye tracking data with head-mounted systems (Tzvieli, see paragraph [0004]). As to Claim 4: Kubo et al discloses a head-mounted device (Kubo, see Abstract, where Kubo discloses that a head mountable display device can include a frame defining an aperture, an optical assembly disposed in the aperture, and a curtain assembly extending between the frame and the optical assembly and occluding the aperture. The curtain assembly can include an elastic layer and an air-impermeable layer) for tracking eyeballs (Kubo, see paragraph [0842], where Kubo discloses that input-output circuitry 2.2-22 may include sensors 2.2-16. 
Sensors 2.2-16 may include, for example, three-dimensional sensors (e.g., three-dimensional image sensors such as structured light sensors that emit beams of light and that use two-dimensional digital image sensors to gather image data for three-dimensional images from dots or other light spots that are produced when a target is illuminated by the beams of light, binocular three-dimensional image sensors that gather three-dimensional images using two or more cameras in a binocular imaging arrangement, three-dimensional LIDAR (light detection and ranging) sensors, sometimes referred to as time-of-flight cameras or three-dimensional time-of-flight cameras, three-dimensional radiofrequency sensors, or other sensors that gather three-dimensional image data), cameras (e.g., two-dimensional infrared and/or visible digital image sensors), gaze tracking sensors (e.g., a gaze tracking system based on an image sensor and, if desired, a light source that emits one or more beams of light that are tracked using the image sensor after reflecting from a user's eyes), comprising: a device casing module including a first eyeglass frame (Kubo, see 9.1-121a in figure 9.1-2 and paragraph [1430], where Kubo discloses that as seen in FIGS. 9.1-2, the display 9.1-120 (FIGS. 9.1-1) can include a first display assembly 9.1-12la and a second display assembly 9.1-121b, which collectively form a pair of display assemblies corresponding to the two eyes of a user. Each of the display assemblies may include any appropriate combination of electronic and optical elements to present graphical information to the user) structure (Kubo, see figure 9.1-1 paragraph [1419], where Kubo discloses that the securement element 9.1-150 can include a band, a strap, a rim, temples of a glasses frame, or any other suitable mechanism that serves to secure and retain the housing 9.1-110 on the head 9.1-20 of the user 9.1-10. 
The securement element 9.1-150 can be an integral part of the housing 9.1-110 or be implemented as a separate component attached thereto. The housing 9.1-110 can further include or be coupled to one or more nose pads that serve to rest the housing 9.1-110 on the nose of the user 9.1-10), a second eyeglass frame structure (Kubo, see 9.1-240 in figure 9.1-2) cooperating with the first eyeglass frame structure (Kubo, see 9.1-121a and 9.1-121b in figure 9.1-2 and paragraph [1430], where Kubo discloses that as seen in FIGS. 9.1-2, the display 9.1-120 (FIGS. 9.1-1) can include a first display assembly 9.1-121a and a second display assembly 9.1-121b, which collectively form a pair of display assemblies corresponding to the two eyes of a user. Each of the display assemblies may include any appropriate combination of electronic and optical elements to present graphical information to the user; collectively forming a pair of display assemblies between first display assembly 9.1-121a and a second display assembly 9.1-121b teaches or suggests cooperation), a first eyeglass lens structure carried by the first eyeglass frame structure (Kubo, see paragraph [1430], where Kubo discloses that FIGS. 9.1-2 shows an example of the head mounted device 9.1-100 in front view. As seen in FIGS. 9.1-2, the display 9.1-120 (FIGS. 9.1-1) can include a first display assembly 9.1-121a and a second display assembly 9.1-121b, which collectively form a pair of display assemblies corresponding to the two eyes of a user. Each of the display assemblies may include any appropriate combination of electronic and optical elements to present graphical information to the user. For example, each display assembly may include a display layer having an array of electronically controlled pixels that can provide a visual output.
The display assembly may further include optical elements, such as lenses, mirrors, etc., and/or a gaze tracking device, to facilitate generation of an enhanced computer generated reality that is responsive to a gaze and/or pose of the user), and a second eyeglass lens structure carried by the second eyeglass frame structure (Kubo, see paragraph [1430], where Kubo discloses that FIGS. 9.1-2 shows an example of the head mounted device 9.1-100 in front view. As seen in FIGS. 9.1-2, the display 9.1-120 (FIGS. 9.1-1) can include a first display assembly 9.1-121a and a second display assembly 9.1-121b, which collectively form a pair of display assemblies corresponding to the two eyes of a user. Each of the display assemblies may include any appropriate combination of electronic and optical elements to present graphical information to the user. For example, each display assembly may include a display layer having an array of electronically controlled pixels that can provide a visual output. The display assembly may further include optical elements, such as lenses, mirrors, etc., and/or a gaze tracking device, to facilitate generation of an enhanced computer generated reality that is responsive to a gaze and/or pose of the user); a signal control module (Kubo, see controller 9.1-130 in figure 9.1-1 and paragraph [1449], where Kubo discloses that as shown in FIGS. 9.1-9, the head-mounted device 9.1-100 can include a controller 9.1-130 with one or more processing units that include or are configured to access a memory 9.1-918 having instructions stored thereon.
The instructions or computer programs may be configured to perform one or more of the operations or functions described with respect to the head-mounted device 9.1-100) disposed in the device casing module (Kubo, see controller 9.1-130 in figure 9.1-1 and paragraph [1449]); a first image capturing module configured to cooperate with the device casing module and electrically connect to the signal control module (Kubo, see paragraphs [0852] and [1981], where Kubo discloses that forward-facing cameras for pass-through video may be mounted on the left and right sides of the front of device 2.2-10 in a configuration in which the cameras diverge slightly along the horizontal dimension so that the fields of view of these cameras overlap somewhat while capturing a wide-angle image of the environment in front of device 2.2-10. The captured image may, if desired, include portions of the user's surroundings that are below, above, and to the sides of the area directly in front of device 2.2-10. Kubo also discloses rear-facing sensors such as sensor 11.1.3.2.3-66 on main housing 11.1.3.2.3-12M, head-facing sensors mounted on strap 11.1.3.2.3-12T such as sensor 11.1.3.2.3-64, and/or other head presence sensors; these sensors may include cameras); and a second image capturing module configured to cooperate with the device casing module and electrically connect to the signal control module (Kubo, see paragraphs [0852] and [1981], where Kubo discloses that forward-facing cameras for pass-through video may be mounted on the left and right sides of the front of device 2.2-10 in a configuration in which the cameras diverge slightly along the horizontal dimension so that the fields of view of these cameras overlap somewhat while capturing a wide-angle image of the environment in front of device 2.2-10. The captured image may, if desired, include portions of the user's surroundings that are below, above, and to the sides of the area directly in front of device 2.2-10.
Kubo also discloses rear-facing sensors such as sensor 11.1.3.2.3-66 on main housing 11.1.3.2.3-12M, head-facing sensors mounted on strap 11.1.3.2.3-12T such as sensor 11.1.3.2.3-64, and/or other head presence sensors; these sensors may include cameras); wherein the first image capturing module includes a plurality of first image sensors disposed around the first eyeglass frame structure (Kubo, see 11.1.3-104a in figure 11.1.3-1 and paragraph [1764], where Kubo discloses that the first and second optical modules 11.1.3-104a-b can include respective display screens configured to project light toward the user's eyes when donning the HMD 11.1.3-100. In at least one example, the user can manipulate (i.e., depress and/or rotate) the button 11.1.3-114 to activate a positional adjustment of the optical modules 11.1.3-104a-b to match the inter-pupillary distance of the user's eyes. The optical modules 11.1.3-104a-b can also include one or more cameras or other sensors/sensor systems for imaging and measuring the IPD of the user such that the optical modules 11.1.3-104a-b can be adjusted to match the IPD), and the second image capturing module includes a plurality of second image sensors disposed around the second eyeglass frame structure (Kubo, see 11.1.3-104b in figure 11.1.3-1 and paragraph [1764], where Kubo discloses that the first and second optical modules 11.1.3-104a-b can include respective display screens configured to project light toward the user's eyes when donning the HMD 11.1.3-100. In at least one example, the user can manipulate (i.e., depress and/or rotate) the button 11.1.3-114 to activate a positional adjustment of the optical modules 11.1.3-104a-b to match the inter-pupillary distance of the user's eyes. The optical modules 11.1.3-104a-b can also include one or more cameras or other sensors/sensor systems for imaging and measuring the IPD of the user such that the optical modules 11.1.3-104a-b can be adjusted to match the IPD).
Kubo differs from the claimed subject matter in that Kubo does not explicitly disclose wherein the first image capturing module includes a plurality of first image sensors disposed on the first eyeglass frame structure, and the second image capturing module includes a plurality of second image sensors disposed on the second eyeglass frame structure. However, in an analogous art, Tzvieli discloses wherein the first image capturing module includes a plurality of first image sensors disposed on the first eyeglass frame structure (Tzvieli, see 233a, 232a, 231a, 232b, 231b, 232c and 230 in figure 2B and paragraph [0096], where Tzvieli discloses that FIG. 2B illustrates an embodiment of an eye tracking system on smartglasses that tracks both eyes, which utilizes multiple light sources and detectors to track each eye. The illustrated system includes the smartglasses 230 that have PSOG and VOG that may be used together to track movements of both eyes. Tracking of the left eye is done utilizing a PSOG that includes multiple light sources (emitters 231a and 231b in the figure) as well as multiple detectors (discrete photosensors 232a, 232b, and 232c). Additionally, video camera 233a may be utilized to capture images of the left eye, which can be used to determine positions and/or movements of the left eye), and the second image capturing module includes a plurality of second image sensors disposed on the second eyeglass frame structure (Tzvieli, see paragraph [0096], where Tzvieli discloses that FIG. 2B illustrates an embodiment of an eye tracking system on smartglasses that tracks both eyes, which utilizes multiple light sources and detectors to track each eye. The illustrated system includes the smartglasses 230 that have PSOG and VOG that may be used together to track movements of both eyes.
Tracking of the left eye is done utilizing a PSOG that includes multiple light sources (emitters 231a and 231b in the figure) as well as multiple detectors (discrete photosensors 232a, 232b, and 232c). Additionally, video camera 233a may be utilized to capture images of the left eye, which can be used to determine positions and/or movements of the left eye. In a similar fashion, tracking the right eye is done in this embodiment utilizing another PSOG that includes additional light sources (emitters 231c and 231d in the figure) as well as additional multiple detectors (discrete photosensors 232d, 232e, and 232f) and an additional video camera 233b that may be utilized to capture images of the right eye).

It would have been obvious to one of ordinary skill in the art to modify the invention of Kubo with Tzvieli. One would be motivated to modify Kubo by disclosing wherein the first image capturing module includes a plurality of first image sensors disposed on the first eyeglass frame structure, and the second image capturing module includes a plurality of second image sensors disposed on the second eyeglass frame structure as taught by Tzvieli, and thereby providing power efficient ways to obtain eye tracking data with head-mounted systems (Tzvieli, see paragraph [0004]).

As to Claim 7: Kubo in view of Tzvieli discloses the head-mounted device for tracking eyeballs according to claim 4, wherein, when the head-mounted device is optionally configured to be worn by a user, the first image sensors are allowed to be configured to capture a first eyeball image of a first eye of the user (Tzvieli, see 233a, 232a, 231a, 232b, 231b, 232c and 230 in figure 2B and paragraph [0096], where Tzvieli discloses that FIG. 2B illustrates an embodiment of an eye tracking system on smartglasses that tracks both eyes, which utilizes multiple light sources and detectors to track each eye.
The illustrated system includes the smartglasses 230 that have PSOG and VOG that may be used together to track movements of both eyes. Tracking of the left eye is done utilizing a PSOG that includes multiple light sources (emitters 231a and 231b in the figure) as well as multiple detectors (discrete photosensors 232a, 232b, and 232c). Additionally, video camera 233a may be utilized to capture images of the left eye, which can be used to determine positions and/or movements of the left eye), and the second image sensors are allowed to be configured to capture a second eyeball image of a second eye of the user (Tzvieli, see paragraph [0096], where Tzvieli discloses that FIG. 2B illustrates an embodiment of an eye tracking system on smartglasses that tracks both eyes, which utilizes multiple light sources and detectors to track each eye. The illustrated system includes the smartglasses 230 that have PSOG and VOG that may be used together to track movements of both eyes. Tracking of the left eye is done utilizing a PSOG that includes multiple light sources (emitters 231a and 231b in the figure) as well as multiple detectors (discrete photosensors 232a, 232b, and 232c). Additionally, video camera 233a may be utilized to capture images of the left eye, which can be used to determine positions and/or movements of the left eye. 
In a similar fashion, tracking the right eye is done in this embodiment utilizing another PSOG that includes additional light sources (emitters 231c and 231d in the figure) as well as additional multiple detectors (discrete photosensors 232d, 232e, and 232f) and an additional video camera 233b that may be utilized to capture images of the right eye);
wherein the first image sensors of the first image capturing module are at least divided into a first middle image sensor (Tzvieli, see figure 2A claim mapping below), a first left image sensor and a first right image sensor (Tzvieli, see figure 2A claim mapping below), the first middle image sensor is configured to capture a middle area image of the first eye (Tzvieli, see figure 2A claim mapping below), the first left image sensor is configured to capture a left area image of the first eye (Tzvieli, see figure 2A claim mapping below), and the first right image sensor is configured to capture a right area image of the first eye (Tzvieli, see figure 2A claim mapping below);
wherein the second image sensors of the second image capturing module are at least divided into a second middle image sensor (Tzvieli, see figure 2A claim mapping below), a second left image sensor and a second right image sensor (Tzvieli, see figure 2A claim mapping below), the second middle image sensor is configured to capture a middle area image of the second eye (Tzvieli, see figure 2A claim mapping below), the second left image sensor is configured to capture a left area image of the second eye (Tzvieli, see figure 2A claim mapping below), and the second right image sensor is configured to capture a right area image of the second eye (Tzvieli, see figure 2A claim mapping below);
wherein, the signal control module is configured to process the middle area image captured by the first middle image sensor (Tzvieli, see figure 2A claim mapping below), the left area image captured by the first left image sensor and the right area image captured by
the first right image sensor (Tzvieli, see figure 2A claim mapping below), thereby obtaining the first eyeball image of the first eye of the user (Tzvieli, see figure 2A claim mapping below);
wherein, the signal control module is configured to process the middle area image captured by the second middle image sensor (Tzvieli, see figure 2A claim mapping below), the left area image captured by the second left image sensor and the right area image captured by the second right image sensor (Tzvieli, see figure 2A claim mapping below), thereby obtaining the second eyeball image of the second eye of the user (Tzvieli, see paragraph [0096], where Tzvieli discloses that FIG. 2B illustrates an embodiment of an eye tracking system on smartglasses that tracks both eyes, which utilizes multiple light sources and detectors to track each eye. The illustrated system includes the smartglasses 230 that have PSOG and VOG that may be used together to track movements of both eyes. Tracking of the left eye is done utilizing a PSOG that includes multiple light sources (emitters 231a and 231b in the figure) as well as multiple detectors (discrete photosensors 232a, 232b, and 232c). Additionally, video camera 233a may be utilized to capture images of the left eye, which can be used to determine positions and/or movements of the left eye. In a similar fashion, tracking the right eye is done in this embodiment utilizing another PSOG that includes additional light sources (emitters 231c and 231d in the figure) as well as additional multiple detectors (discrete photosensors 232d, 232e, and 232f) and an additional video camera 233b that may be utilized to capture images of the right eye).

As to Claim 8: Kubo et al. discloses an eye tracking method (Kubo, see paragraph [0842], where Kubo discloses that input-output circuitry 2.2-22 may include sensors 2.2-16.
Sensors 2.2-16 may include, for example, three-dimensional sensors (e.g., three-dimensional image sensors such as structured light sensors that emit beams of light and that use two-dimensional digital image sensors to gather image data for three-dimensional images from dots or other light spots that are produced when a target is illuminated by the beams of light, binocular three-dimensional image sensors that gather three-dimensional images using two or more cameras in a binocular imaging arrangement, three-dimensional LIDAR (light detection and ranging) sensors, sometimes referred to as time-of-flight cameras or three-dimensional time-of-flight cameras, three-dimensional radiofrequency sensors, or other sensors that gather three-dimensional image data), cameras (e.g., two-dimensional infrared and/or visible digital image sensors), gaze tracking sensors (e.g., a gaze tracking system based on an image sensor and, if desired, a light source that emits one or more beams of light that are tracked using the image sensor after reflecting from a user's eyes), comprising: providing a head

Prosecution Timeline

Jun 24, 2024
Application Filed
Oct 27, 2025
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599503
Goggle lens
2y 5m to grant Granted Apr 14, 2026
Patent 12601932
COLOR-CHANGING EYEGLASS
2y 5m to grant Granted Apr 14, 2026
Patent 12602123
ELECTRONIC PEN
2y 5m to grant Granted Apr 14, 2026
Patent 12601912
AUGMENTED REALITY GAMING USING VIRTUAL EYEWEAR BEAMS
2y 5m to grant Granted Apr 14, 2026
Patent 12593977
Vision Screening Device Including Color Imaging
2y 5m to grant Granted Apr 07, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
86%
Grant Probability
92%
With Interview (+5.8%)
2y 0m
Median Time to Grant
Low
PTA Risk
Based on 818 resolved cases by this examiner. Grant probability derived from career allow rate.
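
The projection figures above follow directly from the examiner's career data shown on this page (704 grants out of 818 resolved cases, +5.8% interview lift). A minimal sketch of that arithmetic, assuming the dashboard simply rounds the career allow rate and adds the reported interview lift on top (the actual scoring model is not disclosed here):

```python
# Hypothetical reconstruction of the dashboard's headline projections.
# Inputs are the examiner career figures displayed on this page.
granted = 704            # career grants
resolved = 818           # career resolved cases
interview_lift = 0.058   # reported lift among interviewed cases (+5.8%)

allow_rate = granted / resolved                  # ~0.861 -> "86% Career Allow Rate"
grant_probability = round(allow_rate * 100)      # -> 86
with_interview = round((allow_rate + interview_lift) * 100)  # -> 92

print(grant_probability, with_interview)         # prints: 86 92
```

If the real model conditions on art unit, statute mix, or claim features, these numbers would differ; the sketch only reproduces the displayed values from the stated inputs.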
