Prosecution Insights
Last updated: April 18, 2026
Application No. 19/081,730

DYNAMIC SENSOR SELECTION FOR VISUAL INERTIAL ODOMETRY SYSTEMS

Final Rejection (§103, Double Patenting)
Filed: Mar 17, 2025
Examiner: KARIMI, PEGEMAN
Art Unit: 2623
Tech Center: 2600 (Communications)
Assignee: Snap Inc.
OA Round: 2 (Final)
Grant Probability: 83% (Favorable)
OA Rounds: 3-4
To Grant: 2y 4m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 83% (694 granted / 839 resolved; +20.7% vs TC avg; above average)
Interview Lift: +14.6% (moderate; based on resolved cases with interview)
Avg Prosecution: 2y 4m (typical timeline); 13 applications currently pending
Total Applications: 852 (career history, across all art units)

Statute-Specific Performance

§101: 0.9% (-39.1% vs TC avg)
§103: 58.0% (+18.0% vs TC avg)
§102: 25.7% (-14.3% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 839 resolved cases.
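The rates above are simple ratios over the examiner's resolved cases. As a minimal sketch of how the headline card is derived (the helper function and the way the "vs TC avg" delta is formed are illustrative assumptions; only the 694 granted / 839 resolved counts come from this page):

```python
# Recompute the headline examiner metric shown above from raw counts.
# The 694/839 figures come from this page; everything else is illustrative.

def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(694, 839)   # ~82.7%, shown rounded as 83% on the card
tc_avg = career - 20.7          # the card reports +20.7% vs the TC average
print(f"career allow rate: {career:.1f}%  implied TC average: {tc_avg:.1f}%")
```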

Office Action

§103 · Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 1 is objected to because of the following informalities: the phrase "using the" is duplicated in the limitation "… eyewear device using the using the subset sensors." in the last line of claim 1. Claims 2-9 are objected to because they depend upon claim 1. Appropriate correction is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-7 and 10-17 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-7 and 9-15 of U.S. Patent No. 11,789,266. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant application claim is broader in every aspect than the patent claim and is therefore an obvious variant thereof.
Claim comparison: current application 19/081,730 vs. U.S. Patent No. 11,789,266

Current application 19/081,730, claim 1:
A method for visual-inertial tracking with an eyewear device, the method comprising: monitoring a plurality of sensors of a visual inertial odometry system (VIOS), wherein each of the plurality of sensors provide input for determining a position of the eyewear device; determining a status of the VIOS; adjusting the plurality of sensors based on the determined status, wherein the adjusting comprises selecting a subset of the plurality of sensors and powering off the remaining sensors; and determining the position of the eyewear device using the using the subset of sensors.

U.S. Patent No. 11,789,266, claim 1:
A method for visual-inertial tracking with an eyewear device, the method comprising: monitoring a plurality of sensors of a visual inertial odometry system (VIOS), wherein each of the plurality of sensors provide input for determining a position of the eyewear device within an environment; determining a status of the VIOS; adjusting the plurality of sensors based on the determined status, wherein the adjusting comprises selecting a subset of the plurality of sensors and powering off the remaining sensors; and determining the position of the eyewear device within the environment using the using the subset of sensors.

Current application 19/081,730, claim 10:
An eyewear device with visual-inertial tracking, the eyewear device comprising: a visual inertial odometry system (VIOS) including a plurality of sensors, wherein the plurality of sensors include an inertial measurement unit (IMU) and a first camera, wherein each of the plurality of sensors provide input for determining a position of the eyewear device; a processor configured to determine a status of the VIOS, adjust the plurality of sensors based on the determined status, and determine the position of the eyewear device using the adjusted plurality of sensors; and a frame supporting the VIOS and the processor, the frame configured to be worn on the head of a user, wherein the VIOS is configured to capture images with the first camera, and the processor is configured to identify a physical environment of the eyewear device and determine the status of the VIOS based on the identified physical environment.

U.S. Patent No. 11,789,266, claim 9:
An eyewear device with visual-inertial tracking, the eyewear device comprising: a visual inertial odometry system (VIOS) including a plurality of sensors, wherein the plurality of sensors include an inertial measurement unit (IMU) and a first camera, wherein each of the plurality of sensors provide input for determining a position of the eyewear device within an environment; a processor configured to determine a status of the VIOS, adjust the plurality of sensors based on the determined status, and determine the position of the eyewear device within the environment using the adjusted plurality of sensors; and a frame supporting the VIOS and the processor, the frame configured to be worn on the head of a user, wherein the VIOS is configured to capture images with the first camera, and the processor is configured to identify a physical environment of the eyewear device and determine the status of the VIOS based on the identified physical environment.

Current application 19/081,730, claim 16:
A non-transitory computer-readable medium storing program code for visual-inertial tracking when executed by an eyewear device having a plurality of sensors, a processor, and a memory, the program code, when executed, is operative to cause an electronic processor to perform the steps of: monitoring a plurality of sensors of a visual inertial odometry system (VIOS), wherein each of the plurality of sensors provide input for determining a position of the eyewear device; determining a status of the VIOS; adjusting the plurality of sensors based on the determined status, wherein the adjusting comprises selecting a subset of the plurality of sensors and placing the remaining sensors in a lower power mode, wherein the lower power mode includes one or more of reducing frame rate, resolution, quality, or a combination thereof; wherein the determining the position of the eyewear device comprises determining the position of the eyewear device using the subset of sensors; and determining the position of the eyewear device using the adjusted plurality of sensors.

U.S. Patent No. 11,789,266, claim 14:
A non-transitory computer-readable medium storing program code for visual-inertial tracking when executed by an eyewear device having a plurality of sensors, a processor, and a memory, the program code, when executed, is operative to cause an electronic processor to perform the steps of: monitoring a plurality of sensors of a visual inertial odometry system (VIOS), wherein each of the plurality of sensors provide input for determining a position of the eyewear device within an environment; determining a status of the visual-inertial tracking system; adjusting the plurality of sensors based on the determined status, wherein the adjusting comprises selecting a subset of the plurality of sensors and placing the remaining sensors in a lower power mode, wherein the lower power mode includes one or more of reducing frame rate, resolution, quality, or a combination thereof; wherein the determining the position of the eyewear device comprises determining the position of the eyewear device within the environment using the subset of sensors; and determining the position of the eyewear device within the environment using the adjusted plurality of sensors.

Current application 19/081,730, claim 8:
The method of claim 7, wherein the VIOS status configuration options include at least a low power level and wherein the adjusting includes: placing the VIOS in the low power level when the motion parameter value and the uncertainty parameter value are low.

U.S. Patent No. 11,789,266, claim 8:
The method of claim 7, wherein the VIOS status configuration options include at least a low power level and a high power level and wherein the adjusting includes: placing the VIOS in the low power level when the motion parameter value and the uncertainty parameter value are low.

Current application 19/081,730, claim 9:
The method of claim 7, wherein the VIOS status configuration options include at least a high power level and wherein the adjusting includes: placing the VIOS in the high power level when the motion parameter values and the uncertainty parameter value are high.

U.S. Patent No. 11,789,266, claim 8:
The method of claim 7, wherein the VIOS status configuration options include at least a low power level and a high power level and wherein the adjusting includes: placing the VIOS in the high power level when the motion parameter values and the uncertainty parameter value are high.

Current application 19/081,730, claim 14:
The device of claim 13, wherein the VIOS status configuration options include at least a low power level and wherein the adjusting includes: placing the VIOS in the low power level when the motion parameter value and the uncertainty parameter value are low.

U.S. Patent No. 11,789,266, claim 13:
The device of claim 12, wherein the VIOS status configuration options include at least a low power level and a high power level and wherein the adjusting includes: placing the VIOS in the low power level when the motion parameter value and the uncertainty parameter value are low.

Current application 19/081,730, claim 15:
The device of claim 13, wherein the VIOS status configuration options include at least a high power level and wherein the adjusting includes: placing the VIOS in the high power level when the motion parameter values and the uncertainty parameter value are high.

U.S. Patent No. 11,789,266, claim 13:
The device of claim 12, wherein the VIOS status configuration options include at least a low power level and a high power level and wherein the adjusting includes: placing the VIOS in the high power level when the motion parameter values and the uncertainty parameter value are high.

Claim 2 of the current application is the same as claim 2 of U.S. Patent No. 11,789,266. Claim 3 of the current application is the same as claim 3 of U.S. Patent No. 11,789,266. Claim 4 of the current application is the same as claim 4 of U.S. Patent No. 11,789,266. Claim 5 of the current application is the same as claim 5 of U.S. Patent No. 11,789,266. Claim 6 of the current application is the same as claim 6 of U.S. Patent No. 11,789,266. Claim 7 of the current application is the same as claim 7 of U.S. Patent No. 11,789,266. Claim 11 of the current application is the same as claim 10 of U.S. Patent No. 11,789,266. Claim 12 of the current application is broader than claim 11 of U.S. Patent No. 11,789,266, and claim 11 covers all of the limitations of claim 12 of the current application. Claim 13 of the current application is the same as claim 12 of U.S. Patent No. 11,789,266. Claim 17 of the current application is the same as claim 15 of U.S. Patent No. 11,789,266, except that claim 15 additionally mentions "within the environment".

Claims 1, 3-5, and 10-15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 7-10, 13, 14, and 16 of U.S. Patent No. 12,265,222. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant application claim is broader in every aspect than the patent claim and is therefore an obvious variant thereof.

Claim comparison: current application 19/081,730 vs. U.S. Patent No. 12,265,222

Current application 19/081,730, claim 1:
A method for visual-inertial tracking with an eyewear device, the method comprising: monitoring a plurality of sensors of a visual inertial odometry system (VIOS), wherein each of the plurality of sensors provide input for determining a position of the eyewear device; determining a status of the VIOS; adjusting the plurality of sensors based on the determined status, wherein the adjusting comprises selecting a subset of the plurality of sensors and powering off the remaining sensors; and determining the position of the eyewear device using the using the subset of sensors.

U.S. Patent No. 12,265,222, claims 1-3:
Claim 1: A method for visual-inertial tracking with an eyewear device, the method comprising: monitoring a plurality of sensors of a visual inertial odometry system (VIOS), wherein each of the plurality of sensors provide input for determining a position of the eyewear device within an environment; determining a status of the VIOS. Claim 2: adjusting the plurality of sensors based on the determined status. Claim 3: the adjusting further comprises: selecting a subset of the plurality of sensors and powering off the remaining sensors, determining the position of the eyewear device within the environment using the subset of sensors.

Current application 19/081,730, claim 10:
An eyewear device with visual-inertial tracking, the eyewear device comprising: a visual inertial odometry system (VIOS) including a plurality of sensors, wherein the plurality of sensors include an inertial measurement unit (IMU) and a first camera, wherein each of the plurality of sensors provide input for determining a position of the eyewear device; a processor configured to determine a status of the VIOS, adjust the plurality of sensors based on the determined status, and determine the position of the eyewear device using the adjusted plurality of sensors; and a frame supporting the VIOS and the processor, the frame configured to be worn on the head of a user, wherein the VIOS is configured to capture images with the first camera, and the processor is configured to identify a physical environment of the eyewear device and determine the status of the VIOS based on the identified physical environment.

U.S. Patent No. 12,265,222, claim 10:
An eyewear device with visual-inertial tracking, the eyewear device comprising: a visual inertial odometry system (VIOS) including a plurality of sensors, wherein the plurality of sensors include an inertial measurement unit (IMU) and a first camera, wherein each of the plurality of sensors provide input for determining a position of the eyewear device; a processor configured to determine a status of the VIOS, adjust the plurality of sensors based on the determined status, and determine the position of the eyewear device within the environment using the adjusted plurality of sensors; and a frame supporting the VIOS and the processor, the frame configured to be worn on the head of a user, wherein the VIOS is configured to capture images with the first camera, and the processor is configured to identify a physical environment of the eyewear device and determine the status of the VIOS based on the identified physical environment.

Current application 19/081,730, claim 14:
The device of claim 13, wherein the VIOS status configuration options include at least a low power level and wherein the adjusting includes: placing the VIOS in the low power level when the motion parameter value and the uncertainty parameter value are low.

U.S. Patent No. 12,265,222, claim 16:
The device of claim 15, wherein the VIOS status configuration options include at least a low power level and a high power level and wherein the adjusting includes: placing the VIOS in the low power level when the motion parameter value and the uncertainty parameter value are low.

Current application 19/081,730, claim 15:
The device of claim 13, wherein the VIOS status configuration options include at least a high power level and wherein the adjusting includes: placing the VIOS in the high power level when the motion parameter values and the uncertainty parameter value are high.

U.S. Patent No. 12,265,222, claim 16:
The device of claim 15, wherein the VIOS status configuration options include at least a low power level and a high power level and wherein the adjusting includes: placing the VIOS in the high power level when the motion parameter values and the uncertainty parameter value are high.

Claim 3 of the current application is the same as claim 7 of U.S. Patent No. 12,265,222. Claim 4 of the current application is the same as claim 8 of U.S. Patent No. 12,265,222. Claim 5 of the current application is the same as claim 9 of U.S. Patent No. 12,265,222. Claim 11 of the current application is the same as claim 13 of U.S. Patent No. 12,265,222. Claim 12 of the current application is broader than claim 14 of U.S. Patent No. 12,265,222, and claim 14 covers all of the limitations of claim 12 of the current application. Claim 13 of the current application is the same as claim 14 of U.S. Patent No. 12,265,222.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
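Read together, the charted claims describe a concrete control loop: map a motion parameter value and an uncertainty parameter value to a VIOS status configuration option (claims 7-9), select a sensor subset for that status, and power off the remaining sensors (claim 1). A minimal sketch in Python; every sensor name, threshold, and status label here is an illustrative assumption, not anything disclosed by the application or the cited patents:

```python
# Illustrative sketch of the claimed dynamic sensor selection: derive a
# status from motion/uncertainty parameter values, select a subset of the
# VIOS sensors for that status, and power off the remaining sensors.
# All names, thresholds, and modes are hypothetical.

SENSORS = ["imu", "camera_left", "camera_right"]

def determine_status(motion: float, uncertainty: float, thr: float = 0.5) -> str:
    """Map the two parameter values to a VIOS status configuration option."""
    if motion < thr and uncertainty < thr:
        return "low_power"       # both values low -> low power level
    if motion >= thr and uncertainty >= thr:
        return "high_power"      # both values high -> high power level
    return "balanced"            # intermediate case, assumed for illustration

def adjust_sensors(status: str) -> dict:
    """Select a subset for the status; power off the remaining sensors."""
    subset = {"low_power": ["imu"],
              "balanced": ["imu", "camera_left"],
              "high_power": SENSORS}[status]
    return {s: ("on" if s in subset else "off") for s in SENSORS}

status = determine_status(motion=0.2, uncertainty=0.1)
plan = adjust_sensors(status)
print(status, plan)   # only the selected subset stays powered for tracking
```

The position determination step of the claims would then consume only the sensors left "on" in the returned plan.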
Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 4, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Li (U.S. Pub. No. 2017/0336439) in view of Min (U.S. Pub. No. 2006/0044265), and further in view of Berkovich (U.S. Pub. No. 2021/0133452).

As to claim 1, Li teaches a method (method of Fig. 5) for visual-inertial tracking with an eyewear device (the VIO is accurately estimating the motion of the device, and any fault that causes the device not to accurately estimate the motion causes subdetectors to independently detect different failure conditions, [0017], lines 1-7), the method comprising: monitoring (checking the feature tracks) a plurality of sensors (214 and 216) of a visual inertial odometry system (VIOS) ([0017], lines 1-4, and a component 430 for the visual sensor check subdetector and the inertial measurement unit sensor 440, [0034], lines 1-5), wherein each of the plurality of sensors (214 and 216) provide input for determining a position of the eyewear device (position in the environment 212; [0024], lines 1-6 and [0025], lines 1-5); and determining a status of the VIOS ([0034], lines 8-13).

Li does not mention adjusting the plurality of sensors. Min teaches adjusting the plurality of sensors based on the determined status (adjusting the sensing distance of detection signals based on the determined status of the menu key 426, [0051], lines 6-9), and determining the position of the eyewear device using the subset of sensors (sensor 222 having multiple sensors 232 and 234, Fig. 2, [0042], lines 4-12 and [0059], lines 6-12, and the position of the user's hand is determined for the eyewear device within the environment, as can be seen in Fig. 7, [0068], lines 1-11). Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to have added the adjusting of the sensors and the determining of the position of the eyewear device of Min to the visual-inertial tracking of Li, in order to provide an HMD information terminal combining an HMD with an information terminal so that a user's motion can be sensed and recognized as key input without any separate input device ([0014], lines 3-7).

The prior art references of Li and Min do not teach selecting a subset of the plurality of sensors and powering off the remaining sensors. Berkovich teaches wherein the adjusting (adjusting the volume of image data transmitted over the physical link, [0105], lines 1-2) comprises selecting a subset of the plurality of sensors and powering off the remaining sensors (powering off is interpreted as being disabled or not allowed to transmit sensor data; some of the sensors are not selected and some are selected to transmit sensor data, [0105], lines 8-13). Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to have added the selecting of some of the sensors and the disabling of others of Berkovich to the device of Li as modified by Min, because the controller 812 can select the subset of sensors to transmit image data at a higher resolution and/or at a higher frame rate, whereas the sensors that are not selected can transmit image data at a lower resolution and/or at a lower frame rate ([0105], lines 14-18).

As to claim 3, Li teaches that the plurality of sensors include an inertial measurement unit (IMU) (IMU sensor data subdetectors, [0034], lines 8-9) and a first camera ([0027], lines 9-11).

As to claim 4, Li teaches that the first camera (214) is a first visible light camera (the sensor is an imaging sensor, as can be seen in Fig. 2; the sensor works in an environment receiving visible light in order to capture wide angle imaging, [0023], lines 3-10), and wherein the plurality of sensors further includes one or more of a second visible light camera, a first depth camera, a second depth camera, another IMU, a radar system, or a GPS (the second sensor 216 is a narrow angle imaging sensor in the local environment 212, and therefore receives a second visible light for narrow angle imaging, [0023], lines 10-19, and [0024], lines 1-3 for cameras 214 and 216).

As to claim 16, Li teaches a non-transitory computer-readable medium (1000/1004) storing program code (1040) for visual-inertial tracking when executed by an eyewear device (100) (the VIO is accurately estimating the motion of the device, and any fault that causes the device not to accurately estimate the motion causes subdetectors to independently detect different failure conditions, [0017], lines 1-7) having a plurality of sensors (214 and 216), a processor (1004, Fig. 10), and a memory (RAM, [0045], lines 10-15), the program code ([0045], lines 15-19), when executed, being operative to cause an electronic processor to perform the steps of: monitoring (checking the feature tracks) a plurality of sensors (214 and 216) of a visual inertial odometry system (VIOS) ([0017], lines 1-4, and a component 430 for the visual sensor check subdetector and the inertial measurement unit sensor 440, [0034], lines 1-5), wherein each of the plurality of sensors (214 and 216) provide input for determining a position of the eyewear device (position in the environment 212; [0024], lines 1-6 and [0025], lines 1-5); and determining a status of the VIOS ([0034], lines 8-13).

Li does not mention adjusting the plurality of sensors. Min teaches adjusting the plurality of sensors based on the determined status (adjusting the sensing distance of detection signals based on the determined status of the menu key 426, [0051], lines 6-9), and determining the position of the eyewear device using the adjusted plurality of sensors (the position of the user's hand is determined for the eyewear device within the environment, as can be seen in Fig. 7, [0068], lines 1-11). Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to have added the adjusting of the sensors and the determining of the position of the eyewear device of Min to the visual-inertial tracking of Li, in order to provide an HMD information terminal combining an HMD with an information terminal so that a user's motion can be sensed and recognized as key input without any separate input device ([0014], lines 3-7). Min further teaches wherein the determining the position of the eyewear device comprises determining the position of the eyewear device using the subset of sensors (sensor 222 having multiple sensors 232 and 234, Fig. 2, [0042], lines 4-12 and [0059], lines 6-12, and the position of the user's hand is determined for the eyewear device within the environment, as can be seen in Fig. 7, [0068], lines 1-11).

The prior art references of Li and Min do not teach selecting a subset of the plurality of sensors and placing the remaining sensors in a lower power mode. Berkovich teaches that the adjusting (adjusting the volume of image data transmitted over the physical link, [0105], lines 1-2) comprises selecting a subset of the plurality of sensors and placing the remaining sensors in a lower power mode (the lower power mode is interpreted as being disabled or not allowed to transmit sensor data; some of the sensors are not selected and some are selected to transmit sensor data, [0105], lines 8-13), wherein the lower power mode includes one or more of reducing frame rate, resolution, quality, or a combination thereof (higher or lower frame rate, [0105], lines 13-18). Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to have added the selecting of some of the sensors and the disabling of others of Berkovich to the device of Li as modified by Min, because the controller 812 can select the subset of sensors to transmit image data at a higher resolution and/or at a higher frame rate, whereas the sensors that are not selected can transmit image data at a lower resolution and/or at a lower frame rate ([0105], lines 14-18).

Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Li (U.S. Pub. No. 2017/0336439) in view of Min (U.S. Pub. No. 2006/0044265).
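The "lower power mode" limitation of claim 16, as mapped to Berkovich above, throttles non-selected sensors (reduced frame rate and resolution) rather than disabling them outright. A hedged sketch of that adjustment; the concrete frame rates and resolutions are invented for illustration, since the cited paragraph only characterizes higher versus lower settings:

```python
# Sketch of a lower power mode that reduces frame rate and resolution for
# sensors outside the selected subset (cf. claim 16 and the adjustment
# Berkovich is cited for). The numeric settings are illustrative only.

FULL_MODE = {"fps": 60, "resolution": (1920, 1080)}
LOW_MODE = {"fps": 15, "resolution": (640, 360)}

def apply_modes(sensor_names, selected):
    """Selected sensors keep full settings; the rest drop to low power."""
    return {name: (FULL_MODE if name in selected else LOW_MODE)
            for name in sensor_names}

modes = apply_modes(["imu", "cam_left", "cam_right"], {"imu", "cam_left"})
print(modes["cam_right"])   # the non-selected camera runs throttled
```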
As to claim 10, Li teaches an eyewear device (100) with visual-inertial tracking (the VIO is accurately estimating the motion of the device and any fault that causes the device not to accurately estimate the motion causes subdetectors to independently detect different failure conditions, [0017], lines 1-7), the eyewear device (100) comprising: a visual inertial odometry system (VIOS) ([0017], lines 1-4 and a component 430 for visual sensor check subdetector and inertial measurement unit sensor 440, [0034], lines 1-5) including a plurality of sensors (214 and 216), wherein the plurality of sensors (214 and 216) include an inertial measurement unit (IMU) (IMU sensor data subdetectors, [0034], lines 8-9) and a first camera ([0027], lines 9-11), wherein each of the plurality of sensors (214 and 216) provide input for determining a position of the eyewear device (position within the environment 212; [0024], lines 1-6 and [0025], lines 1-5); a processor (1000/400) configured to determine a status of the VIOS ([0034], lines 8-13), and a frame supporting the VIOS and the processor (the frame of the HMD 100 protects the processor and other circuits within the frame, Fig. 
1A), the frame configured to be worn on the head of a user (the frame has two straps 118 and the HMD 100 is meant to immerse the user in whatever image is being displayed on the device, [0019], Li further teaches wherein the VIOS ([0017], lines 1-4 and a component 430 for visual sensor check subdetector and inertial measurement unit sensor 440, [0034], lines 1-5) is configured to capture images with the first camera (214/216, [0024], lines 1-6), and the processor is configured to identify a physical environment of the eyewear device ([0044], lines 6-11) and determine the status of the VIOS based on the identified physical environment (the display controller 1006, which is a part of the processor controls the display device to display the modified imagery at the display device, [0044], lines 1-6, wherein as can be seen in Fig. 2, the images captured by the elements 234, 242, and 238 determine an image of the environment 212 on the display, [0027], lines 9-14). Li does not mention adjusting the plurality of sensors, Min teaches adjust the plurality of sensors based on the determined status (adjusting the sensing distance of detection signals based on the determined status of the menu key 426, [0051], lines 6-9), and determine the position of the eyewear device using the adjusted plurality of sensors (the position of the user’s hand is determined for eyewear device within the environment as can be seen in Fig. 7, [0068], lines 1-11); Therefore it would have been obvious to one of ordinary skilled in the art at the time the invention was filed to have added the adjusting the sensor and determining the position of the eyewear device of Min to the visual inertial tracking of Li because to provide an HMD information terminal combining an HMF with an information terminal so that a user’s motion can be sensed and recognized as key input without any separate input device, [0014], lines 3-7. 
As to claim 11, Li teaches the first camera (214) is a first visible light camera (the sensor is an imaging sensor, as can be seen in Fig. 2, the sensor works in an environment receiving visible light in order to capture a wise angle imaging, [0023], lines 3-10), and wherein the plurality of sensors further includes one or more of a second visible light camera, a first depth camera, a second depth camera, another IMU, a radar system, or a GPS (the second sensor 216 is a narrow-angle imaging sensor in the local environment 212, and therefore receives a second visible light for narrow angle imaging, [0023], lines 10-19). Allowable Subject Matter Claims 2, 5-9, 12-15, 17-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and overcome obvious type double patenting. Claim 2 is objected to because the prior art references do not teach selecting a sampling rate for one of the plurality of sensors based on the determined status; and sampling the one of the plurality of sensors at the selected sampling rate, wherein the determining the position of the eyewear device comprises determining the position of the eyewear device using the one of the plurality of sensors at the selected sampling rate. Claim 5 is objected to because the prior art references do not teach identifying a physical environment of the eyewear device; comparing the identified physical environment to a prior physical environment; identifying new information in the identified physical environment; and determining the status of the VIOS based on the new information in the identified physical environment. 
Claim 7 is objected to because the prior art references do not teach determining at least one of a motion parameter value or an uncertainty parameter value of the eyewear device; and mapping the determined at least one of the motion parameter value or the uncertainty parameter value to one of a plurality of VIOS status configuration options.

Claim 12 is objected to because the prior art references do not teach compare the identified physical environment to a prior physical environment; identify new information in the identified physical environment; and determine the status of the VIOS based on the new information in the identified physical environment.

Claim 13 is objected to because the prior art references do not teach determining at least one of a motion parameter value or an uncertainty parameter value of the eyewear device; and mapping the determined at least one of the motion parameter value or the uncertainty parameter value to one of a plurality of VIOS status configuration options.

Claim 17 is objected to because the prior art references do not teach selecting a sampling rate for one of the subset of sensors based on the determined status; and sampling the one of the plurality of sensors at the selected sampling rate, wherein the determining the position of the eyewear device comprises determining the position of the eyewear device using the one of the subset of sensors at the selected sampling rate.

Claim 18 is objected to because the prior art references do not teach capturing images with the first camera; identifying a physical environment of the eyewear device; comparing the identified physical environment to a prior physical environment; identifying new information in the identified physical environment; and determining the status of the visual-inertial tracking system based on the new information in the identified physical environment.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Zhang (U.S. Pub. No. 2019/0037210) teaches a naked eye three-dimensional display device.

Inquiry

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PEGEMAN KARIMI, whose telephone number is (571) 270-1712. The examiner can normally be reached Monday-Friday, 9:00 am-4:00 pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chanh Nguyen, can be reached at (571) 272-7772. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PEGEMAN KARIMI/
Primary Examiner, Art Unit 2623

Prosecution Timeline

Mar 17, 2025: Application Filed
Mar 02, 2026: Non-Final Rejection (§103, §DP)
Mar 12, 2026: Response Filed
Apr 09, 2026: Final Rejection (§103, §DP) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598881: Display Panel and Display Apparatus (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597314: METHOD FOR A GAMING SYSTEM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12594495: A Computer System and Computer Implemented Method for Gaming in A Virtualisation Environment (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586495: DISPLAY DEVICE INCLUDING CURRENT DETECTION DEVICE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12581824: DISPLAY DEVICE (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 97% (+14.6%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 839 resolved cases by this examiner. Grant probability derived from career allow rate.
