Prosecution Insights
Last updated: April 19, 2026
Application No. 18/455,664

SIMULTANEOUS EYE-TRACKING CALIBRATION AND VISUAL ACUITY CHECK

Status: Final Rejection (§103)
Filed: Aug 25, 2023
Examiner: PICHLER, MARIN
Art Unit: 2872
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Pixieray OY
OA Round: 2 (Final)

Grant Probability: 63% (72% with interview)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
PTA Risk: Moderate

Examiner Intelligence

Career Allow Rate: 63% (411 granted / 650 resolved; -4.8% vs TC avg)
Interview Lift: +8.7% among resolved cases with interview (moderate, ~+9% lift)
Typical Timeline: 3y 0m average prosecution
Career History: 711 total applications across all art units; 61 currently pending

Statute-Specific Performance

§101: 0.2% (-39.8% vs TC avg)
§103: 41.1% (+1.1% vs TC avg)
§102: 26.9% (-13.1% vs TC avg)
§112: 25.0% (-15.0% vs TC avg)

Tech Center averages are estimates; based on career data from 650 resolved cases.
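Taken at face value, the four deltas all point at the same baseline: if each delta is simply (examiner rate − Tech Center average), every statute implies a TC average of 40%, suggesting the dashboard compares against a single TC-wide estimate rather than per-statute averages. A quick check (the dict layout and the additive model are assumptions, not anything the dashboard documents):

```python
# Examiner's statute-specific rates (%) and their reported deltas vs the TC average,
# as shown on the dashboard above.
examiner = {"101": 0.2, "103": 41.1, "102": 26.9, "112": 25.0}
delta = {"101": -39.8, "103": +1.1, "102": -13.1, "112": -15.0}

# Assuming delta = examiner_rate - tc_average, recover the implied TC averages.
implied_tc_avg = {k: round(examiner[k] - delta[k], 1) for k in examiner}
print(implied_tc_avg)  # every statute implies the same 40.0% baseline
```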

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Response to Amendment

The amendment filed on 01/26/2026 has been entered. Claims 1-16 remain pending in the application. Claims 1 and 12 have been amended by the Applicant.

Examiner Notes

The Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

Drawings

The Applicant's drawings as submitted are acceptable for examination purposes.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-16 are rejected under 35 U.S.C. 103 as being unpatentable over Shin et al. (hereafter Shin, of record), US 20230176401 A1, in view of Raviv et al. (hereafter Raviv, of record), US 20220296093 A1.

In regard to independent claims 1 and 12, Shin teaches (see Figs. 1-9) an optical apparatus, and applying mutatis mutandis to a method implemented by the optical apparatus (i.e. device, e.g. AR 1000, and method, e.g. 10, 20, for correcting the vision of a user and performing calibration; abstract, paragraphs [01, 05-12, 29, 32-49, 51-55, 57-72, 80-93, 101-106, 113-121, 128-142, 144-153], e.g. Figs. 1-3, 5-9), comprising: eye-tracking means (gaze tracking sensor 1500, paragraphs [38-41, 57-62, 68-72], Figs. 2-3); and at least one processor (processor 1800 performing the method, paragraphs [48, 67-87], Figs. 2, 8-9) configured to (method comprising):

(a) display (displaying) a given image on a screen (display vision measurement chart with character objects on screen display 1300, e.g. waveguide/combiner screen 1320, paragraphs [41-44, 48, 113-121, 128-142], Figs. 1A-3, 5-8), the given image representing at least one optotype (i.e. at least one character of the vision test chart, paragraphs [29, 41-44, 54, 113-121, 128-142]);

(b) obtain (obtaining) a given user input (1000, 1800 obtains input of user, Figs.
1A, 8-9) selected from one of: as and when the user is able to clearly view a given optotype represented in the given image, or that the user is unable to clearly view the given optotype, while using the optical apparatus (i.e. receive at least one answer input by the user viewing the virtual image clearly, as the character in the image appears less or only moderately blurry to the user, who may thus correctly identify the image as depicted in Figs. 6 and 7, and obtain/receive at least one answer input by the user viewing the virtual image not clearly, as the displayed character in the image appears significantly blurry to the user, who thus incorrectly identifies the image as depicted in Fig. 5; paragraphs [40-44, 113-121, 128-142], Figs. 1A-2, 5-8);

(c) determine (determining), based on the given user input (user input), whether the user is able to clearly view the given optotype represented in the given image (i.e. 1000, 1800 determining whether the user inputs a correct or an incorrect answer, paragraphs [40-42; 75-80, 113-121, 128-142]);

(d) when it is determined that the user is able to clearly view the given optotype (when the user inputs a correct answer, e.g. paragraphs [40-44; 75-80, 113-121]), collect information (store in memory 1700 of device 1000, paragraphs [44, 64]) indicative of at least one of: a size of the given optotype that the user is able to clearly view, a position of the given optotype and/or optotype features in the given image, an appearance of features of the user's eyes when the user is able to clearly view the given optotype that is displayed at said position (i.e. as 1000, for correct answer(s), stores information for the display position of each character, e.g. type "0", "B", its depth and size corresponding to visual acuity, and eye gaze direction(s), as displayed in the chart, paragraphs [40-44, 70-72, 118, 121, 122]);

(e) when it is determined that the user is not able to clearly view the given optotype (incorrect input by the user for the displayed character, e.g. paragraphs [40-44; 75-80, 113-121, 124]), collect information (store in memory 1700 of device 1000, paragraphs [44, 64]) indicative of at least one of: a size of the given optotype that the user is not able to clearly view, a position of the given optotype and/or optotype features in the given image, an appearance of the features of the user's eyes when the user is not able to clearly view the given optotype that is displayed at said position (i.e. as 1000, also for incorrect answer(s), stores information for the display position of each character, e.g. type "0", "B", "E", its depth and size corresponding to visual acuity, and eye gaze direction(s), as displayed in the chart, paragraphs [40-44, 70-72, 77, 89, 118, 119, 123-124]); and

repeat (repeating) steps (a) to (e) using a next image (as 1800 successively displays images, e.g. 62, 64, 66, collecting user input and storing data, e.g. paragraphs [40-42; 75-80, 113-121, 128-142, 152], Figs. 1A, 5-9), wherein the next image represents at least one next optotype (i.e. a different character, e.g. a second character, in a successive image, Figs. 1A, 5-7) whose size is a size of the given optotype that was represented in the given image (i.e. as characters have a preset size, e.g. of the first character, and a successive character is displayed, e.g. in a size corresponding to visual acuity of 1.0, paragraphs [40-41, 70-71, 81, 82]); and

(f) process the collected information (i.e. stored information on character type, position, depth, and user eye gaze directions, paragraphs [40-44, 70-72, 77, 89, 118-124]) to determine one or more optical powers specific to the user and to calibrate the eye-tracking means simultaneously (as 1000 with 1800 determines the refractive power of the varifocal lens with respect to the user while performing calibration of gaze tracking sensor 1500, see paragraphs [38-41, 128], Figs. 1A, 8-9).

But Shin is silent that the next optotype size is larger than a size of the given optotype that was represented in the given image (i.e. as the second/successive character may have a preset or particular size, or correspond to a specific visual acuity, e.g. paragraphs [40-41, 70-71, 81, 82]; however, providing optotype characters in different, e.g. larger (or smaller), sizes to improve determination of the visual acuity of a user using a head-mounted AR device is known in the art). However, Raviv teaches, in the same field of invention, an eye examination method and apparatus therefor (i.e. a system with a virtual reality headset, display, and adjustable lens assembly for selectively adjusting optical parameters, see Figs. 2-11, abstract, paragraphs [01, 23-44, 109-116]), and further teaches that the next optotype size is larger than a size of the given optotype that was represented in the given image (i.e. as the character element size, e.g. 304, is increased when a response is incorrect (or decreased when it is correct), see paragraphs [106-116], Fig. 11, thus providing detection of the lowest detectable threshold of a visual signal under given visual conditions, set by the lens assembly, and further increasing the accuracy of the eye examination, paragraphs [116, 106]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to adapt and modify the device, with its processor, and the method performed by the device and processor of Shin to include an eye examination where the next optotype size is larger than a size of the given optotype that was represented in the given image, according to the teachings of Raviv, in order to provide detection of the lowest detectable threshold of a visual signal under given visual conditions and further increase the accuracy of the eye examination (see Raviv, paragraphs [116, 106]).

Regarding claims 2 and 13, the Shin-Raviv combination teaches the invention as set forth above, and Shin teaches (see Figs.
1-9) the at least one processor (110, 206) is configured to (method comprises), when it is determined that the user is able to clearly view the given optotype, repeat steps (a) to (e) using another next image (i.e. repeating the above steps with a next image with a successive, second character, e.g. paragraphs [40-42; 75-80, 113-121, 128-142, 152], Figs. 1A, 5-9), wherein the another next image (successive, second character in a successive image, Figs. 1A, 5-9) represents at least one other next optotype (i.e. a different character, see Figs. 1A, 5-7) whose size is smaller than the size of the given optotype that was represented in the given image (as, due to the combination, the character element size, e.g. 304, is decreased when a response is correct, thus providing detection of the lowest detectable threshold of a visual signal under given visual conditions, set by the lens assembly, and further increasing the accuracy of the eye examination, paragraphs [116, 106]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to adapt and modify the device, with its processor, and the method performed by the device and processor of Shin to include an eye examination where the next optotype is decreased in size from the size of the given optotype that was represented in the given image, according to the teachings of Raviv, in order to provide detection of the lowest detectable threshold of a visual signal under given visual conditions and further increase the accuracy of the eye examination (see Raviv, paragraphs [116, 106]).

Regarding claims 3 and 14, the Shin-Raviv combination teaches the invention as set forth above, and Shin further teaches (see Figs. 1-9) an active optical element per eye (varifocal lens 1350 for each eye, e.g. 1350L, 1350R, controlled by 1800, paragraphs [48, 50, 67, 69-71, 78-81, 90-98, 104, 107], Figs. 2, 3-4), wherein the at least one processor (1800) is configured to (method comprises): control the active optical element to produce at least one of the one or more optical powers (i.e. as 1800 controls 1350 for different optical powers, see e.g. paragraphs [48, 50, 67, 69-71, 78-81, 90-98]); repeat steps (a) to (e) (as 1800 performs the above steps, e.g. paragraphs [40-42; 75-80, 113-121, 128-142, 152], Figs. 1A, 5-9); and process newly-collected information to further calibrate the eye-tracking means (as 1000 with 1800 stores information from determining the refractive power with respect to the user and performs calibration of gaze tracking sensor 1500 with successive images, second images/characters, and stored information, see paragraphs [38-49, 51-55, 57-72, 80-93, 101-106, 113-121, 128-142, 144-153], Figs. 1A, 2, 8-9).

Regarding claims 4 and 15, the Shin-Raviv combination teaches the invention as set forth above, and Shin teaches (see Figs. 1-9) that the at least one processor (1800) is configured to (method comprises) adjust the at least one of the one or more optical powers when it is determined that the user is not able to clearly view a given optotype (i.e. as device 1000 may adjust the refractive power of the varifocal lens 1350 before displaying a next character, in case of incorrect input by the user, e.g. paragraphs [40-42; 75-80, 113-121, 128-142]).

Regarding claim 5, the Shin-Raviv combination teaches the invention as set forth above, and Shin teaches (see Figs. 1-9) that the at least one optotype comprises at least one of: a Landolt C optotype, a Snellen optotype, a Sloan optotype, an Early Treatment Diabetic Retinopathy Study (ETDRS) optotype, an HOTV letter, an Allen figure, an alphabet character, a number, a symbol, a design, a freeform shape (i.e. as the character is a number or an alphabetical character, e.g. "B", "E", "O", or a symbol, "1", e.g. paragraphs [111-126, 71], Figs. 1A-B, 5-7).

Regarding claim 6, the Shin-Raviv combination teaches the invention as set forth above, and Shin teaches (see Figs. 1-9) that the user views the screen (1300, 1320) from a predefined distance lying in a range of 25 centimeters to 750 centimeters (i.e. as 1300 with 1310, 1320 projects virtual images on a virtual screen for the user, and normal human vision focuses from 25 cm and can see objects in such a range, see paragraphs [02, 04, 29-32, 44, 52-54, 71], e.g. Fig. 1A).

Regarding claim 7, the Shin-Raviv combination teaches the invention as set forth above, and Shin teaches (see Figs. 1-9) that a given user input is obtained via at least one input means communicably coupled to the at least one processor (as user input is given by input device 1100 and microphone 1200, coupled to 1800, paragraphs [48-50, 69, 73, 85, 132], Fig. 3).

Regarding claim 8, the Shin-Raviv combination teaches the invention as set forth above, and Shin further teaches (see Figs. 1-9): an active optical element per eye (varifocal lens 1350 for each eye, e.g. 1350L, 1350R, controlled by 1800, paragraphs [48, 50, 67, 69-71, 78-81, 90-98, 104, 107], Figs. 2, 3-4); a frame (110, temples 113, 190L, R) employed to hold the active optical element per eye (110 holding 1350, paragraphs [102-105], Fig. 3); and at least one input means (1100, 1200), mounted on a temple of the frame, that is to be used by the user for providing the given user input (i.e. as input devices 1100, 1200, part of 1000 with 1350, coupled to 1800, which is on 110 with 130, 190, paragraphs [48-50, 69, 73, 85, 99-106, 132], as depicted in Figs. 2, 3).

Regarding claims 9 and 16, the Shin-Raviv combination teaches the invention as set forth above, and Shin further teaches (see Figs. 1-9) an active optical element per eye (varifocal lens 1350 for each eye, e.g. 1350L, 1350R, controlled by 1800, paragraphs [48, 50, 67, 69-71, 78-81, 90-98, 104, 107], Figs. 2, 3-4), wherein the at least one processor (1800) is further configured to (method comprises): process eye-tracking data, collected by the eye-tracking means (as 1800, 1000 is configured to process input data from 1500, paragraphs [40-41, 67-71, 73-85]), to determine at least one of: gaze directions of the user's eyes, a gaze point of the user (i.e. as gaze tracking sensor 1500 with 1800 determines gaze direction(s) and gaze point(s) of the user as gaze information, paragraphs [33, 36, 40-41, 59, 67-71, 73-85]); determine a given optical depth at which the user is gazing (i.e. the depth of the presented character, by 1800, while collecting gaze tracking data and determining depth values of the first character, paragraphs [67-71]), based on at least one of: an angle of convergence of the gaze directions of the user's eyes, an interpupillary distance of the user's eyes, the gaze point of the user (i.e. as gaze tracking data includes gaze points and direction(s), paragraphs [33, 36, 40-41, 59, 67-71]); and select at least one of the one or more optical powers to be produced by the active optical element, based on the given optical depth at which the user is gazing (i.e. as 1800 determines/selects the (first/second) refractive power of the varifocal lens, abstract, paragraphs [39-45, 55, 68-71, 78-81, 90-99]).

Regarding claim 10, the Shin-Raviv combination teaches the invention as set forth above, and Shin teaches (see Figs. 1-9) a system (i.e. as the device, e.g. AR 1000, performing the method, e.g. 10, 20, for correcting the vision of a user and performing calibration, abstract, paragraphs [01, 05-12, 32-49, 51-55, 57-72, 80-93, 101-106, 113-121, 128-142, 144-153], e.g. Figs. 1-3, 5-9) comprising: the optical apparatus according to claim 1 (1000, see claim 1 above); the screen that is to be employed to display images (screen display 1300, e.g. waveguide/combiner screen 1320, of 1000 displaying character images of 50, 60, 70, e.g. paragraphs [41-44, 48, 113-121, 128-142], Figs.
1A-3, 5-8); and at least one input means that is to be employed to obtain user input (1100, 1200, GUI, part of 1000 with 1350, coupled to 1800, paragraphs [48-50, 69, 73, 85, 99-106, 132], as depicted in Figs. 2, 3).

Regarding claim 11, the Shin-Raviv combination teaches the invention as set forth above, and Shin teaches (see Figs. 1-9) the system (i.e. as the device, e.g. AR 1000, with its method) of claim 10, wherein the screen (1300, 1320, displaying virtual image(s)) is one of: a display of a device of the user or a medical professional, a projection surface of a projector (i.e. as device 1000 is, e.g., an augmented reality device of the user, Figs. 1A, 3, paragraphs [01, 05-12, 32-49, 51-55, 57-72, 80-93, 101-106, 113-121, 128-142, 144-153]).

Response to Arguments

Applicant's arguments filed in the Remarks dated 01/14/2026 with respect to independent claims 1 and 12 have been fully considered, but they are not persuasive. Specifically, Applicant argues in the Remarks that the cited prior art of Shin, alone or in combination with the cited prior art of Raviv, does not disclose the newly amended feature of claims 1 and 12, namely (1) "obtain a given user input selected from one of: as and when the user is able to clearly view a given optotype represented in the given image, or that the user is unable to clearly view the given optotype, while using the optical apparatus," because the cited portions of Shin allegedly disclose receiving voice inputs that correctly or incorrectly identify characters, whereas the amended limitation is directed to whether an image is clear to a user as opposed to whether the user is capable of correctly identifying a particular character.

The Examiner respectfully disagrees. With respect to the above issue, and as set forth in the rejections of claims 1 and 12 above, Shin teaches most limitations and, in combination with the cited prior art of Raviv, teaches and renders obvious all limitations of claims 1 and 12. Specifically, Shin teaches a processor configured to (and a method comprising) obtain (obtaining) a given user input (1000, 1800 obtains input of user, Figs. 1A, 8-9) selected from one of: as and when the user is able to clearly view a given optotype represented in the given image, or that the user is unable to clearly view the given optotype, while using the optical apparatus (i.e. receive at least one answer input by the user viewing the virtual image clearly, as the character in the image appears less or only moderately blurry to the user, who may thus correctly identify the image as depicted in Figs. 6 and 7, and obtain/receive at least one answer input by the user viewing the virtual image not clearly, as the displayed character in the image appears significantly blurry to the user, who thus incorrectly identifies the image as depicted in Fig. 5; paragraphs [40-44, 113-121, 128-142], Figs. 1A-2, 5-8). Shin teaches a device, e.g. AR 1000, and a method, e.g. 10, 20, for correcting the vision of a user and performing calibration (abstract, paragraphs [01, 05-12, 29, 32-49, 51-55, 57-72, 80-93, 101-106, 113-121, 128-142, 144-153], e.g. Figs. 1-3, 5-9), where the user's vision is tested and evaluated. In such an apparatus and process, the user views images/characters/optotype features on a screen and provides user input based on his or her viewing, as the objects can be seen/viewed as clear or blurry, and thus correctly or incorrectly identified. The identification is based solely on vision, not some other sensory input. Hence, Shin clearly teaches the above limitation, where the user is able to provide user input depending on seeing the image clearly or not, i.e. less blurry, moderately blurry, or significantly blurry. Applicant's argument is not found persuasive. The same answer applies equally to independent claim 12. No additional substantial arguments were provided after page 7 of the Remarks dated 01/14/2026.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARIN PICHLER, whose telephone number is (571) 272-4015. The examiner can normally be reached Monday-Friday, 8:30am-5:00pm.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Thomas K Pham, can be reached at (571) 272-3689. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARIN PICHLER/
Primary Examiner, Art Unit 2872
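Stripped of claim language, steps (a) through (f) describe a simple psychophysical staircase: display an optotype, record whether the user reports it as clear, log the trial either way, then show a larger optotype after a "not clear" response (the feature the rejection draws from Raviv) or a smaller one after a "clear" one (claims 2 and 13), and finally process the log into an acuity estimate. The sketch below is illustrative only: `AcuityStaircase`, `Trial`, `vergence_depth`, and the fixed additive step are invented names and assumptions, not code from either reference. The `vergence_depth` helper shows one standard geometric way to obtain the optical depth that claims 9 and 16 derive from the convergence angle and interpupillary distance.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Trial:
    size: float   # optotype size shown (arbitrary units)
    clear: bool   # user reported the optotype as clear

@dataclass
class AcuityStaircase:
    """Staircase over optotype size: grow after 'not clear', shrink after 'clear'."""
    size: float = 1.0        # starting optotype size
    step: float = 0.1        # fixed additive step (an assumption for illustration)
    min_size: float = 0.05
    trials: list = field(default_factory=list)

    def run(self, is_clear, n_trials: int = 20):
        for _ in range(n_trials):
            clear = is_clear(self.size)                   # steps (b)/(c): user input
            self.trials.append(Trial(self.size, clear))   # steps (d)/(e): collect
            if clear:
                # claims 2/13: next optotype smaller after a 'clear' response
                self.size = max(self.min_size, self.size - self.step)
            else:
                # Raviv: next optotype larger after a 'not clear' response
                self.size += self.step
        return self.threshold()

    def threshold(self):
        # step (f), crudely: smallest size the user still reported as clear
        seen = [t.size for t in self.trials if t.clear]
        return min(seen) if seen else None

def vergence_depth(ipd: float, convergence_angle: float) -> float:
    """Optical depth from the convergence angle of the two gaze directions.

    ipd: interpupillary distance (metres); convergence_angle: radians.
    Simple symmetric-fixation geometry, as in claims 9/16.
    """
    return (ipd / 2.0) / math.tan(convergence_angle / 2.0)

# A user who sees optotypes of size >= 0.45 clearly converges near that size.
print(round(AcuityStaircase().run(lambda s: s >= 0.45), 2))  # 0.5
```

In the claimed apparatus each `Trial` would also carry the optotype position and the eye-feature appearance captured by the eye tracker, so the same log drives both the refractive estimate and the eye-tracking calibration simultaneously.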

Prosecution Timeline

Aug 25, 2023: Application Filed
Sep 22, 2025: Non-Final Rejection (§103)
Jan 14, 2026: Response Filed
Jan 22, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591106: CAMERA MODULE (2y 5m to grant; granted Mar 31, 2026)
Patent 12578545: CAMERA MODULE (2y 5m to grant; granted Mar 17, 2026)
Patent 12578544: OPTICAL ELEMENT DRIVING MECHANISM (2y 5m to grant; granted Mar 17, 2026)
Patent 12572035: MOISTURE-RESISTANT EYE WEAR (2y 5m to grant; granted Mar 10, 2026)
Patent 12554099: IMAGING OPTICAL LENS SYSTEM, IMAGE CAPTURING UNIT AND ELECTRONIC DEVICE (2y 5m to grant; granted Feb 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 63%
With Interview: 72% (+8.7%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 650 resolved cases by this examiner. Grant probability derived from career allow rate.
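The headline figures are internally consistent: 411 grants out of 650 resolved cases gives the 63% career allow rate, and adding the +8.7% interview lift yields the 72% with-interview projection. A minimal check, assuming the model is simply additive in percentage points (an assumption; the dashboard does not document its formula):

```python
granted, resolved = 411, 650
allow_rate = 100 * granted / resolved       # career allow rate, in percent (~63.2)
interview_lift = 8.7                        # percentage points, per the dashboard

with_interview = allow_rate + interview_lift  # ~71.9, displayed as 72%
print(round(allow_rate), round(with_interview))  # 63 72
```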
