Prosecution Insights
Last updated: April 19, 2026
Application No. 17/915,442

REAL-TIME IR FUNDUS IMAGE TRACKING IN THE PRESENCE OF ARTIFACTS USING A REFERENCE LANDMARK

Status: Non-Final OA (§102, §103)
Filed: Sep 28, 2022
Examiner: FELIX, BRADLEY OBAS
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Carl Zeiss Meditec AG
OA Round: 3 (Non-Final)
Grant Probability: 12% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 3y 6m
With Interview: 78%

Examiner Intelligence

Career Allow Rate: 12% (2 granted / 17 resolved; -50.2% vs TC avg)
Interview Lift: +66.7% (strong lift among resolved cases with interview)
Typical Timeline: 3y 6m avg prosecution; 29 currently pending
Career History: 46 total applications across all art units

Statute-Specific Performance

§101: 8.5% (-31.5% vs TC avg)
§103: 62.9% (+22.9% vs TC avg)
§102: 14.3% (-25.7% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 17 resolved cases
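The per-statute deltas above are simple differences from the Tech Center average, so the implied TC baseline can be recovered directly. A quick consistency check (values copied from the table above):

```python
# Examiner rates per statute and deltas vs. the Tech Center average,
# both copied from the table above (percent).
examiner = {"101": 8.5, "103": 62.9, "102": 14.3, "112": 14.3}
delta    = {"101": -31.5, "103": 22.9, "102": -25.7, "112": -25.7}

# Implied Tech Center average: examiner rate minus the reported delta.
tc_avg = {k: round(examiner[k] - delta[k], 1) for k in examiner}
print(tc_avg)  # every statute implies the same 40.0% TC baseline
```

All four statutes back out to the same ~40% baseline, which suggests the deltas were computed against a single Tech Center-wide estimate rather than per-statute averages.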

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. The application has pending claims 1-12 and 14-26. Claims 6-12 and 21-25 have been withdrawn, claim 13 is canceled, and new claim 26 has been added.

Response to Arguments

Applicant’s arguments with respect to claims 1-5 and 14-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. In the new ground of rejection, Isogai discloses the newly amended limitations of claim 1, and Isogai, in combination with Tomatsu, discloses the limitations of newly added claim 26. Based on these facts, this action is made NON-FINAL.

Claim Objections

Claim 1 is objected to because of the following informalities: “defining one or more auxiliary points in the reference image; within a select live image…”. It would be clearer to move the newly added limitation of “wherein the reference-anchor point…” to the step of defining a reference-anchor point in the reference image. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 5 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Naoki Isogai JP-2010012109-A, hereinafter Isogai.

As per claim 1, Isogai discloses an eye tracking method, comprising:

capturing multiple images of the eye, including a reference image and one or more live images (see Isogai top of page 2/21, configuration (1), wherein an optical system to acquire eye images is disclosed. It is clarified in Isogai at the top of page 4/21 that the images, outside of the reference image, are observed and taken in real time);

defining a reference-anchor point in the reference image (see Isogai middle of page 4/21 and FIG. 5, wherein a center target area 101a is set [corresponding to defining] in the reference image K. This target area contains feature points, such as blood vessels);

defining one or more auxiliary points in the reference image (see Isogai page 4/21, wherein the other target areas 101b-101e around the center are also set);

within a select live image, wherein the reference-anchor point is at least one of a bundle of veins, a lesion or an optical nerve head (see Isogai middle of page 4/21, wherein the target areas, which include feature points such as blood vessels, appear in the fundus images, i.e., reference image K and the subsequent frame, i.e., live image. The reference-anchor point is the target area 101a in FIGS. 5-6. It is clarified in Isogai at the top of page 4/21 that the images are observed and taken in real time):

a) identifying an initial matching point that matches the reference-anchor point (see Isogai page 4/21 and FIG.
5, wherein the control unit determines the target area 101a’ in the fundus image (the subsequent frame) corresponding to the target area 101a in the reference frame);

b) searching for a match of a select auxiliary point within a region based on the location of the select auxiliary point relative to the reference-anchor point (see Isogai page 4/21 and FIG. 5, wherein the surrounding target areas, i.e., auxiliary points, 101b-101e are matched to 101b’-101e’ in the subsequent frame, all of which are equidistant from the center points 101a and 101a’ respectively); and

correcting for a tracking error between the reference image and the select live image based on their matched points (see Isogai bottom of page 4/21 and FIG. 6, wherein the size parameter, which is the averaged lengths h1 to h4, is handled as focus deviation, i.e., tracking error. The optical lens is then moved to adjust [corresponding to correcting] the focus deviation of the fundus image with respect to the reference image K).

As per claim 5, Isogai discloses the method of claim 1, wherein the captured multiple images are of the retina of the eye (see Isogai page 4/21 and FIG. 5, wherein the retina portion of the eye images is disclosed).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-3, 14, 16, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Isogai in further view of Nobuhiro Tomatsu US-9161690-B2, hereinafter Tomatsu.

As per claim 2, Isogai discloses that a plurality of the auxiliary points are defined in the reference image (see Isogai page 4/21 and FIG. 5, wherein the auxiliary points 101b-101e in reference image K are disclosed), and that the searching for a match of a select auxiliary point is part of a search for a match of auxiliary points in the select live image (see Isogai page 4/21 and FIG. 5, wherein the auxiliary points 101b-101e are matched to the same target areas, 101b’-101e’, in the subsequent frame image).

However, Isogai fails to explicitly disclose, whereas Tomatsu teaches: in response to the number of matched auxiliary points in the select current image not being greater than a predefined minimum, the select current image is not corrected for tracking error (see Tomatsu col. 5 lines 54-67, col. 6 lines 1-8, and FIG. 2, wherein a similarity is calculated using the searching points, i.e., auxiliary points, between the second fundus image, which is the currently acquired image as disclosed in col. 5 lines 28-31, and the reference image using a threshold, i.e., tracking error. If the similarity is at the threshold or below, the matching is considered completed and the image is not corrected.
Otherwise, the image is corrected, as further disclosed in col. 7 lines 11-17). Further, while Tomatsu discloses a current image, it would have been obvious to one of ordinary skill in the art to substitute Tomatsu’s current image with a live, or real-time, image as previously disclosed by Isogai. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Isogai’s method using Tomatsu’s teaching by including a predefined minimum for tracking error applied to the auxiliary points in order to limit extraneous points from being the matching auxiliary points.

As per claim 3, Isogai, in combination with Tomatsu, discloses the method of claim 2, wherein the predefined minimum is greater than half the plurality of the auxiliary points (see Tomatsu col. 5 lines 54-67, col. 6 lines 37-41, and FIGS. 3A-3C, wherein the threshold can be arbitrarily set by an operator).

As per claim 14, Isogai fails to explicitly disclose, whereas Tomatsu teaches: the best matched candidate anchor point is the candidate whose match in the select current image has the highest confidence (see Tomatsu col. 5 lines 54-67, col. 6 lines 1-8, and FIGS. 2 and 3A-3C, wherein the similarity threshold is used to match the current fundus image to the template image using the characteristic point, i.e., candidate anchor point, which can be a crossing of blood vessels, as disclosed in col. 5 lines 2-26. See further Tomatsu col. 8 lines 48-50, wherein the threshold can be arbitrarily set). While Tomatsu discloses a current image, it would have been obvious to one of ordinary skill in the art to substitute Tomatsu’s current image with a live, or real-time, image as previously disclosed by Isogai.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Isogai’s method using Tomatsu’s teaching by including a highest confidence for the matching candidate anchor point in order to guarantee a best match between images.

As per claim 16, Isogai fails to explicitly disclose, whereas Tomatsu teaches: searching for matches of the plurality of candidate anchor points in a plurality of the current images (see Tomatsu col. 5 lines 34-53 and FIGS. 3A-3C, wherein the current fundus image is searched and the characteristic points, i.e., candidate anchor points, are matched with the template images); and designating as the reference-anchor point the candidate anchor point best matched in the plurality of select current images (see Tomatsu col. 5 lines 54-67 and FIG. 2, wherein the matching is considered complete when the similarity of the searching points between the template image and the current fundus image surpasses a similarity threshold, and the matched image becomes the new template image). While Tomatsu discloses a current image, it would have been obvious to one of ordinary skill in the art to substitute Tomatsu’s current image with a live, or real-time, image as previously disclosed by Isogai. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Isogai’s method using Tomatsu’s teaching by including the searching for, and designating of, a candidate anchor point as the best matched point in the select live image in order to more accurately track the anchor point within the select live image.

As per claim 26, Isogai fails to explicitly disclose, whereas Tomatsu teaches: identifying a plurality of candidate anchor points in the reference image (see Tomatsu col.
5 lines 1-20, wherein the characteristic points, i.e., candidate anchor points, are selected within the first fundus image, or template image, i.e., reference image); searching for matches of the plurality of candidate anchor points in a plurality of the current images (see Tomatsu col. 5 lines 34-53 and FIGS. 3A-3C, wherein the current fundus image is searched and the points are matched with the template images); and designating as the reference-anchor point the candidate anchor point best matched in the plurality of select current images (see Tomatsu col. 5 lines 54-67 and FIG. 2, wherein the matching is considered complete when the similarity of the searching points between the template image and the current fundus image surpasses a similarity threshold, and the matched image becomes the new template image). While Tomatsu discloses a current image, it would have been obvious to one of ordinary skill in the art to substitute Tomatsu’s current image with a live, or real-time, image as previously disclosed by Isogai. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Isogai’s method using Tomatsu’s teaching by including the searching for, and designating of, a candidate anchor point as the best matched point in the select live image in order to more accurately track the anchor point within the select live image.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Isogai in further view of BAGHERINIA HOMAYOUN WO-2021219773-A1, hereinafter HOMAYOUN.

As per claim 4, Isogai fails to explicitly disclose, whereas HOMAYOUN teaches: the method of claim 1, wherein the reference image and live images are infrared images (see HOMAYOUN page 10/78, wherein the reference IR images and live IR images are disclosed).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Isogai’s method using HOMAYOUN’s teaching by including IR imaging for the live and reference images in order to acquire more distinct images of the eye.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Isogai in further view of Vikas Getmkura CN-108846390-B, hereinafter Vikas.

As per claim 15, Isogai fails to explicitly disclose, whereas Vikas teaches: the method of claim 1, wherein the best matched candidate anchor point is the candidate whose match is found most quickly (see Vikas page 3/67, wherein the interest point, i.e., candidate anchor point, is matched with minimized time. See also Vikas page 24/67, wherein the input images, i.e., the candidates which contain the anchor point, are compared against the reference template and the most similar one reaches a matching time threshold). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Isogai’s method using Vikas’s teaching by including how quickly the candidate point is matched in order to decrease the time needed to find the best candidate anchor point.

Claims 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Isogai in view of Tomatsu, in further view of Vikas.

As per claim 17, Isogai in combination with Tomatsu fails to explicitly disclose, whereas Vikas teaches: the method of claim 16, wherein the best matched candidate anchor point is the candidate anchor point for which a match is most often found in a series of consecutive select current images (see Vikas bottom of page 3/67, wherein the candidate point matching is calculated using matching scores. See further the bottom of page 4/67, wherein the candidate point is designated according to the summed matching score of all the N states.
See also page 9/67, wherein the images can be in video mode, i.e., consecutive current images). While Vikas discloses a current image, it would have been obvious to one of ordinary skill in the art to substitute Vikas’s current image with a live, or real-time, image as previously disclosed by Isogai. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Isogai’s method using Vikas’s teaching by including the best matched anchor point amongst a plurality of select live images in order to more consistently acquire the best candidate anchor point throughout the select live images.

As per claim 18, Isogai, Tomatsu, and Vikas disclose the method of claim 17, wherein the series is a predefined number of live images (see Isogai page 4/21, wherein a predetermined number of frames is used. These images are taken in real time, i.e., live images).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Bradley Obas Felix, whose telephone number is (703) 756-1314. The examiner can normally be reached M-F 8-5 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent Rudolph, can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRADLEY O FELIX/
Examiner, Art Unit 2671

/VINCENT RUDOLPH/
Supervisory Patent Examiner, Art Unit 2671
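The claim 1 mapping above describes a two-stage scheme: match an anchor patch globally, then search each auxiliary patch only in a small window predicted by its offset from the anchor, and skip correction when too few auxiliary points confirm the match (the claim 2 behavior). Below is a minimal sketch of that scheme, not Isogai's or the applicant's actual implementation: all names are hypothetical, the tracking error is reduced to a pure translation, and matching is brute-force sum-of-squared-differences.

```python
import numpy as np

def match_template(image, template):
    """Brute-force template match: (row, col) of the minimum
    sum-of-squared-differences location."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = np.sum((image[r:r + th, c:c + tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

def track(reference, live, anchor, aux_points, patch=5, search=3, min_matches=2):
    """Estimate the (dy, dx) shift of `live` relative to `reference`.

    a) Match the anchor patch over the whole live image.
    b) Search each auxiliary patch only in a small window around the
       location predicted by its offset from the anchor (claim 1 step b).
    If fewer than `min_matches` auxiliary points confirm the shift,
    return None instead of correcting (the claim 2 behavior)."""
    def patch_at(img, point):
        r, c = point
        return img[r:r + patch, c:c + patch]

    ar, ac = match_template(live, patch_at(reference, anchor))
    shift = (ar - anchor[0], ac - anchor[1])

    confirmed = 0
    for (pr, pc) in aux_points:
        er, ec = pr + shift[0], pc + shift[1]      # expected live location
        r0, c0 = max(er - search, 0), max(ec - search, 0)
        window = live[r0:r0 + patch + 2 * search, c0:c0 + patch + 2 * search]
        if window.shape[0] < patch or window.shape[1] < patch:
            continue
        wr, wc = match_template(window, patch_at(reference, (pr, pc)))
        if (r0 + wr, c0 + wc) == (er, ec):
            confirmed += 1
    return shift if confirmed >= min_matches else None

# Synthetic check: a live frame that is the reference shifted by (2, 3).
rng = np.random.default_rng(0)
reference = rng.random((40, 40))
live = np.roll(reference, (2, 3), axis=(0, 1))
print(track(reference, live, anchor=(10, 10), aux_points=[(20, 8), (6, 22)]))
# -> (2, 3)
```

The windowed auxiliary search in step b) is what keeps per-frame cost low: only the anchor patch requires a full-image search, while each auxiliary point is confirmed within a few pixels of its predicted location.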

Prosecution Timeline

Sep 28, 2022
Application Filed
Feb 18, 2025
Non-Final Rejection — §102, §103
Jul 17, 2025
Response Filed
Aug 25, 2025
Examiner Interview Summary
Aug 25, 2025
Applicant Interview (Telephonic)
Oct 03, 2025
Final Rejection — §102, §103
Nov 11, 2025
Response after Non-Final Action
Jan 05, 2026
Request for Continued Examination
Jan 27, 2026
Response after Non-Final Action
Mar 24, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592076
OBJECT IDENTIFICATION SYSTEM AND METHOD
2y 5m to grant • Granted Mar 31, 2026
Patent 12340540
AN IMAGING SENSOR, AN IMAGE PROCESSING DEVICE AND AN IMAGE PROCESSING METHOD
2y 5m to grant • Granted Jun 24, 2025
Study what changed to get past this examiner, based on the 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 12%
With Interview: 78% (+66.7%)
Median Time to Grant: 3y 6m
PTA Risk: High
Based on 17 resolved cases by this examiner. Grant probability derived from career allow rate.
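The page does not state how the interview lift is computed; a common convention is the percentage-point difference in allow rate between interviewed and non-interviewed resolved cases. The per-group counts below are purely hypothetical (only 2/17 and the resulting +66.7% appear on the page), chosen to show arithmetic that reproduces figures of the same shape:

```python
# Career allow rate: stated above as 2 granted of 17 resolved.
career_rate = 2 / 17
print(round(career_rate * 100))  # -> 12 (shown as "12%")

# Interview lift as a percentage-point difference in allow rates.
# NOTE: these per-group counts are hypothetical; the dashboard publishes
# only the resulting +66.7% figure, not the underlying split.
with_interview = 2 / 3        # e.g. 2 of 3 interviewed cases allowed
without_interview = 0 / 14    # e.g. 0 of 14 non-interviewed cases allowed
lift_points = (with_interview - without_interview) * 100
print(round(lift_points, 1))  # -> 66.7
```

Under this reading the "+66.7%" is a percentage-point gap, not a ratio, which is why it can exceed the 12% career rate itself.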
