Prosecution Insights
Last updated: April 18, 2026
Application No. 18/701,239

SYSTEM AND METHOD FOR AUTOMATIC CALIBRATION AND ALIGNMENT OF FUNDUS CAMERA DEVICE

Status: Non-Final Office Action (§103)
Filed: Apr 14, 2024
Examiner: CHOUDHURY, MUSTAK
Art Unit: 2872
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Oivi AS
OA Round: 1 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (670 granted / 795 resolved), +16.3% vs TC avg, above average
Interview Lift: a strong +22.8% higher allow rate for resolved cases with an interview versus without
Typical Timeline: 2y 9m average prosecution; 25 applications currently pending
Career History: 820 total applications across all art units
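The headline figures above are simple arithmetic over the examiner's career counts. A minimal sketch of how they can be reproduced is below; note that reading "interview lift" as the percentage-point gap between with-interview and without-interview allow rates is an assumption about how the dashboard defines it, not something the page documents.

```python
# Reproducing the dashboard's headline figures from the raw counts shown above.
# Assumption (not documented on the page): "interview lift" is the
# percentage-point gap between the allow rate with an interview and without.

granted, resolved = 670, 795

career_allow_rate = granted / resolved               # ~0.843, displayed as "84%"

with_interview = 0.99                                # "99% With Interview"
interview_lift = 0.228                               # "+22.8% Interview Lift"
without_interview = with_interview - interview_lift  # ~0.762 under this reading

print(f"career allow rate:          {career_allow_rate:.1%}")
print(f"implied rate w/o interview: {without_interview:.1%}")
```

Under that reading, the 99% with-interview figure implies an allow rate of roughly 76% for cases prosecuted without an interview, which is consistent with the 84% blended career rate.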

Statute-Specific Performance

§101: 1.1% (-38.9% vs TC avg)
§102: 19.7% (-20.3% vs TC avg)
§103: 54.5% (+14.5% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)
Comparisons are against the Tech Center average estimate. Based on career data from 795 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Preliminary Amendment

The Preliminary Amendment that was filed on 04/16/2024 is entered.

Claim Objections

Claims 1, 5 and 6 are objected to for the following reasons: All reference numerals in parenthesis should be crossed out. The term “an error” is unclear; the specification fails to define the term clearly. Please revise and clarify the claim language as appropriate. Claims 5 and 6 are hybrid claims that claim both apparatus and method steps in the same claim. Appropriate correction is required.

In claim 5, the recitation “means adapted for carrying out all the steps of a method of validating alignment of an image sensor of a camera” should read “a system for validating alignment of an image sensor of a camera comprising, validating….” for clarity. In claim 6, the recitation “a computer program comprising instructions for carrying out all the steps of a method, when said computer program is executed on a computer system, the method being a method of validating alignment of an image sensor of a camera” should read “a computer program is executed on a computer system comprising validating alignment of an image sensor of a camera” for clarity.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 
112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “means adapted for carrying out all the steps of a method of validating alignment of an image sensor of a camera” in claim 5. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. 
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. 
Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1-6 are rejected under 35 U.S.C. 103 as being unpatentable over FUKUMA et al. (US PUB 2014/0320809; herein after “Fukuma”) in view of Lin et al. (US PUB 2016/0073092; herein after “Lin”). Regarding claim 1, Fukuma teaches a method of validating alignment (para. [0189]) of an image sensor (CCD 35, FIG. 1) of a camera (300A, FIG. 4A) and an illumination projection (MIP (Maximum Intensity Projection, para. [0136]) for use in a system (10) for automatically aligning a camera device (a camera unit 2) which includes the camera (i.e., illumination optical system 10 irradiates an illumination light to the fundus Ef. The imaging optical system 30 guides a fundus reflected light of the illumination light to imaging devices (CCD image sensors, para. [0062] … automatic alignment is possible using anterior eye cameras 300, para. [0071], as shown in FIG. 1-4, also see para. [0083 to [0086]), wherein a centre of the image sensor is identified (for example, the center of the fame, para. [0156]); and the camera device (2) also includes a stereo camera (i.e., it is possible to perform stereo photographing (via a stereo camera) using two imaging parts, para. [0269]), in addition to the camera (300), and the camera device (2) includes an illumination source (11) wherein a centre of an illumination projection is identified in an image from the image sensor or from the stereo camera (for example, the observation light source 11 or the light source unit 101) onto an eye fundus, para. [0262], also see para. [0060], [170] and [0190], FIG. 1), the method comprising the steps of (S1-S42, FIG. 6-7B): aligning, using the stereo camera (para. 
[0269]), the centre of the illumination projection with the centre of an eye calibration target (i.e., the alignment optical system 50 generates a target (alignment target) for position matching of the optical system (examination optical system) with respect to the eye E (alignment), para. [0069], also see para. [0028], [0112] and [0142]) wherein the eye calibration target is included on one of the planes of a multi-planar calibration target (i.e., (Multi-Planar Reconstruction) and the like, and includes a process of extracting picture elements (voxels) located at a designated cross section para. [0138]: and the pair of the alignment targets is displayed over a prescribed position of the frame (for example, the center of the frame), para. [0156]), wherein each plane of the multi-planar calibration target is embedded with a plurality of fiducial markers (e.g., depth marker 1102 para. [0235], FIG. 8I), wherein the eye calibration target is a marker (i.e., The alignment optical system is configured to project an alignment target on the anterior eye part, para. [0270]), and wherein a size of the eye calibration target is approximately equal to the size of a pupil of an eye (i.e., the photographing magnifications and/or the display magnifications are/is different, the apparatus can adjust the display size of the interest area information based on the difference(s) in the magnification(s) para. [0304], also see para. [0156], [0270]); capturing an image (3000, FIG. 8I) of the eye calibration target using the camera (300) and detecting the position of the eye calibration target (i.e., an arithmetic and control unit 200 analyzes the position of the alignment target to move the optical system (automatic alignment), automatic alignment is possible using anterior eye cameras 300, para. [0071], also see para. 
[0068], [0117]); calculating an error (displacement) between the centre of the captured image and the centre of the detected eye calibration target (i.e., a first displacement calculator configured to calculate a displacement between the eye and the optical system by analyzing two images substantially simultaneously obtained by the two imaging parts, and the controller is configured to carry out the first control based on the displacement calculated by the first displacement calculator, para. [0027], also see para. [0146], FIG. 5A-5B A, [0155]); and validating if the error is within a predefined threshold (para. [019]-[0130]); wherein the method further comprises calibration of a position of illumination and the eye calibration target, including steps of: aligning the centre of the illumination projection to the centre of the eye calibration target (i.e., alignment includes the action of aligning the light axis of the optical system of the apparatus with respect to the axis of an eye (xy alignment), as well as the action of adjusting the distance between the eye and the optical system of the apparatus (z alignment), para. [0006], also see para. [0156]); determining an optimal projection of illumination by moving a movable platform in a perpendicular direction to the plane of a centre of the eye calibration target (i.e., movement direction of the examination optical system (a movable platform) is set in the direction perpendicular to the OCT scanning direction in Auto-Z, para. [0218], also see para. [0219-[0121], FIG. 9A-9B), wherein the stereo camera, the camera and the illumination source are mounted on the moveable platform (i.e., in FIG. 4A and FIG. 4B, symbol 410 indicates a base in which a drive system such as an optical system driver 2A, etc. and arithmetic and control circuits are accommodated (mounted), para. [0061]; also see para. [0161], FIG. 3-4 and para. [0205], FIG. 
8C); and identifying optimal coordinates of the eye calibration target for capturing an image by the camera and saving it in a database (para. [0103]-[0104], FIG. 3 and [0109]), wherein the optimal coordinates are the coordinates of the eye calibration target as calculated by the stereo camera (i.e., process may be carried out by comparing the coordinates corresponding to the predetermined area and the coordinates of the characteristic point, para. [0160]; and the process corresponds to the automatic alignment based on the image obtained by the stereo photographing, para. [0269], also see para. [0299]), and also the coordinates of the pupil where a captured image of a fundus of the eye is of the highest quality (i.e., positional information (e.g. coordinates) of the interest image region in the eye image; relative positional information indicating the position of the interest image region relative to the image region corresponding to a prescribed site of the eye E (e.g. pupil), para. [0299]; and automatically carrying out the image quality optimization, para. [0241], also see para. [0060], [0135]). Fukuma teaches all limitations except for explicit teaching of aligning the centre of the illumination projection to the centre of the eye calibration target, and each plane of the multi-planar calibration target is embedded with a plurality of fiducial markers, wherein the eye calibration target is a marker. However, in a related field of endeavor Lin teaches a calibration method of a stereo camera, wherein the stereo camera includes a left eye image capture unit and a right eye image capture unit. 
The calibration method includes transmitting a group of system parameters of the stereo camera to a server; downloading a calibration pattern corresponding to the group of system parameters from the server; calculating a plurality of camera calibration parameters corresponding to the stereo camera according to the calibration pattern; and executing an image rectification operation on the left eye image capture unit and the right eye image capture unit according to the plurality of camera calibration parameters, respectively, para. [0007]. In addition, when the second display 110 displays the left eye image 140 corresponding to the calibration pattern 130, the processor 1026 can utilize 4 alignment markers 1102 included in a viewfinder (not shown in FIG. 1) of the second display 110 and images in the left eye image 140 corresponding to the 4 position markers 1062 to make the calibration pattern 130 be located within an image calibration range of the stereo camera 102, wherein a number of alignment markers of the second display 110 is equal to a number of position markers of the first display 106, para. [0029], FIG. 1. In Step 212, the processor 1026 can generate projection parameters corresponding to the calibration pattern 130 according to the left eye image 140 and the right eye image corresponding to the calibration pattern 130, respectively, wherein in one embodiment of the present invention, the present invention utilizes a plurality of feature points of the left eye image 140 and the right eye image corresponding to the calibration pattern 130 to generate the projection parameters corresponding to the calibration pattern 130. As shown in FIG. 1, the calibration pattern 130 has 9 feature points FP1-FP9, para. [0030], FIG. 1-2. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Fukuma such that image calibration method of a stereo camera, where a processor can utilize 4 alignment markers on a second display and images in the left eye image corresponding to the 4 position markers to make the calibration pattern be located within an image calibration range of the stereo camera as taught by Lin, for the purpose of having optical accuracy of a stereo camera that is more convenient, lower cost, and without professional skills for the user. Regarding claim 2, Fukuma according to claim 1 further teaches determining one or more contour properties of the illumination projection, wherein the contour properties are captured when calibration of the optimal position of the camera device with the illumination projection is successfully performed, and wherein an image in which the contour properties are determined is taken from the image sensor or from the stereo camera (i.e., the pupil is substantially circular; therefore, it is possible to specify the contour of the pupillary region, specify the center position of this contour (an approximate circle or an approximate ellipse thereof), para. [0144] and [0159]). Regarding claim 3, Fukuma according to claim 2 further teaches the contour properties of the illumination projection comprise at least one of centre, radius, and circularity (i.e., the pupil is substantially circular; therefore, it is possible to specify the contour of the pupillary region, specify the center position of this contour (an approximate circle or an approximate ellipse thereof), para. [0144] and [0159]). 
Regarding claim 4, Fukuma according to claim 1 further teaches the identifying optimal coordinates step uses the illumination projection (MIP) in the aligning step (i.e., the image processor 230 executes a rendering process (such as volume rendering and MIP (Maximum Intensity Projection)) on this volume data to form image data of a pseudo three-dimensional image taken from a specific view direction (e.g., identifying optimal coordinates), para. [0136], FIG. 3). Regarding claim 5, Fukuma in view of Lin teaches all of the claimed limitations of the instant invention as outlined above with respect to independent claim 1 and further teaches: a system (an ophthalmologic apparatus 1, as shown in FIG. 1, para. [0059]) comprising means adapted for carrying out all the steps of a method of validating alignment of an image sensor of a camera (130) and an illumination projection for use in a system for automatically aligning a camera device (121) which includes the camera (130), wherein a centre of the image sensor is identified; and the camera device (121) also includes a stereo camera (120, 125), in addition to the camera (130), and the camera device (121) includes an illumination source (127) wherein a centre of an illumination projection is identified in an image from the image sensor or from the stereo camera, the method comprising the steps of: aligning, using the stereo camera (120, 125), the centre of the illumination projection with the centre of an eye calibration target (115) wherein the eye calibration target is included on one of the planes of a multi-planar calibration target (110), wherein each plane of the multi-planar calibration target is embedded with a plurality of fiducial markers, wherein the eye calibration target is a marker, and wherein a size of the eye calibration target is approximately equal to the size of a pupil of an eye; capturing an image of the eye calibration target using the camera (130) and detecting the position of the eye calibration 
target; calculating an error between the centre of the captured image and the centre of the detected eye calibration target; and validating if the error is within a predefined threshold; wherein the method further comprises calibration of a position of illumination and the eye calibration target, including steps of: aligning the centre of the illumination projection to the centre of the eye calibration target; determining an optimal projection of illumination by moving a movable platform (132) in a perpendicular direction to the plane of a centre of the eye calibration target, wherein the stereo camera, the camera and the illumination source are mounted on the moveable platform; and identifying optimal coordinates of the eye calibration target for capturing an image by the camera (130) and saving it in a database, wherein the optimal coordinates are the coordinates of the eye calibration target as calculated by the stereo camera, and also the coordinates of the pupil where a captured image of a fundus of the eye is of the highest quality (all limitations of claim 5 are interpreted and rejected for the same reason as set forth in claim 1 above). Fukuma teaches all limitations except for explicit teaching of aligning the centre of the illumination projection to the centre of the eye calibration target, and determining an optimal projection of illumination by moving a movable platform in a perpendicular direction to the plane of a centre of the eye calibration target, wherein the stereo camera, the camera and the illumination source are mounted on the moveable platform. However, in a related field of endeavor Lin teaches an image calibration system. The image calibration system includes a stereo camera. 
The stereo camera has a left eye image capture unit, a right eye image capture unit, and a processor, wherein the processor transmits a group of system parameters of the stereo camera to a server, and downloads a calibration pattern corresponding to the group of system parameters to a display from the server after the server receives the group of system parameters. The processor or the server calculates a plurality of camera calibration parameters corresponding to the stereo camera according to the calibration pattern, and the processor executes an image rectification operation on the left eye image capture unit and the right eye image capture unit according to the plurality of camera calibration parameters, respectively, para. [0008]. In addition, when the second display 110 displays the left eye image 140 corresponding to the calibration pattern 130, the processor 1026 can utilize 4 alignment markers 1102 included in a viewfinder (not shown in FIG. 1) of the second display 110 and images in the left eye image 140 corresponding to the 4 position markers 1062 to make the calibration pattern 130 be located within an image calibration range of the stereo camera 102, wherein a number of alignment markers of the second display 110 is equal to a number of position markers of the first display 106, para. [0029], FIG. 1. In Step 212, the processor 1026 can generate projection parameters corresponding to the calibration pattern 130 according to the left eye image 140 and the right eye image corresponding to the calibration pattern 130, respectively, wherein in one embodiment of the present invention, the present invention utilizes a plurality of feature points of the left eye image 140 and the right eye image corresponding to the calibration pattern 130 to generate the projection parameters corresponding to the calibration pattern 130. As shown in FIG. 1, the calibration pattern 130 has 9 feature points FP1-FP9, para. [0030], FIG. 1-2. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Fukuma such that image calibration system includes a stereo camera, where a processor can utilize 4 alignment markers on a second display and images in the left eye image corresponding to the 4 position markers to make the calibration pattern be located within an image calibration range of the stereo camera as taught by Lin, for the purpose of having optical accuracy of a stereo camera that is more convenient, lower cost, and without professional skills for the user. Regarding claim 6, Fukuma in view of Lin teaches all of the claimed limitations of the instant invention as outlined above with respect to independent claim 1 and further teaches: a computer program comprising instructions for carrying out all the steps of a method, when said computer program is executed on a computer system (an ophthalmologic apparatus 1, as shown in FIG. 1 … The arithmetic and control unit 200 is provided with a computer that executes various arithmetic processes, control processes, and so on, FIG. 3para. 
[0059]), the method being a method of validating alignment of an image sensor of a camera (130) and an illumination projection for use in a system for automatically aligning a camera device (121) which includes the camera (130), wherein a centre of the image sensor is identified; and the camera device (121) also includes a stereo camera (120, 125), in addition to the camera (130), and the camera device (121) includes an illumination source (127) wherein a centre of an illumination projection is identified in an image from the image sensor or from the stereo camera, the method comprising the steps of: aligning, using the stereo camera (120, 125), the centre of the illumination projection with the centre of an eye calibration target (115) wherein the eye calibration target is included on one of the planes of a multi-planar calibration target (110), wherein each plane of the multi-planar calibration target is embedded with a plurality of fiducial markers, wherein the eye calibration target is a marker, and wherein a size of the eye calibration target is approximately equal to the size of a pupil of an eye; capturing an image of the eye calibration target using the camera (130) and detecting the position of the eye calibration target; calculating an error between the centre of the captured image and the centre of the detected eye calibration target; and validating if the error is within a predefined threshold; wherein the method further comprises calibration of a position of illumination and the eye calibration target, including steps of: aligning the centre of the illumination projection to the centre of the eye calibration target; determining an optimal projection of illumination by moving a movable platform (132) in a perpendicular direction to the plane of a centre of the eye calibration target, wherein the stereo camera, the camera and the illumination source are mounted on the moveable platform; and identifying optimal coordinates of the eye calibration target 
for capturing an image by the camera (130) and saving it in a database, wherein the optimal coordinates are the coordinates of the eye calibration target as calculated by the stereo camera, and also the coordinates of the pupil where a captured image of a fundus of the eye is of the highest quality (all limitations of claim 6 are interpreted and rejected for the same reason as set forth in claim 1 above). Fukuma teaches all limitations except for explicit teaching of aligning the centre of the illumination projection to the centre of the eye calibration target, and determining an optimal projection of illumination by moving a movable platform in a perpendicular direction to the plane of a centre of the eye calibration target, wherein the stereo camera, the camera and the illumination source are mounted on the moveable platform. However, in a related field of endeavor Lin teaches an image calibration system. The image calibration system includes a stereo camera. The stereo camera has a left eye image capture unit, a right eye image capture unit, and a processor, wherein the processor transmits a group of system parameters of the stereo camera to a server, and downloads a calibration pattern corresponding to the group of system parameters to a display from the server after the server receives the group of system parameters. The processor or the server calculates a plurality of camera calibration parameters corresponding to the stereo camera according to the calibration pattern, and the processor executes an image rectification operation on the left eye image capture unit and the right eye image capture unit according to the plurality of camera calibration parameters, respectively, para. [0008]. In addition, when the second display 110 displays the left eye image 140 corresponding to the calibration pattern 130, the processor 1026 can utilize 4 alignment markers 1102 included in a viewfinder (not shown in FIG. 
1) of the second display 110 and images in the left eye image 140 corresponding to the 4 position markers 1062 to make the calibration pattern 130 be located within an image calibration range of the stereo camera 102, wherein a number of alignment markers of the second display 110 is equal to a number of position markers of the first display 106, para. [0029], FIG. 1. In Step 212, the processor 1026 can generate projection parameters corresponding to the calibration pattern 130 according to the left eye image 140 and the right eye image corresponding to the calibration pattern 130, respectively, wherein in one embodiment of the present invention, the present invention utilizes a plurality of feature points of the left eye image 140 and the right eye image corresponding to the calibration pattern 130 to generate the projection parameters corresponding to the calibration pattern 130. As shown in FIG. 1, the calibration pattern 130 has 9 feature points FP1-FP9, para. [0030], FIG. 1-2. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Fukuma such that image calibration system includes a stereo camera, where a processor can utilize 4 alignment markers on a second display and images in the left eye image corresponding to the 4 position markers to make the calibration pattern be located within an image calibration range of the stereo camera as taught by Lin, for the purpose of having optical accuracy of a stereo camera that is more convenient, lower cost, and without professional skills for the user. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Ghaly et al. (US PUB 20170287221) teaches “an image can be captured by the camera 102 from the perspective of the predetermined pose, for example. 
Note that the captured image may be an augmented-reality image including one or more virtual object (e.g., virtual truck 114) overlaid on the physical space 106 or the captured image may not include any virtual objects or otherwise virtual augmentation. In some implementations, the image may be automatically captured responsive to alignment being achieved., para. [0036], FIG. 1-4. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MUSTAK CHOUDHURY whose telephone number is (571)272-5247. The examiner can normally be reached on M-F 8AM-5PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ricky Mack can be reached on (571)272-2333. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MUSTAK CHOUDHURY/Primary Examiner, Art Unit 2872 March 25, 2026
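Stripped of the legal framing, the validation loop the examiner maps in claim 1 reduces to: detect the centre of the eye calibration target in a captured image, measure the error to the centre of the image, and check that error against a predefined threshold. A minimal illustrative sketch of that check follows; the function name, the coordinates, and the 5-pixel threshold are hypothetical placeholders, not values taken from the application or the cited art.

```python
import math

def validate_alignment(image_centre, target_centre, threshold_px=5.0):
    """Claim-1-style check (illustrative only): compute the error between
    the centre of the captured image and the centre of the detected eye
    calibration target, and validate whether it falls within a predefined
    threshold. The 5-pixel default is a made-up placeholder."""
    error = math.dist(image_centre, target_centre)  # Euclidean distance in pixels
    return error, error <= threshold_px

# Hypothetical sensor centre vs. detected target centre
error, ok = validate_alignment((640.0, 480.0), (642.0, 477.0))
print(f"error = {error:.2f} px, within threshold: {ok}")
```

In the claimed system, a failed check would trigger re-alignment of the movable platform before another capture; that control loop is outside this sketch.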

Prosecution Timeline

Apr 14, 2024
Application Filed
Mar 30, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601903: OPTICAL DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596244: SURGICAL MICROSCOPE SYSTEM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12585102: SLIDE HOLDER AND SLIDE-HOLDER SUPPORT STRUCTURE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12572011: OPTICAL SYSTEM WITH CROSS TRACK ERROR REDUCTION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12566339: Treating Ocular Refractive Error (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 99% (+22.8%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 795 resolved cases by this examiner. Grant probability derived from career allow rate.
