Prosecution Insights
Last updated: April 19, 2026
Application No. 18/636,788

STEREOSCOPIC IMAGING SYSTEM

Non-Final OA (§103, §112)
Filed: Apr 16, 2024
Examiner: MILLER, RONDE LEE
Art Unit: 2663
Tech Center: 2600 (Communications)
Assignee: Karl Storz Imaging Inc.
OA Round: 1 (Non-Final)

Predictions: Grant Probability 73% (Favorable) • OA Rounds 1-2 • To Grant 2y 11m • With Interview 99%

Examiner Intelligence

Career Allow Rate: 73% (above average; 16 granted / 22 resolved; +10.7% vs TC avg)
Interview Lift: +37.5% (strong) across resolved cases with interview
Typical Timeline: 2y 11m avg prosecution • 26 currently pending
Career History: 48 total applications across all art units

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)

Tech Center averages are estimates • Based on career data from 22 resolved cases

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1 – 20, all of the claims pending in this application, have been rejected.

Claim Objections

Claim 1 is objected to because of the following informalities: Claim 1 recites the claim language “one of a disparity, similarity and a high frequency content”. It should be written as “one of a disparity, a similarity, and a high frequency content”. Claim 2 recites the claim language “a weighted average of the disparity, similarity and high frequency information”. It should be written as “a weighted average of the disparity, the similarity and the high frequency information”. Claim 16 recites the claim language “a disparity, similarity and high frequency content”. It should be written as “a disparity, a similarity, and a high frequency content”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1 – 10 and 16 – 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation “setting a focus of the ROI by adjusting the adjustable focusing optics using, as a metric, one of a disparity, similarity and a high frequency content, wherein the focus of the ROI is set when at least one of the following conditions is satisfied: (1) the disparity between the left side image and the right side image is below a predetermined threshold…”. It is unclear what the Applicant meant by “one of a disparity”, which is being interpreted as one of the many methods that could be used to find disparity, before then reciting “the disparity between the left side image and the right side image is below a predetermined threshold”. The specific method being used is not mentioned in the claim language, and it is therefore unclear to the Examiner how the art should be applied with respect to a predetermined threshold, since not every method used in determining disparity is calculated with respect to a pre-defined threshold. It is also unclear whether the claim language is in reference to the same disparity or whether another method of determining disparity is being used. Therefore, claim 1 is hereby rejected. Claim 16, an independent system claim, recites similar claim language. Therefore, claim 16 and all claims which depend on claim 16 are also rejected.

Claim 1 also recites the limitation “(3) the high frequency content in the left side image and the right side image are above a predetermined threshold.”. It is unclear exactly what the “high frequency content” in the limitation is in reference to. The Specification fails to explicitly define what the “high frequency content” is supposed to represent as well. Claims 2 – 10 are rejected by virtue of their dependency on claim 1. Claim 16, an independent system claim, recites similar claim language. Therefore, claim 16 and all claims which depend on claim 16 are also rejected.
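For context on the term the examiner finds undefined: in autofocus literature, “high frequency content” is commonly read as a sharpness measure such as the energy of a discrete Laplacian over the image. The sketch below illustrates that common reading only; it is an assumption, not a definition taken from the application's specification.

```python
# Illustrative only: one common reading of "high frequency content" as a
# focus metric -- the energy of a discrete Laplacian over a grayscale image.
# This is an assumed interpretation of the claim term, not the application's
# own definition (the OA notes the specification never defines it).

def laplacian_energy(img):
    """Sum of squared 4-neighbour Laplacian responses (higher = sharper)."""
    h, w = len(img), len(img[0])
    energy = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            energy += lap * lap
    return energy

# A sharp edge yields more high-frequency energy than a flat region.
sharp = [[0, 0, 255, 255]] * 4
flat  = [[128] * 4] * 4
assert laplacian_energy(sharp) > laplacian_energy(flat)
```

Under this reading, “high frequency content above a predetermined threshold” would simply mean the sharpness score exceeding a chosen cutoff, which is one way the indefiniteness could be resolved by amendment.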
Claim 2 recites the claim language “a weighted average of the disparity, similarity and high frequency information.”. As explained previously with regard to the rejection of claim 1, it is unclear which method of determining a disparity was used. Claim 1 recites one of a disparity (meaning one of the various methods used to determine the disparity as a metric), which may differ from “the disparity” (a specific disparity) as presently claimed. To elaborate: is “the disparity” in the present claim language in reference to a disparity (as a general metric) as claimed in claim 1, or to the specific disparity (determined with regard to a predetermined threshold) mentioned later in the language of that same claim?

Claim 2 further recites “a weighted average of the disparity, similarity and high frequency information.”. It is unclear what the high frequency information is in reference to, as it is only mentioned once in the specification and is not explicitly defined.

Claim 2 recites the limitation "high frequency information" in “The method as set forth in claim 1, wherein the focus of the ROI is set by taking a weighted average of the disparity, similarity and high frequency information.”. There is insufficient antecedent basis for this limitation in the claim. Examiner notes that claim 1 recites high frequency content, while claim 2 recites high frequency information. It is also unclear what the difference is between the two, as well as what each one is supposed to be in reference to, respectively. The Applicant’s specification fails to further explain this.
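A minimal sketch of the kind of composite metric claim 2 recites, assuming normalized inputs: the weights, score scales, and sign conventions below are purely hypothetical illustrations, since the specification does not define them (which is the thrust of the §112(b) rejection).

```python
# Hypothetical sketch of claim 2's "weighted average of the disparity,
# similarity and high frequency information". The weights, the score scales,
# and the disparity normalization are illustrative assumptions only -- none
# of these values come from the application.

def composite_focus_score(disparity_px, similarity, hf_content,
                          weights=(0.5, 0.3, 0.2), max_disparity_px=32.0):
    """Blend the three metrics into one score in [0, 1]; higher = better focus.

    disparity_px -- left/right pixel offset at the ROI (lower is better)
    similarity   -- normalized intensity similarity in [0, 1] (higher is better)
    hf_content   -- normalized high-frequency/sharpness score in [0, 1]
    """
    w_d, w_s, w_h = weights
    # Invert disparity so that all three components reward better focus.
    disparity_score = max(0.0, 1.0 - disparity_px / max_disparity_px)
    return w_d * disparity_score + w_s * similarity + w_h * hf_content

# Zero disparity with perfect similarity and sharpness gives the top score.
assert composite_focus_score(0.0, 1.0, 1.0) == 1.0
# Large disparity drags the composite score down.
assert composite_focus_score(32.0, 1.0, 1.0) == 0.5
```

The indefiniteness concern maps directly onto this sketch: without the specification fixing the weights or the normalization, any such combination is one of many plausible readings.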
Claim 17 recites similar claim language and is also rejected for the same reasons as applied to claim 2.

Claim 16 recites the claim language “setting the focus of the left-side camera and the right-side camera by processing a disparity, similarity and high frequency content of the left-side image and the right-side image, wherein the focus of the ROI is set when at least one of the following conditions is satisfied: (1) a disparity between the left side image and the right side image is below a predetermined threshold.”. It is unclear whether the two recitations of “a disparity” are different from one another or in reference to the same disparity. Claims 17 – 20 are rejected by virtue of their dependency on claim 16.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 11 – 15 are rejected under 35 U.S.C. 103 as being unpatentable over US Publication No. 2025/0373773 A1 to ASABAN et al. (hereinafter ASABAN) in view of US Publication No. 2011/0285826 A1 to Bickerstaff et al. (hereinafter Bickerstaff).
Claim 11

Regarding Claim 11, an independent method claim, ASABAN teaches a method for focusing a stereoscopic imaging system having a left-side image sensor and a right-side image sensor, the left-side image sensor and the right-side image sensor generating a corresponding left-side image and right-side image, the left-side image and the right-side image defining a stereo image (Abstract), the method comprising: providing an optical system including an adjustable focusing optics ("The magnification of these stereoscopic image pairs is set to a desired value, which may be optionally adjusted in accordance with a user-controlled zoom input (block 178).", Paragraph [0220]); selecting a region of interest (ROI), the ROI being a region within one of the left-side image and the right-side image ("In some embodiments, the processor is configured to estimate a distance from the head-mounted unit to the ROI based on a disparity between the images captured by both the left and right video cameras, and to adjust the stereoscopic image responsively to the disparity.", Paragraph [0035]); determining a disparity between the left side image and the right side image at the ROI (rejected as applied directly above, Paragraph [0035]); and processing the depth to set a focus of the adjustable optics ("These calibration parameters or values serve as inputs for a focus calibration step 154, in which the focusing parameters of cameras 43 are calibrated against the actual distance to a target that is measured by the distance sensor or tracking device 63. A map, mapping possible distance values between the HMD and ROI to corresponding focus values may be then generated. On the basis of this calibration, it may be possible to focus both cameras 43 to the distance of ROI 24 that is indicated by the distance sensor or tracking device 63.", Paragraph [0165]).
ASABAN does not teach the optical system configured to obtain a best focus focal plane for both the left side image and the right side image that corresponds to a zero, or near zero, disparity between the right side image and the left side image in the stereo image. However, Bickerstaff teaches the optical system configured to obtain a best focus focal plane for both the left side image and the right side image that corresponds to a zero, or near zero, disparity between the right side image and the left side image in the stereo image ("It will be appreciated that a zoom lens is a lens with an adjustable focal length; that is, the distance between the lens and the focal plane (the image sensor) can be varied, thereby changing the subtended angle of the visible field of view.", Paragraph [0040]; "Referring now to FIGS. 5A to 7B inclusive, an analysis technique is described. FIGS. 5A and 5B show the left and right images from respective optical imaging systems at a minimum degree of zoom available to those optical imaging systems, giving a maximum wide-angle view of a football stadium. In this case, there is no disparity between the fields of view in the left and right images.", Paragraph [0041]). [image omitted]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of ASABAN to incorporate obtaining the best focal plane and adjusting the left and right optics until a near zero disparity is present between the left and right images, as disclosed by Bickerstaff. The suggestion/motivation for doing so would have been to potentially allow medical personnel in the surgical field the ability to be completely hands free, reducing the chances of complications while performing surgical procedures.

Claim 12

Regarding Claim 12, dependent on claim 11, ASABAN, in view of Bickerstaff, teaches the invention as claimed in claim 11.
ASABAN does not teach adjusting the focus until the disparity between the left side image and the right side image is below a predetermined threshold. However, Bickerstaff further teaches adjusting the focus until the disparity between the left side image and the right side image is below a predetermined threshold ("The adjustment process can use a feedback loop to make further adjustments to one or both optical imaging systems until the disparity in field of view between the optical imaging systems falls below a threshold level.", Paragraph [0067]).

Claim 13

Regarding Claim 13, dependent on claim 12, ASABAN, in view of Bickerstaff, teaches the invention as claimed in claim 12. ASABAN does not teach wherein the disparity is determined by a horizontal pixel distance between corresponding regions of the left-side image and the right-side image. However, Bickerstaff further teaches wherein the disparity is determined by a horizontal pixel distance between corresponding regions of the left-side image and the right-side image ("In FIGS. 6A and 6B, the difference in the fields of view (i.e. the disparity in zoom levels) results in the periphery of the right-hand image (FIG. 6B) showing features not present in the left hand image (FIG. 6A), both horizontally 410 and vertically 420. Analysis of these feature disparities provides a means by which the 3D camera can determine the difference in field of view between the two optical imaging means.", Paragraph [0048]; "In either case (vertical or horizontal), a comparison of features at the edges of the images can be used to indicate the amount by which the field of view differs between the images (i.e. a disparity value).", Paragraph [0050]; "The horizontal shifting of the magnified images 37 (e.g., the adjusting of disparity of the images) may be accomplished in a number of ways, including using any of the horizontal shifting techniques discussed above. For example, the crop region or subset of pixels 220 used by each camera may be adjusted (see FIG. 7B).", Paragraph [0230]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the teachings of ASABAN, in view of Bickerstaff, to incorporate using a horizontal pixel distance between corresponding regions of the two images to determine the disparity, as disclosed by Bickerstaff. The suggestion/motivation for doing so would have been to use the disparity for horizontal shifting to align the images if/when they are magnified.

Claim 14

Regarding Claim 14, dependent on claim 11, ASABAN, in view of Bickerstaff, teaches the invention as claimed in claim 11. ASABAN further teaches wherein the disparity is determined based at least in part on a correlation between the left-side image and the right-side image within the ROI ("According to some embodiments, the stereoscopic tuning may be performed by shifting, relocating, or altering the image sensors' image region to provide a substantially similar or identical images, at least with respect to a determined ROI image (a plane of substantially zero parallax) and to facilitate full overlap between the left and right images.", Paragraph [0219]).

Claim 15

Regarding Claim 15, dependent on claim 14, ASABAN, in view of Bickerstaff, teaches the invention as claimed in claim 14. ASABAN does not teach wherein the correlation is determined based on at least one of (i) a normalized correlation, (ii) a cross correlation, (iii) a normalized cross correlation, and (iv) a zero normalized cross correlation.
However, Bickerstaff further teaches wherein the correlation is determined based on at least one of (i) a normalized correlation, (ii) a cross correlation, (iii) a normalized cross correlation, and (iv) a zero normalized cross correlation ("Similarly, for both still and video imaging, a disparity profile may be compiled to predict the scale disparity between images for each available level of zoom. The profile may contain mean and variance values for the disparities; where a detected disparity exceeds the variance by a threshold amount, it can be assumed that an error in image analysis has occurred. In this case either the mean value may be used instead, or scale correction can be skipped. In the case of video, optionally the previous video frame's scale correction can be re-used. The profile may be stored in a memory (not shown) available to a processor (150, FIG. 8).", Paragraph [0075]). Examiner notes the art teaching the limitation with reference to option (iv), a zero normalized cross correlation, wherein Zero Normalized Cross Correlation (ZNCC) specifically adjusts for both differences in mean (zero-mean) and variance (normalized standard deviation) of the signals, which directly aligns with using mean and variance values to normalize the disparities. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the teachings of ASABAN, in view of Bickerstaff, to incorporate determining a correlation using ZNCC, as disclosed by Bickerstaff. The suggestion/motivation for doing so would have been to show that using ZNCC does not just allow the images to be aligned by shifting, but further aligns the images based on the pixel values for a more precise alignment of the ROIs from the images being displayed to a user.

Claims 1 – 10 and 16 – 20 (as best understood) are rejected under 35 U.S.C. 103 as being unpatentable over US Publication No. 2025/0373773 A1 to ASABAN et al. (hereinafter ASABAN) in view of US Publication No. 2011/0285826 A1 to Bickerstaff et al. (hereinafter Bickerstaff), in further view of Non-Patent Literature “A pixel matching process and multiple ROI for stereo images in stereo vision application” to R.A. Hamzah et al. (hereinafter Hamzah).

Claim 1

Regarding Claim 1, an independent method claim, ASABAN teaches a method for focusing a stereoscopic imaging system having a left-side image sensor and a right-side image sensor, the left-side image sensor and the right-side image sensor generating a corresponding left-side image and right-side image, the left-side image and the right-side image defining a stereo image (Abstract), the method comprising: providing an optical system including an adjustable focusing optics ("The magnification of these stereoscopic image pairs is set to a desired value, which may be optionally adjusted in accordance with a user-controlled zoom input (block 178).", Paragraph [0220]); and selecting a region of interest (ROI), the ROI being a region within one of the left-side image and the right-side image ("In some embodiments, the processor is configured to estimate a distance from the head-mounted unit to the ROI based on a disparity between the images captured by both the left and right video cameras, and to adjust the stereoscopic image responsively to the disparity.", Paragraph [0035]).
ASABAN does not teach the optical system configured to obtain a best focus focal plane for both the left side image and the right side image that corresponds to a zero, or near zero, disparity between the right side image and the left side image in the stereo image; setting a focus of the ROI by adjusting the adjustable focusing optics using, as a metric, one of a disparity, similarity and a high frequency content, wherein the focus of the ROI is set when at least one of the following conditions is satisfied: (1) the disparity between the left side image and the right side image is below a predetermined threshold; (2) the similarity in a pixel intensity between the left side image and the right side image is above a predetermined threshold; (3) the high frequency content in the left side image and the right side image are above a predetermined threshold.

However, Bickerstaff teaches the optical system configured to obtain a best focus focal plane for both the left side image and the right side image that corresponds to a zero, or near zero, disparity between the right side image and the left side image in the stereo image ("It will be appreciated that a zoom lens is a lens with an adjustable focal length; that is, the distance between the lens and the focal plane (the image sensor) can be varied, thereby changing the subtended angle of the visible field of view.", Paragraph [0040]; "Referring now to FIGS. 5A to 7B inclusive, an analysis technique is described. FIGS. 5A and 5B show the left and right images from respective optical imaging systems at a minimum degree of zoom available to those optical imaging systems, giving a maximum wide-angle view of a football stadium. In this case, there is no disparity between the fields of view in the left and right images.", Paragraph [0041]) [image omitted]; and setting a focus of the ROI by adjusting the adjustable focusing optics using, as a metric, one of a disparity, similarity and a high frequency content, wherein the focus of the ROI is set when at least one of the following conditions is satisfied: (1) the disparity between the left side image and the right side image is below a predetermined threshold ("The adjustment process can use a feedback loop to make further adjustments to one or both optical imaging systems until the disparity in field of view between the optical imaging systems falls below a threshold level.", Paragraph [0067]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of ASABAN to incorporate obtaining the best focal plane and adjusting the left and right optics until a near zero disparity is present between the left and right images, as well as ensuring the disparity between the left and right is below a threshold, as disclosed by Bickerstaff. The suggestion/motivation for doing so would have been to potentially allow medical personnel in the surgical field the ability to be completely hands free, reducing the chances of complications while performing surgical procedures.

Neither ASABAN, nor Bickerstaff, nor the combination teaches (2) the similarity in a pixel intensity between the left side image and the right side image is above a predetermined threshold; and (3) the high frequency content in the left side image and the right side image are above a predetermined threshold. However, Hamzah teaches (2) the similarity in a pixel intensity between the left side image and the right side image is above a predetermined threshold ("Absolute differences of pixel intensities are used in the algorithm to compute stereo similarities between points. By computing the sum of the absolute differences SAD for pixels in a window surrounding the points, the difference between similarity values for stereo points can be calculated", Section IV: Software Architecture), where there is an algorithm that uses SAD (sum of absolute differences), and it is known to one skilled in the art that such algorithms use thresholds for certain scoring calculations/classifications. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the teachings of ASABAN, in view of Bickerstaff, to incorporate the use of the sum of absolute differences to calculate a similarity between two images, as disclosed by Hamzah. The suggestion/motivation for doing so would have been to adjust the optics so as to have the most precise image being displayed to a user, where a greater disparity (dissimilarity score) could lead to potential complications. Examiner note: for (3), the high frequency content in the left side image and the right side image are above a predetermined threshold, refer to the 112(b) rejections above for this limitation.

Claim 2

Regarding Claim 2, dependent on claim 1, ASABAN, in view of Bickerstaff and Hamzah, teaches the invention as claimed in claim 1. Neither ASABAN, nor Bickerstaff, nor the combination teaches wherein the focus of the ROI is set by taking a weighted average of the disparity, similarity and high frequency information. However, Hamzah further teaches (as best understood) wherein the focus of the ROI is set by taking a weighted average of the disparity, similarity and high frequency information (Section VI: Result Of Selected Region Of Interest In Disparity Mapping And Pixels Intensities).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the teachings of ASABAN, in view of Bickerstaff and Hamzah, to incorporate the weighted average of the metrics, as disclosed by Hamzah. The suggestion/motivation for doing so would have been to allow the device to display the ROI through the optics as accurately as possible to prevent user dizziness or eye strain.

Claim 3

Regarding Claim 3, dependent on claim 1, ASABAN, in view of Bickerstaff and Hamzah, teaches the invention as claimed in claim 1. ASABAN further teaches wherein the ROI is determined based on a user input at one of the left-side image and the right-side image ("The processor(s) of the HMD 28 may be in communication with one or more input devices, such as a pointing device, a keyboard, a foot pedal, or a mouse, to allow the operator to input data into the system. In some embodiments HMD 28 may include one or more input devices, such as a touch screen or buttons. Alternatively or additionally, users of the system may input instructions to the processor(s) using a gesture-based interface. For this purpose, for example, the depth sensors described herein may sense movements of a hand of the healthcare professional. Different movements of the professional's hand and fingers may be used to invoke specific functions of the one or more displays and of the system.", Paragraph [0120]; "In some embodiments, the head-mounted unit is configured to display and magnify an image, assuming the user's gaze would be typically straightforward. In some embodiments, the angular size or extent of the ROI and/or its location is determined, assuming the user's gaze would be typically straightforward with respect to the user's head posture. In some embodiments, the user's pupils' location, gaze and/or line of sight may be tracked. For example, one or more eye trackers 44 may be integrated into head-mounted unit 28, as shown in FIG. 2, for real-time adjustment and possibly for purposes of calibration. Eye trackers 44 comprise miniature video cameras, possibly integrated with a dedicated infrared light source, which capture images of the eyes of the user (e.g., wearer) of head-mounted unit 28. Processor 45 and/or 52 or a dedicated processor in eye trackers 44 processes the images of the eyes to identify the locations of the user's pupils. Additionally or alternatively, eye trackers 44 may detect the direction of the user's gaze using the pupil locations and/or by sensing the angle of reflection of light from the user's corneas."…"In some embodiments, processor 45 and/or processor 52 uses the information provided by eye trackers 44 with regard to the pupil locations in generating an image or a magnified image for presentation on displays 30. For example, the processor 45, 52 may dynamically determine a crop region or an image region on each sensor of each camera to match the user's gaze direction. The location of a sensor image region may be changed, e.g., horizontally changed, in response to a user's gaze current direction. The detection of the user's gaze direction may be used for determining a current ROI to be imaged. According to some embodiments, the image generated based on the part or region of the sensor corresponding to the shifted or relocated crop or image region or ROI 24 may be magnified and output for display.", Paragraph(s) [0134 – 0135]).

Claim 4

Regarding Claim 4, dependent on claim 1, ASABAN, in view of Bickerstaff and Hamzah, teaches the invention as claimed in claim 1. ASABAN further teaches wherein the ROI is determined based on a user input at the stereo image (rejected as applied to claim 3), wherein the user input would be that of the user's gaze, pupils' location and/or line of sight that is tracked, as disclosed in paragraph [0134].

Claim 5

Regarding Claim 5, dependent on claim 4, ASABAN, in view of Bickerstaff and Hamzah, teaches the invention as claimed in claim 4.
ASABAN further teaches wherein, based on the user input at the stereo image, a graphic overlay representative of the ROI is displayed at both the left-side image and the right-side image (Figure 2; "FIG. 9A illustrates schematically one display 30 of a head-mounted display unit (such as head-mounted display unit 28 of FIG. 2) and shows a magnified image 37 in first portion 33 of the display 30 and reality 39 somewhat visible through second portion 35 of the display 30. FIG. 9A is similar to FIG. 4, discussed above, and the same or similar reference numbers are used to refer to the same or similar components. One difference from FIG. 4, however, is that in FIG. 9A, the visibility of reality 39 through the display 30 has been reduced. Specifically, the display 30 has been made darker and/or more opaque than in FIG. 4. In a case where the magnified image 37 is projected onto the display 30 (such as via micro-projector 31 of FIG. 2) this can result in the magnified image 37 being significantly brighter than the image of reality 39 seen through the display 30. By changing the relative brightness of the magnified image 37 versus reality 39, specifically by increasing the relative brightness of magnified image 37 versus reality 39, this can result in a reduction of confusion and/or a more optimal magnified image.", Paragraph [0223]). [image omitted]

Claim 6

Regarding Claim 6, dependent on claim 1, ASABAN, in view of Bickerstaff and Hamzah, teaches the invention as claimed in claim 1. ASABAN does not teach wherein the disparity is determined by a distance between portions of the left-side image and the right-side image having the same or similar pixel values.
However, Bickerstaff further teaches wherein the disparity is determined by a distance between portions of the left-side image and the right-side image having the same or similar pixel values (Figures 6A and 6B; "In either case (vertical or horizontal), a comparison of features at the edges of the images can be used to indicate the amount by which the field of view differs between the images (i.e. a disparity value). Thus the feature 420 circled in the bottom left corner of FIG. 6B is (for example) vertically displaced from the corresponding feature in FIG. 6A by 3% of the vertical extent of the image. Similarly, the feature 410 circled at the right of FIG. 6B is (again for example) horizontally displaced from the corresponding feature of FIG. 6A by 3% of the horizontal extent of the image. Consequently the 3D camera can estimate the disparity in the respective fields of view. Such a disparity value can be expressed as a percentage difference in extent of view in the image (i.e. 3%) based upon the disparity in fields of view, or similarly a ratio between left and right images, or in terms of a number of pixels, or can be expressed in terms of angular field of view itself (for example by multiplying d by 1 and 1/1.03 for the respective optical imaging systems in equation 1)."…"Alternatively or in addition, vertical and/or horizontal disparity can be similarly compared anywhere within the pair of captured images by detecting an image feature (for example a distinct colour region) and comparing its relative horizontal and/or vertical extent between the left and right images. However it will be appreciated that the accuracy of such a comparison is limited by the number of pixels that the image feature occupies; hence use of substantially the whole image as a basis of comparison provides a more sensitive measure of relative field of view.", Paragraph(s) [0050 – 0051]).
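The "distance between portions having the same or similar pixel values" reading of disparity can be sketched as simple 1-D block matching along a scanline. This is a generic illustration, not the applicant's or either reference's implementation; the window size and search range are arbitrary choices.

```python
# Minimal 1-D block-matching sketch of disparity as a horizontal pixel
# distance: find the patch in the right image row whose pixel values best
# match a patch in the left image row, and report the offset. The window
# size and search range are illustrative assumptions only.

def row_disparity(left_row, right_row, x, half_window=2, max_disp=8):
    """Horizontal offset of the right-row patch best matching left_row at x."""
    patch = left_row[x - half_window : x + half_window + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_disp + 1):      # candidate shifts toward the left
        xr = x - d
        if xr - half_window < 0:          # candidate window out of bounds
            break
        cand = right_row[xr - half_window : xr + half_window + 1]
        # Sum of absolute differences as the matching cost (lower = more similar).
        cost = sum(abs(a - b) for a, b in zip(patch, cand))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# The right row is the left row shifted 3 pixels toward the left edge.
left  = [0, 0, 0, 0, 10, 50, 90, 50, 10, 0, 0, 0]
right = left[3:] + [0, 0, 0]
assert row_disparity(left, right, x=6) == 3
```

Driving this offset toward zero by adjusting the optics is, in essence, the feedback loop the Bickerstaff passage above describes.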
[image omitted]

Claim 7

Regarding Claim 7, dependent on claim 1, ASABAN, in view of Bickerstaff and Hamzah, teaches the invention as claimed in claim 1. Neither ASABAN, nor Bickerstaff, nor the combination teaches wherein the similarity is determined based on a difference in pixel values for corresponding portions of the left-side image and the right-side image. However, Hamzah further teaches wherein the similarity is determined based on a difference in pixel values for corresponding portions of the left-side image and the right-side image ("The matching process is to determine the difference of intensities of pixel between stereo images while the region of interest ROI works as a reference area to the stereo vision application.", Abstract).

Claim 8

Regarding Claim 8, dependent on claim 1, ASABAN, in view of Bickerstaff and Hamzah, teaches the invention as claimed in claim 1. Neither ASABAN, nor Bickerstaff, nor the combination teaches wherein the similarity is determined based on at least one of a sum of squared differences and a sum of absolute differences of pixel intensity values of the right image relative to the left image within the ROI. However, Hamzah further teaches wherein the similarity is determined based on at least one of a sum of squared differences and a sum of absolute differences of pixel intensity values of the right image relative to the left image within the ROI ("Absolute differences of pixel intensities are used in the algorithm to compute stereo similarities between points. By computing the sum of the absolute differences SAD for pixels in a window surrounding the points, the difference between similarity values for stereo points can be calculated", Section IV: Software Architecture).

Claim 9

Regarding Claim 9, dependent on claim 1, ASABAN, in view of Bickerstaff and Hamzah, teaches the invention as claimed in claim 1.
ASABAN further teaches wherein the similarity is determined based at least in part on a correlation between the left-side image and the right-side image within the ROI ("According to some embodiments, the stereoscopic tuning may be performed by shifting, relocating, or altering the image sensors' image region to provide a substantially similar or identical images, at least with respect to a determined ROI image (a plane of substantially zero parallax) and to facilitate full overlap between the left and right images.", Paragraph [0219]).

Claim 10

Regarding Claim 10, dependent on claim 9, ASABAN, in view of Bickerstaff and Hamzah, teaches the invention as claimed in claim 9. ASABAN does not teach wherein the correlation is determined based on at least one of (i) a normalized correlation, (ii) a cross correlation, (iii) a normalized cross correlation, and (iv) a zero normalized cross correlation.

However, Bickerstaff further teaches wherein the correlation is determined based on at least one of (i) a normalized correlation, (ii) a cross correlation, (iii) a normalized cross correlation, and (iv) a zero normalized cross correlation ("Similarly, for both still and video imaging, a disparity profile may be compiled to predict the scale disparity between images for each available level of zoom. The profile may contain mean and variance values for the disparities; where a detected disparity exceeds the variance by a threshold amount, it can be assumed that an error in image analysis has occurred. In this case either the mean value may be used instead, or scale correction can be skipped. In the case of video, optionally the previous video frame's scale correction can be re-used. The profile may be stored in a memory (not shown) available to a processor (150, FIG. 8).", Paragraph [0075]).
Examiner notes the art teaching the limitation with reference to option (iv), a zero normalized cross correlation, wherein Zero Normalized Cross Correlation (ZNCC) specifically adjusts for both differences in mean (zero-mean) and variance (normalized standard deviation) of the signals, which directly aligns with using mean and variance values to normalize the disparities. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the teachings of ASABAN, in view of Bickerstaff and Hamzah, to incorporate determining a correlation using ZNCC, as disclosed by Bickerstaff. The suggestion/motivation for doing so would have been to show that using ZNCC does not merely allow the images to be aligned by shifting, but further aligns the images based on the pixel values, for a more precise alignment of the ROIs from the images being displayed to a user.

Claims 16 – 20

Claim 16, an independent system claim, is rejected for the same reasons as applied to claim 1. Claims 17 and 19 – 20 are rejected for the same reasons as applied to the above claims. Claim 18, dependent on claim 16, is rejected for the same reasons as applied to claim 8. Even though claim 8 uses "similarity" in the claim language, which differs from "disparity" used in the present claim language, a lower SAD value indicates high similarity (a closer match), while a higher value indicates low similarity (i.e., disparity, which is defined as calculating the "dissimilarity between two images").

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ronde Miller, whose telephone number is (703) 756-5686. The examiner can normally be reached Monday-Friday, 8:00-4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
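The examiner's characterization of ZNCC (subtract each patch's mean, normalize by its standard deviation) can be made concrete with a short pure-Python sketch. The function and sample values are illustrative assumptions, not taken from the cited art:

```python
from math import sqrt

def zncc(left, right):
    """Zero-normalized cross correlation of two equally sized patches
    (flattened pixel lists). Subtracting each patch's mean and dividing
    by its standard deviation makes the score invariant to brightness
    (mean) and contrast (variance) differences; 1.0 is a perfect match,
    -1.0 a perfect inversion."""
    n = len(left)
    mu_l = sum(left) / n
    mu_r = sum(right) / n
    dl = [x - mu_l for x in left]
    dr = [x - mu_r for x in right]
    denom = sqrt(sum(d * d for d in dl)) * sqrt(sum(d * d for d in dr))
    return sum(a * b for a, b in zip(dl, dr)) / denom

# A patch and a brighter, higher-contrast copy of it still correlate
# perfectly, which plain cross correlation would not report.
print(zncc([1, 2, 3, 4], [12, 14, 16, 18]))  # ≈ 1.0
```

This mean-and-variance normalization is the property the examiner maps onto Bickerstaff's disparity profile, which likewise stores mean and variance values for the disparities.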
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Gregory Morse, can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RONDE LEE MILLER/
Examiner, Art Unit 2663

/GREGORY A MORSE/
Supervisory Patent Examiner, Art Unit 2698

Prosecution Timeline

Apr 16, 2024
Application Filed
Mar 18, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573215
LEARNING APPARATUS, LEARNING METHOD, OBJECT DETECTION APPARATUS, OBJECT DETECTION METHOD, LEARNING SUPPORT SYSTEM AND LEARNING SUPPORT METHOD
2y 5m to grant Granted Mar 10, 2026
Patent 12548114
METHOD FOR CODE-LEVEL SUPER RESOLUTION AND METHOD FOR TRAINING SUPER RESOLUTION MODEL THEREFOR
2y 5m to grant Granted Feb 10, 2026
Patent 12524833
X-RAY DIAGNOSIS APPARATUS, MEDICAL IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM
2y 5m to grant Granted Jan 13, 2026
Patent 12502905
SECURE DOCUMENT AUTHENTICATION
2y 5m to grant Granted Dec 23, 2025
Patent 12505581
ONLINE TRAINING COMPUTER VISION TASK MODELS IN COMPRESSION DOMAIN
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
73%
Grant Probability
99%
With Interview (+37.5%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
