Prosecution Insights
Last updated: April 19, 2026
Application No. 18/787,898

Imaging System and Method for Identifying a Boundary Between Active and Inactive Portions of a Digital Image and Applying Settings in Response

Final Rejection (§103, §112, §DP)
Filed
Jul 29, 2024
Examiner
BOYLAN, JAMES T
Art Unit
2486
Tech Center
2400 — Computer Networks
Assignee
Karl Storz Imaging Inc.
OA Round
2 (Final)
Grant Probability: 63% (Moderate)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 74%

Examiner Intelligence

Career Allow Rate: 63% (305 granted / 487 resolved; +4.6% vs TC avg)
Interview Lift: +11.8% on resolved cases with interview (moderate, ~+12%)
Typical Timeline: 2y 9m average prosecution; 34 applications currently pending
Career History: 521 total applications across all art units
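As a quick consistency check, the headline figures in the card above can be recomputed from the counts shown on this page (305 granted of 487 resolved, 521 total applications); no external data is assumed:

```python
# Figures quoted above: 305 granted of 487 resolved, 521 total applications,
# 34 currently pending, and a displayed career allow rate of 63%.
granted, resolved, total = 305, 487, 521

allow_rate_pct = 100 * granted / resolved   # career allow rate
pending = total - resolved                  # applications not yet resolved

print(round(allow_rate_pct, 1))  # 62.6, displayed as 63%
print(pending)                   # 34, matching "34 currently pending"
```

The 63% shown on the card is simply 305/487 rounded to the nearest percent, and the pending count is total minus resolved.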

Statute-Specific Performance

§101: 1.8% (-38.2% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 13.0% (-27.0% vs TC avg)
§112: 23.7% (-16.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 487 resolved cases

Office Action

§103 §112 §DP
DETAILED ACTION

Response to Arguments

Applicant has not addressed the double patenting rejections in the remarks dated 11/26/2025. Each and every rejection set forth in the Office action must be addressed. The examiner will review this rejection when applicant responds to it. Applicant's arguments filed 11/26/2025 have been fully considered but they are not persuasive. Applicant has re-worded the limitations of claim 8 (and newly added claim 21), to which the 112 rejection still applies. The examiner has provided further clarification under the 112 rejection below to assist the applicant. Regarding applicant's arguments directed to claim 1, Pang discloses that the image size is an indication of the type of scope that is attached to the camera and is used to infer physical characteristics of the scope (such as scope diameter) that is attached to the camera, and to select settings for various image processing which are most appropriate for that scope. Additionally, Uemori discloses determining an image boundary using a Hough transform. The image boundary is analogous to an endoscopic image size. Therefore, it would have been obvious to modify Pang to perform a simple substitution of image boundary/size determination. Additionally, Bodor discloses controlling a tunable light source using camera feedback to adjust the intensity of light for an endoscopic system. Therefore, it would have been obvious to modify Pang to use camera feedback (i.e., image size) to adjust a light source so as to improve image quality. Regarding claim 4, Amling discloses an endoscope with optical zoom adjustable between various magnifications, including the claimed partial and full.
Amling also discloses calculating an image size which accounts for zoom factors, the various endoscopes used, and imaging conditions; that the size of an image corresponds to the zoom factor; that the endoscopic image size is typically determined by the diameter of the rod lens system in the endoscope and the magnification of the optics; and automatically adjusting the optical zoom so as to optimize the medical image. Additionally, Uemori discloses utilizing a Hough transform technique to generate image boundary data. Therefore, it would have been obvious to modify Amling to perform a simple substitution of image boundary/size determination. Applicant's arguments with respect to claims 8, 13-14 and 21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument (i.e., the introduction of newly added reference Chen).

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969). A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens.
An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer. Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-17 of U.S. Patent No. 12,081,858. Although the claims at issue are not identical, they are not patentably distinct from each other because this instant application contains broader claim limitations than the patent listed above. Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-25 of U.S. Patent No. 12,081,858. Although the claims at issue are not identical, they are not patentably distinct from each other because this instant application contains broader claim limitations than the patent listed above.

Claim Objections

Claim 4 is objected to because of the following informality: please correct the spelling of "the light senor" in the limitation "a camera having a light sensor and an optical zoom device, the light senor". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 8, 13-14 and 21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 8 recites the limitation "the set of optical model parameters for the modulation transfer function (MTF), spatial frequency response (SFR), lens distortion, vignetting, and/or chromatic aberration of the imaging scope and process the selected set of optical model parameters to automatically adjust at least one of a sharpness of the digital image; a lens distortion correction of the digital image; vignetting correction of the digital image; and a chromatic aberration correction of the digital image." The claim is unclear because of the "or" in the limitation quoted above. The claim recites "a set of optical model parameters" but does not clearly define what this set is. For example, the claim states "the set of optical model parameters for the modulation transfer function (MTF), spatial frequency response (SFR), lens distortion, vignetting, and/or chromatic aberration of the imaging scope". Therefore, this partial limitation states that the set of optical model parameters may correspond to just one of the terms listed, due to the "or" term in the claim. Please clarify.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Pang et al. (hereinafter referred to as Pang) (US Patent No. 8,373,748) in view of Uemori (US 20150243033) and in further view of Bodor et al. (hereinafter referred to as Bodor) (US 20170086657). Regarding claim 1, Pang discloses an imaging system, comprising: a light source configured to illuminate an object with an illumination light; [See Pang [Col. 4 lines 30-31] Light source unit.] an imaging scope configured to capture light reflected, scattered, or emitted from the object; [See Pang [Col. 4 line 30] Scope.] a camera having a light sensor with a light-sensitive surface configured to receive the captured light from the imaging scope, and generate a digital image representative of the captured light; [See Pang [Col. 4 line 36] Camera.] an image processor configured to receive the digital image from the camera, and [See Pang [Fig. 3] Processor (30) to receive image data.] a system controller configured to receive the boundary data from the image processor and use the boundary data to determine a diameter of the imaging scope and [See Pang [Col. 2 last para. to Col. 3 1st para.] The image size is an indication of the type of scope that is attached to the camera and is used to infer physical characteristics of the scope that is attached to the camera, and to select settings for various image processing which are most appropriate for that scope. Also, see Col. 5 lines 26-44, Endoscopes are designed with various different physical and functional characteristics (length, diameter, type of optics, magnification, materials, degree of flexibility, etc.).] Pang does not explicitly disclose use at least one of a random sample consensus (RANSAC) technique and a Hough Transform technique to (i) identify a boundary between an active portion and an inactive portion of the digital image and (ii) generate boundary data indicative of a characteristic of the boundary; and to process the determined diameter to actuate the light source to set a light source intensity associated with the determined diameter.
However, Uemori does disclose use at least one of a random sample consensus (RANSAC) technique and a Hough Transform technique to (i) identify a boundary between an active portion and an inactive portion of the digital image and (ii) generate boundary data indicative of a characteristic of the boundary; and [See Uemori [Fig. 1] Mask detection unit (121). Also, see 0056 and Fig. 2, this unit detects the shape of the boundary and radius of the circle. Also, see 0057, Hough transform is utilized for mask detection.] It would have been obvious to the person of ordinary skill in the art at the time of the effective filing date to modify the system by Pang to add the teachings of Uemori, in order to perform a simple substitution of algorithms for image boundary/size detection. Pang (modified by Uemori) does not explicitly disclose to process the determined diameter to actuate the light source to set a light source intensity associated with the determined diameter. However, Bodor does disclose to process the determined diameter to actuate the light source to set a light source intensity associated with the determined diameter. [See Bodor [0012] Tunable light source controlled by the feedback of the video camera to adjust the intensity of light for an endoscopic system.] It would have been obvious to the person of ordinary skill in the art at the time of the effective filing date to modify the system by Pang (modified by Uemori) to add the teachings of Bodor, in order to provide an alternative way of adjusting the intensity of light on the image sensor via a tunable light source [See Bodor [0012]]. Regarding claim 15, see the examiner's rejection of claim 1, which is analogous and applicable to the rejection of claim 15. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Pang (US Patent No. 8,373,748) in view of Uemori (US 20150243033) in view of Burnett (US Patent No. 7,136,098) and in further view of Chen et al. (hereinafter referred to as Chen) (US Patent No. 8,368,762). Regarding claim 8, Pang discloses an imaging system, comprising: a light source configured to illuminate an object with an illumination light; [See Pang [Col. 4 lines 30-31] Light source unit.] an imaging scope configured to capture light reflected, scattered, or emitted from the object; [See Pang [Col. 4 line 30] Scope.] a camera having a light sensor with a light-sensitive surface configured to receive the captured light from the imaging scope, and generate a digital image representative of the captured light; [See Pang [Col. 4 line 36] Camera.] an image processor configured to receive the digital image from the camera, and [See Pang [Fig. 3] Processor (30) to receive image data.] configured to process the received boundary data to determine a class of the imaging scope, [See Pang [Col. 2 last para. to Col. 3 1st para.] The image size is an indication of the type of scope that is attached to the camera and is used to infer physical characteristics of the scope that is attached to the camera, and to select settings for various image processing which are most appropriate for that scope. Also, see Col. 5 lines 26-44, Endoscopes are designed with various different physical and functional characteristics (length, diameter, type of optics, magnification, materials, degree of flexibility, etc.). Parameters whose settings vary include video gain levels, enhancement level, camera shutter speed, gamma level, and others.] wherein the system controller further includes a non-volatile memory storing a set of optical model parameters corresponding to the class of the imaging scope, [See Pang [Fig. 3] Non-vol. Memory (36). Also, see (27), this characteristic of the image can be used to look up, in a data structure such as a lookup table, the type of scope being used and to look up and select appropriate values for various parameters used by the CCU 4 for the processing or display of images.]
configured to use the received boundary data to select optical model parameters for the [See Pang [Col. 2 last para. to Col. 3 1st para.] The image size is an indication of the type of scope that is attached to the camera and is used to infer physical characteristics of the scope that is attached to the camera, and to select settings for various image processing which are most appropriate for that scope. Also, see Col. 5 lines 26-44, Endoscopes are designed with various different physical and functional characteristics (length, diameter, type of optics, magnification, materials, degree of flexibility, etc.). Parameters whose settings vary include video gain levels, enhancement level, camera shutter speed, gamma level, and others.] Pang does not explicitly disclose an image processor configured to receive the digital image from the camera, and use at least one of a random sample consensus (RANSAC) technique and a Hough Transform technique to (i) identify a boundary between an active portion and an inactive portion of the digital image and (ii) generate boundary data indicative of a characteristic of the boundary; and a system controller configured to receive the boundary data from the image processor and the set of optical model parameters for the modulation transfer function (MTF), spatial frequency response (SFR), lens distortion, vignetting, and/or chromatic aberration of the imaging scope and process the selected set of optical model parameters to automatically adjust at least one of a sharpness of the digital image; a lens distortion correction of the digital image; vignetting correction of the digital image; and a chromatic aberration correction of the digital image.
However, Uemori does disclose use at least one of a random sample consensus (RANSAC) technique and a Hough Transform technique to (i) identify a boundary between an active portion and an inactive portion of the digital image and (ii) generate boundary data indicative of a characteristic of the boundary; and [See Uemori [Fig. 1] Mask detection unit (121). Also, see 0056 and Fig. 2, this unit detects the shape of the boundary and radius of the circle. Also, see 0057, Hough transform is utilized for mask detection.] It would have been obvious to the person of ordinary skill in the art at the time of the effective filing date to modify the system by Pang to add the teachings of Uemori, in order to perform a simple substitution of algorithms for image boundary/size detection. Pang (modified by Uemori) does not explicitly disclose a system controller configured to receive the boundary data from the image processor and the set of optical model parameters for the modulation transfer function (MTF), spatial frequency response (SFR), lens distortion, vignetting, and/or chromatic aberration of the imaging scope and process the selected set of optical model parameters to automatically adjust at least one of a sharpness of the digital image; a lens distortion correction of the digital image; vignetting correction of the digital image; and a chromatic aberration correction of the digital image. However, Burnett does disclose a system controller configured to receive the boundary data from the image processor and [See Burnett [Fig. 2] Processor in communication with image size detection circuit.] It would have been obvious to the person of ordinary skill in the art at the time of the effective filing date to modify the system by Pang (modified by Uemori) to add the teachings of Burnett, in order to improve image quality [See Burnett [Col. 2 1st para.]].
Pang (modified by Uemori and Burnett) does not explicitly disclose the set of optical model parameters for the modulation transfer function (MTF), spatial frequency response (SFR), lens distortion, vignetting, and/or chromatic aberration of the imaging scope and process the selected set of optical model parameters to automatically adjust at least one of a sharpness of the digital image; a lens distortion correction of the digital image; vignetting correction of the digital image; and a chromatic aberration correction of the digital image. However, Chen does disclose the set of optical model parameters for the modulation transfer function (MTF), spatial frequency response (SFR), lens distortion, vignetting, and/or chromatic aberration of the imaging scope and process the selected set of optical model parameters to automatically adjust at least one of a sharpness of the digital image; a lens distortion correction of the digital image; vignetting correction of the digital image; and a chromatic aberration correction of the digital image. [See Chen [Col. 2 lines 57-65] Determining a model for a photographic system which is used to apply corrections to lens distortions, chromatic aberration, and vignetting.] It would have been obvious to the person of ordinary skill in the art at the time of the effective filing date to modify the system by Pang (modified by Uemori and Burnett) to add the teachings of Chen, in order to incorporate the obvious camera correction techniques (such as lens distortion, vignetting, etc.) within the other parameters not explicitly described by Pang. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Uemori (US 20150243033) in further view of Amling (US Patent No. 9,060,674). Regarding claim 4, Amling discloses an imaging system, comprising: a light source configured to illuminate an object with an illumination light; [See Amling [Fig. 8] Light source (850).] an imaging scope configured to capture light reflected, scattered, or emitted from the object; [See Amling [Abstract] Endoscope.] a camera having a light sensor and an optical zoom device, the optical zoom device configured to receive the captured light from the imaging scope before the captured light is received by the light sensor; [See Amling [Col. 1 lines 25-30 and Col. 1 lines 60-65] optical zoom for camera systems and imagers with endoscopes.] wherein the optical zoom device is selectively adjustable between a low magnification configuration, in which the optical zoom device magnifies the captured light such that the captured light occupies only a portion of the light-sensitive surface when received thereon, and a high magnification configuration, in which the optical zoom device magnifies the captured light such that the captured light occupies all of the light-sensitive surface of the light sensor when received thereon, and [See Amling [Col. 1 lines 25-30 and Col. 1 lines 60-65] Optical zoom for endoscopes. Also, see Figs. 1-4, various magnifications of the endoscopic image including partial and full.] wherein the system controller is configured to process the boundary data to adjust the optical zoom device. [See Amling [Col. 1 lines 25-30 and Col. 1 lines 60-65] Endoscopes having various diameters are provided with optical zoom. Also, see Fig. 8, camera control unit (860). Also, see [Col. 6 lines 37-50] and/or [Col. 3 lines 5-7], calculate the size of the image which accounts for zoom factors, various endoscopes used, and imaging conditions. Also, see Col. 3 lines 26-28, the size of an image corresponds to the zoom factor associated with the image. Also, see Col. 1 lines 18-23, the endoscopic image size is typically determined by the diameter of the rod lens system in the endoscope and the magnification of the optics. Also, see Col. 1 lines 54-55, automatically adjusting the optical zoom so as to optimize the medical image.]
Amling does not explicitly disclose an image processor configured to receive the digital image from the camera, and use at least one of a random sample consensus (RANSAC) technique and a Hough Transform technique to (i) identify a boundary between an active portion and an inactive portion of the digital image and (ii) generate boundary data indicative of a characteristic of the boundary, and However, Uemori does disclose an image processor configured to receive the digital image from the camera, and use at least one of a random sample consensus (RANSAC) technique and a Hough Transform technique to (i) identify a boundary between an active portion and an inactive portion of the digital image and (ii) generate boundary data indicative of a characteristic of the boundary, and [See Uemori [Fig. 1] Mask detection unit (121). Also, see 0056 and Fig. 2, this unit detects the shape of the boundary and radius of the circle. Also, see 0057, Hough transform is utilized for mask detection.] It would have been obvious to the person of ordinary skill in the art at the time of the effective filing date to modify the system by Amling to add the teachings of Uemori, in order to perform a simple substitution of algorithms for image boundary/size detection. Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Pang (US Patent No. 8,373,748) in view of Uemori (US 20150243033) in view of Bodor (US 20170086657) and in further view of Amling (US Patent No. 9,060,674). Regarding claim 6, Pang (modified by Uemori and Bodor) discloses the system of claim 1.
Furthermore, Pang does not explicitly disclose wherein the camera comprises an optical zoom device configured to receive the captured light from the imaging scope before the captured light is received by the light sensor; wherein the optical zoom device is selectively adjustable between a low magnification configuration, in which the optical zoom device magnifies the captured light such that the captured light occupies only a portion of the light-sensitive surface when received thereon, and a high magnification configuration, in which the optical zoom device magnifies the captured light such that the captured light occupies all of the light-sensitive surface of the light sensor when received thereon; and wherein the adjustment settings for the low magnification configuration and the high magnification configuration are determined by the system controller based on the determined scope diameter. However, Amling does disclose wherein the camera comprises an optical zoom device configured to receive the captured light from the imaging scope before the captured light is received by the light sensor; wherein the optical zoom device is selectively adjustable between a low magnification configuration, in which the optical zoom device magnifies the captured light such that the captured light occupies only a portion of the light-sensitive surface when received thereon, and a high magnification configuration, in which the optical zoom device magnifies the captured light such that the captured light occupies all of the light-sensitive surface of the light sensor when received thereon; and [See Amling [Col. 1 lines 25-30 and Col. 1 lines 60-65] Optical zoom for endoscopes. Also, see Figs. 1-4, various magnifications of the endoscopic image including partial and full.] wherein the adjustment settings for the low magnification configuration and the high magnification configuration are determined by the system controller based on the determined scope diameter. [See Amling [Col. 1 lines 25-30 and Col. 1 lines 60-65] Endoscopes having various diameters are provided with optical zoom. Also, see Fig. 8, camera control unit (860). Also, see [Col. 6 lines 37-50] and/or [Col. 3 lines 5-7], calculate the size of the image which accounts for zoom factors, various endoscopes used, and imaging conditions. Also, see Col. 3 lines 26-28, the size of an image corresponds to the zoom factor associated with the image. Also, see Col. 1 lines 18-23, the endoscopic image size is typically determined by the diameter of the rod lens system in the endoscope and the magnification of the optics. Also, see Col. 1 lines 54-55, automatically adjusting the optical zoom so as to optimize the medical image.] It would have been obvious to the person of ordinary skill in the art at the time of the effective filing date to modify the system by Pang (modified by Uemori and Bodor) to add the teachings of Amling, in order to automatically adjust the optical and/or digital zoom so as to optimize the image [See Amling [Col. 2 lines 13-16]]. Regarding claim 7, Pang (modified by Uemori, Bodor and Amling) discloses the system of claim 6. Furthermore, Pang does not explicitly disclose wherein the system controller is configured to process the boundary data to adjust the optical zoom device. However, Amling does disclose wherein the system controller is configured to process the boundary data to adjust the optical zoom device. [See Amling [Col. 1 lines 25-30 and Col. 1 lines 60-65] Endoscopes having various diameters are provided with optical zoom. Also, see Fig. 8, camera control unit (860). Also, see [Col. 6 lines 37-50] and/or [Col. 3 lines 5-7], calculate the size of the image which accounts for zoom factors, various endoscopes used, and imaging conditions. Also, see Col. 3 lines 26-28, the size of an image corresponds to the zoom factor associated with the image. Also, see Col. 1 lines 18-23, the endoscopic image size is typically determined by the diameter of the rod lens system in the endoscope and the magnification of the optics. Also, see Col. 1 lines 54-55, automatically adjusting the optical zoom so as to optimize the medical image.] Applying the same motivation as applied in claim 6. Claims 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Pang (US Patent No. 8,373,748) in view of Uemori (US 20150243033) in view of Burnett (US Patent No. 7,136,098) in view of Chen (US Patent No. 8,368,762) and in further view of Amling (US Patent No. 9,060,674). Regarding claim 13, Pang (modified by Uemori, Burnett, and Chen) discloses the system of claim 8. Furthermore, Pang does not explicitly disclose wherein the camera includes an optical zoom device configured to receive the captured light from the imaging scope before the captured light is received by the light sensor; and wherein the optical zoom device is selectively adjustable between a low magnification configuration, in which the optical zoom device magnifies the captured light such that the captured light occupies only a portion of the light-sensitive surface when received thereon, and a high magnification configuration, in which the optical zoom device magnifies the captured light such that the captured light occupies all of the light-sensitive surface of the light sensor when received thereon. However, Amling does disclose wherein the camera includes an optical zoom device configured to receive the captured light from the imaging scope before the captured light is received by the light sensor; and [See Amling [Col. 1 lines 25-30 and Col. 1 lines 60-65] Optical zoom for endoscopes.]
wherein the optical zoom device is selectively adjustable between a low magnification configuration, in which the optical zoom device magnifies the captured light such that the captured light occupies only a portion of the light-sensitive surface when received thereon, and a high magnification configuration, in which the optical zoom device magnifies the captured light such that the captured light occupies all of the light-sensitive surface of the light sensor when received thereon. [See Amling [Figs. 1-4] Various magnifications of image including partial and full.] It would have been obvious to the person of ordinary skill in the art at the time of the effective filing date to modify the system by Pang (modified by Uemori, Burnett and Chen) to add the teachings of Amling, in order to automatically adjust the optical and/or digital zoom so as to optimize the image [See Amling [Col. 2 lines 13-16]]. Regarding claim 14, Pang (modified by Uemori, Burnett, Chen and Amling) disclose the system of claim 13. Furthermore, Pang does not explicitly disclose wherein the system controller is configured to use the boundary data to adjust the optical zoom device. However, Amling does disclose wherein the system controller is configured to use the boundary data to adjust the optical zoom device. [See Amling [Col. 1 lines 25-30 and Col. 1 lines 60-65] Endoscopes having various diameters are provided with optical zoom. Also, see Fig. 8, camera control unit (860). Also, see [Col. 6 lines 37-50] and/or [Col. 3 lines 5-7], Calculate the size of the image which accounts for zoom factors, various endoscopes used, and imaging conditions. Also, see Col. 3 lines 26-28, the size of an image corresponds to zoom factor associated with image. Also, see Col. 1 lines 18-23, the endoscopic image size is typically determined by the diameter of the rod lens system in the endoscope and the magnification of the optics. Also, see Col. 
1 lines 54-55, automatically adjusting the optical zoom so as to optimize the medical image.] Applying the same motivation as applied in claim 8.

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Pang (US Patent No. 8,373,748) in view of Uemori (US 20150243033) and in further view of Amling (US Patent No. 9,060,674).

Regarding claim 18, Pang discloses a method, comprising: receiving, by an image processor, a digital image generated by an imaging system comprising a light source configured to illuminate an object with light, an imaging scope to capture light reflected, scattered, or emitted from the object, and a camera having a light sensor with a light-sensitive surface configured to receive the captured light from the imaging scope, and generate a digital image representative of the captured light; [See Pang [Fig. 3] Processor (30) to receive image data. Also, see Col. 4 lines 30-31, light source unit. Also, see Col. 4 line 30, scope. Also, see Col. 4 line 36, camera.] determining a diameter of the imaging scope based on the boundary data; and [See Pang [Col. 2 last para. to Col. 3 1st para.] The image size is an indication of the type of scope that is attached to the camera and is used to infer physical characteristics of the scope that is attached to the camera, and to select settings for various image processing which are most appropriate for that scope. Also, see Col. 5 lines 26-44, Endoscopes are designed with various different physical and functional characteristics (length, diameter, type of optics, magnification, materials, degree of flexibility, etc.).]
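The claimed "determining a diameter of the imaging scope based on the boundary data" step reduces to simple geometry once a boundary radius is in hand. The following is a minimal sketch under assumed, illustrative sensor and optics values; none of the numbers or names come from Pang, Amling, or the claims.

```python
# Illustrative mapping from a detected boundary radius (in sensor pixels)
# back to the physical diameter of the scope's image circle. Both
# constants below are hypothetical placeholders.
PIXEL_PITCH_MM = 0.003   # assumed 3 um sensor pixel pitch
MAGNIFICATION = 1.5      # assumed camera-head optical magnification

def scope_image_diameter_mm(radius_px: float) -> float:
    """Convert a fitted boundary radius into a physical image-circle diameter."""
    return 2.0 * radius_px * PIXEL_PITCH_MM / MAGNIFICATION
```

With these placeholder values, a fitted radius of 500 px would correspond to a 2.0 mm image circle; in practice the constants would be calibrated per camera head.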
Pang does not explicitly disclose using, by the image processor, at least one of a random sample consensus (RANSAC) technique and a Hough Transform technique to identify a boundary between an active portion and an inactive portion of the digital image; generating, by the image processor, boundary data indicative of a characteristic of the boundary; and the camera further including an optical zoom device configured to receive the captured light from the imaging scope before the captured light is received by the light sensor, and wherein the optical zoom device is selectively adjustable between a low magnification configuration, in which the optical zoom device magnifies the captured light such that the captured light occupies only a portion of the light-sensitive surface when received thereon, and a high magnification configuration, in which the optical zoom device magnifies the captured light such that the captured light occupies all of the light-sensitive surface of the light sensor when received thereon; wherein the method further comprises the step of adjusting the optical zoom device based on the determined diameter.

However, Uemori does disclose using, by the image processor, at least one of a random sample consensus (RANSAC) technique and a Hough Transform technique to identify a boundary between an active portion and an inactive portion of the digital image; generating, by the image processor, boundary data indicative of a characteristic of the boundary; and [See Uemori [Fig. 1] Mask detection unit (121). Also, see 0056 and Fig. 2, this unit detects the shape of the boundary and the radius of the circle. Also, see 0057, the Hough transform is utilized for mask detection.] It would have been obvious to a person of ordinary skill in the art at the time of the effective filing date to modify the system of Pang to add the teachings of Uemori, in order to perform a simple substitution of algorithms for image boundary/size detection.
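For context on the claimed techniques, a RANSAC circle fit of the kind the claim recites can be sketched in a few lines. This is an illustrative implementation, not code from Uemori or from the application: it repeatedly samples three edge points, fits the unique circle through them, and keeps the model with the most inliers; the resulting center and radius are the kind of "boundary data" from which a scope diameter could be inferred.

```python
import numpy as np

def circle_from_3_points(p1, p2, p3):
    """Circumcircle of three points; returns (cx, cy, r) or None if collinear."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, float(np.hypot(ax - ux, ay - uy))

def ransac_circle(points, iters=200, tol=2.0, seed=0):
    """Robustly fit a circle to edge points (rows of an (N, 2) array)."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(points), 3, replace=False)
        model = circle_from_3_points(*points[idx])
        if model is None:
            continue
        cx, cy, r = model
        # Inliers lie within `tol` pixels of the candidate circle.
        dist = np.abs(np.hypot(points[:, 0] - cx, points[:, 1] - cy) - r)
        inliers = int((dist < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = model, inliers
    return best
```

In an endoscopic image the edge points would come from thresholding the dark mask region around the active image circle; the Hough transform alternative accumulates votes over candidate (cx, cy, r) triples instead of sampling them.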
Pang (modified by Uemori) does not explicitly disclose the camera further including an optical zoom device configured to receive the captured light from the imaging scope before the captured light is received by the light sensor, and wherein the optical zoom device is selectively adjustable between a low magnification configuration, in which the optical zoom device magnifies the captured light such that the captured light occupies only a portion of the light-sensitive surface when received thereon, and a high magnification configuration, in which the optical zoom device magnifies the captured light such that the captured light occupies all of the light-sensitive surface of the light sensor when received thereon; wherein the method further comprises the step of adjusting the optical zoom device based on the determined diameter.

However, Amling does disclose the camera further including an optical zoom device configured to receive the captured light from the imaging scope before the captured light is received by the light sensor, and wherein the optical zoom device is selectively adjustable between a low magnification configuration, in which the optical zoom device magnifies the captured light such that the captured light occupies only a portion of the light-sensitive surface when received thereon, and a high magnification configuration, in which the optical zoom device magnifies the captured light such that the captured light occupies all of the light-sensitive surface of the light sensor when received thereon; [See Amling [Col. 1 lines 25-30 and Col. 1 lines 60-65] Optical zoom for endoscopes with camera. Also, see Figs. 1-4, various magnifications of the endoscopic image, including partial and full.] wherein the method further comprises the step of adjusting the optical zoom device based on the determined diameter. [See Amling [Col. 1 lines 25-30 and Col.
1 lines 60-65] Endoscopes having various diameters are provided with optical zoom. Also, see Fig. 8, camera control unit (860). Also, see [Col. 6 lines 37-50] and/or [Col. 3 lines 5-7], calculate the size of the image, which accounts for zoom factors, the various endoscopes used, and imaging conditions. Also, see Col. 3 lines 26-28, the size of an image corresponds to the zoom factor associated with the image. Also, see Col. 1 lines 18-23, the endoscopic image size is typically determined by the diameter of the rod lens system in the endoscope and the magnification of the optics. Also, see Col. 1 lines 54-55, automatically adjusting the optical zoom so as to optimize the medical image.] It would have been obvious to a person of ordinary skill in the art at the time of the effective filing date to modify the system of Pang (modified by Uemori) to add the teachings of Amling, in order to automatically adjust the optical and/or digital zoom so as to optimize the image [See Amling [Col. 2 lines 13-16]].

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Pang (US Patent No. 8,373,748) in view of Uemori (US 20150243033) and in further view of Chen et al. (hereinafter referred to as Chen) (US Patent No. 8,368,762).

Regarding claim 21, Pang discloses an imaging system, comprising: receiving, by an image processor, a digital image generated by an imaging system comprising a light source configured to illuminate an object with light, an imaging scope to capture light reflected, scattered, or emitted from the object, and a camera having a light sensor with a light-sensitive surface configured to receive the captured light from the imaging scope, and generate a digital image representative of the captured light; [See Pang [Fig. 3] Processor (30) to receive image data. Also, see Col. 4 lines 30-31, light source unit. Also, see Col. 4 line 30, scope. Also, see Col. 4 line 36, camera.]
determining a class of the imaging scope by processing boundary data; [See Pang [Col. 2 last para. to Col. 3 1st para.] The image size is an indication of the type of scope that is attached to the camera and is used to infer physical characteristics of the scope that is attached to the camera, and to select settings for various image processing which are most appropriate for that scope. Also, see Col. 5 lines 26-44, Endoscopes are designed with various different physical and functional characteristics (length, diameter, type of optics, magnification, materials, degree of flexibility, etc.). Parameters whose settings vary include video gain levels, enhancement level, camera shutter speed, gamma level, and others.] storing, on a non-volatile memory, a set of optical model parameters corresponding to the class of the imaging scope, [See Pang [Fig. 3] Non-vol. memory (36). Also, see (27), this characteristic of the image can be used to look up, in a data structure such as a lookup table, the type of scope being used and to look up and select appropriate values for various parameters used by the CCU 4 for the processing or display of images.] selecting a set of optical model parameters associated with the determined class of the imaging scope; and [See Pang [Col. 2 last para. to Col. 3 1st para.] The image size is an indication of the type of scope that is attached to the camera and is used to infer physical characteristics of the scope that is attached to the camera, and to select settings for various image processing which are most appropriate for that scope. Also, see Col. 5 lines 26-44, Endoscopes are designed with various different physical and functional characteristics (length, diameter, type of optics, magnification, materials, degree of flexibility, etc.). Parameters whose settings vary include video gain levels, enhancement level, camera shutter speed, gamma level, and others.]
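The lookup-table mechanism Pang is cited for can be sketched as follows. This is an illustrative reconstruction, not code from Pang: the scope class names, pixel ranges, and parameter values are all hypothetical placeholders standing in for the kind of entries such a table would hold.

```python
# Hypothetical lookup table mapping a measured image-circle diameter
# (in pixels) to a scope class and per-class processing settings, in
# the manner of the table Pang is cited for. All values illustrative.
SCOPE_CLASSES = [
    # (min_px, max_px, class_name, settings)
    (0,   400,  "2.7mm", {"video_gain_db": 12, "shutter": "1/60",  "gamma": 2.4}),
    (400, 700,  "4.0mm", {"video_gain_db": 6,  "shutter": "1/120", "gamma": 2.2}),
    (700, 1200, "10mm",  {"video_gain_db": 0,  "shutter": "1/250", "gamma": 2.0}),
]

def select_settings(image_circle_px: float) -> tuple[str, dict]:
    """Return the scope class and its settings for a measured image-circle size."""
    for lo, hi, name, settings in SCOPE_CLASSES:
        if lo <= image_circle_px < hi:
            return name, settings
    raise ValueError(f"no scope class for diameter {image_circle_px} px")
```

The boundary data from the mask-detection step supplies `image_circle_px`; the returned settings correspond to the gain, shutter, and gamma parameters the office action lists as varying by scope class.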
automatically adjusting, based on the selected optical model parameters, [See Pang [Col. 2 last para. to Col. 3 1st para.] The image size is an indication of the type of scope that is attached to the camera and is used to infer physical characteristics of the scope that is attached to the camera, and to select settings for various image processing which are most appropriate for that scope. Also, see Col. 5 lines 26-44, Endoscopes are designed with various different physical and functional characteristics (length, diameter, type of optics, magnification, materials, degree of flexibility, etc.). Parameters whose settings vary include video gain levels, enhancement level, camera shutter speed, gamma level, and others.]

Pang does not explicitly disclose using, by the image processor, at least one of a random sample consensus (RANSAC) technique and a Hough Transform technique to identify a boundary between an active portion and an inactive portion of the digital image; generating, by the image processor, boundary data indicative of a characteristic of the boundary; the set of optical model parameters for the modulation transfer function (MTF), spatial frequency response (SFR), lens distortion, vignetting, and/or chromatic aberration of the imaging scope; at least one of a sharpness of the digital image; a lens distortion correction of the digital image; vignetting correction of the digital image; and a chromatic aberration correction of the digital image.

However, Uemori does disclose using, by the image processor, at least one of a random sample consensus (RANSAC) technique and a Hough Transform technique to identify a boundary between an active portion and an inactive portion of the digital image; generating, by the image processor, boundary data indicative of a characteristic of the boundary; [See Uemori [Fig. 1] Mask detection unit (121). Also, see 0056 and Fig. 2, this unit detects the shape of the boundary and the radius of the circle.
Also, see 0057, the Hough transform is utilized for mask detection.] It would have been obvious to a person of ordinary skill in the art at the time of the effective filing date to modify the system of Pang to add the teachings of Uemori, in order to perform a simple substitution of algorithms for image boundary/size detection.

Pang (modified by Uemori) does not explicitly disclose the set of optical model parameters for the modulation transfer function (MTF), spatial frequency response (SFR), lens distortion, vignetting, and/or chromatic aberration of the imaging scope; at least one of a sharpness of the digital image; a lens distortion correction of the digital image; vignetting correction of the digital image; and a chromatic aberration correction of the digital image.

However, Chen does disclose the set of optical model parameters for the modulation transfer function (MTF), spatial frequency response (SFR), lens distortion, vignetting, and/or chromatic aberration of the imaging scope; [See Chen (11) Determining a model for a photographic system which is used to apply corrections to lens distortions, chromatic aberration, and vignetting.] at least one of a sharpness of the digital image; a lens distortion correction of the digital image; vignetting correction of the digital image; and a chromatic aberration correction of the digital image. [See Chen (11) Determining a model for a photographic system which is used to apply corrections to lens distortions, chromatic aberration, and vignetting.] It would have been obvious to a person of ordinary skill in the art at the time of the effective filing date to modify the system of Pang (modified by Uemori) to add the teachings of Chen, in order to incorporate the obvious camera correction techniques (such as lens distortion, vignetting, etc.) within the other parameters not explicitly described by Pang.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action.
Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES T BOYLAN whose telephone number is (571)272-8242. The examiner can normally be reached Monday-Friday 7am-3pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JAMIE ATALA, can be reached at 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JAMES T BOYLAN/Examiner, Art Unit 2486

Prosecution Timeline

Jul 29, 2024
Application Filed
Feb 03, 2025
Examiner Interview (Telephonic)
Feb 03, 2025
Examiner Interview Summary
Aug 27, 2025
Non-Final Rejection — §103, §112, §DP
Nov 06, 2025
Interview Requested
Nov 21, 2025
Applicant Interview (Telephonic)
Nov 21, 2025
Examiner Interview Summary
Nov 26, 2025
Response Filed
Feb 17, 2026
Final Rejection — §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598400
LIGHT FIELD MICROSCOPE-BASED IMAGE ACQUISITION METHOD AND APPARATUS
2y 5m to grant Granted Apr 07, 2026
Patent 12587635
AFFINE MERGE MODE WITH TRANSLATIONAL MOTION VECTORS
2y 5m to grant Granted Mar 24, 2026
Patent 12587752
TENSORIAL TOMOGRAPHIC FOURIER PTYCHOGRAPHY
2y 5m to grant Granted Mar 24, 2026
Patent 12581196
GUIDED REAL-TIME VEHICLE IMAGE ANALYZING DIGITAL CAMERA WITH AUTOMATIC PATTERN RECOGNITION AND ENHANCEMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12579616
ENHANCED EXTENDED DEPTH OF FOCUSING ON BIOLOGICAL SAMPLES
2y 5m to grant Granted Mar 17, 2026
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
63%
Grant Probability
74%
With Interview (+11.8%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 487 resolved cases by this examiner. Grant probability derived from career allow rate.
