Prosecution Insights
Last updated: April 19, 2026
Application No. 18/529,780

LONG-RANGE OPTICAL DEVICE

Non-Final OA: §101, §103, §112
Filed: Dec 05, 2023
Examiner: SOHRABY, PARDIS
Art Unit: 2664
Tech Center: 2600 (Communications)
Assignee: Swarovski-Optik AG & Co. KG
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 12m
With Interview: 89%

Examiner Intelligence

Career Allow Rate: 79% (73 granted / 92 resolved; +17.3% vs TC avg, above average)
Interview Lift: +9.7% across resolved cases with interview (moderate, roughly +10%)
Typical Timeline: 2y 12m avg prosecution; 21 applications currently pending
Career History: 113 total applications across all art units
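The card figures above are simple ratios and can be reproduced directly; a quick sanity check of the dashboard arithmetic (all numbers taken from the cards above):

```python
# Career allowance rate: 73 granted of 92 resolved (from the card above)
granted, resolved = 73, 92
career_rate = 100 * granted / resolved
print(f"career allow rate: {career_rate:.1f}%")  # 79.3%, rounded to 79% on the card

# Interview effect: predicted grant probability rises from 79% to 89%
base, with_interview = 79, 89
print(f"interview lift: {with_interview - base:+d} points")  # +10, the "Moderate +10% lift"
```

Note that the +9.7% historical interview lift and the +10-point prediction delta are reported as separate figures above.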

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 58.7% (+18.7% vs TC avg)
§102: 16.2% (-23.8% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)

TC averages are estimates; based on career data from 92 resolved cases.
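The deltas are internally consistent with a single baseline: subtracting each reported delta from the examiner's rate recovers the same Tech Center average, matching the note that the TC figure is an estimate. A quick check (figures copied from the list above):

```python
# (examiner allowance rate %, delta vs TC average %) per statute, from above
stats = {"101": (14.4, -25.6), "103": (58.7, 18.7),
         "102": (16.2, -23.8), "112": (9.4, -30.6)}
for statute, (rate, delta) in stats.items():
    baseline = rate - delta  # examiner rate minus reported delta
    print(f"§{statute}: implied TC baseline {baseline:.1f}%")
# every statute implies the same ~40.0% baseline estimate
```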

Office Action

Grounds: §101, §103, §112
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. A50929/2022, filed on December 6, 2022 with the Austrian Patent Office. The interim copy of the foreign priority document was filed on December 5, 2023, but the retrieval request was indicated as unsuccessful on May 06, 2024.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/05/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner. The information disclosure statement (IDS) submitted on 06/04/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claim 16 is objected to because of the following informalities: the claim is missing "an/one" in "wherein it comprises at least an/one objective". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Regarding claim 15, the phrase "such as" renders the claim indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The limitations, under their broadest reasonable interpretation, cover a mental process (a concept performed in the human mind, including observation, evaluation, judgment, and opinion), methods of organizing human activity, and mathematical concepts and calculations. The claims recite a device configured to calculate similarity between images. This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations showing they are specifically applied to a particular technological problem to be solved. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be done mentally, and no additional features in the claims would preclude them from being performed as such except for generic computer elements recited at a high level of generality (i.e., processor, memory).

According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:

STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or

STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:

STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?

STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

Using the two-step inquiry, it is clear that claim 1 is directed to an abstract idea, as shown below:

STEP 1: Do the claims fall within one of the statutory categories? YES. Claim 1 is directed to a device.

STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea? YES, the claims are directed toward a mental process (i.e., an abstract idea).

With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:

- Mathematical concepts: mathematical relationships, mathematical formulas or equations, mathematical calculations;
- Certain methods of organizing human activity: fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
- Mental processes: concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).

The device in claim 1 comprises a mental process that can be practicably performed in the human mind (or by generic computers or components configured to perform the method) and is, therefore, an abstract idea.
Regarding claim 1: the device recites the following steps (functions):

"A long-range optical device configured to compare at least one image currently captured with the long-range optical device" (data gathering and a mental process, including observation and evaluation, which can be done mentally in the human mind);

"with at least one reference image previously captured with the long-range optical device for similarity" (generic computers or components configured to perform the method);

"and to calculate at least one degree for the similarity of the at least one currently captured image with the at least one reference image and, if the at least one degree of similarity reaches or exceeds or falls below at least one predetermined value" (mathematical concepts: mathematical relationships, mathematical formulas or equations, mathematical calculations);

"to output at least one indication for a user" (generic computers or components configured to perform the step).

These limitations, as drafted, describe a simple process that, under their broadest reasonable interpretation, covers performance of the limitations in the mind or by a human. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas—the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 ("'[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work'" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same). As such, a person could mentally analyze an image and determine a fill level, either mentally or using a pen and paper. The mere nominal recitation that the various steps are being executed by a device/in a device (e.g., a processing unit) does not take the limitations out of the mental process grouping. Thus, the claims recite a mental process.

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? NO, the claims do not recite additional elements that integrate the judicial exception into a practical application.

With regard to STEP 2A (PRONG 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:

- an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
- an additional element applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition;
- an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
- an additional element effects a transformation or reduction of a particular article to a different state or thing; and
- an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.

While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:

- an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
- an additional element adds insignificant extra-solution activity to the judicial exception; and
- an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.

Claim 1 does not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application. These limitations are recited at a high level of generality (i.e., as a general action or change being taken based on the results of the acquiring step) and amount to mere post-solution actions, which is a form of insignificant extra-solution activity. Further, the additional elements are claimed generically and operate in their ordinary capacity, such that they do not use the judicial exception in a manner that imposes a meaningful limit on it. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO, the claims do not recite additional elements that amount to significantly more than the judicial exception.
With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, examiners should continue to consider whether an additional element or combination of elements:

- adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
- simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.

Claim 1 does not recite any additional elements that are not well-understood, routine or conventional. The use of a computer for "capturing, comparing, and calculating," etc., as claimed in claim 1, is a routine, well-understood and conventional process that is performed by computers. Thus, since claim 1 (a) is directed toward an abstract idea, (b) does not recite additional elements that integrate the judicial exception into a practical application, and (c) does not recite additional elements that amount to significantly more than the judicial exception, it is clear that claim 1 is not eligible subject matter under 35 U.S.C. 101.

Regarding claims 2-18: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 7, and 13-18 are rejected under 35 U.S.C. 103 as being unpatentable over Shimada et al. (US 20130202163 A1), hereinafter Shimada, in view of Dohr et al. (US 20180024376 A1), hereinafter Dohr.

Regarding claim 1, Shimada teaches "compare at least one image currently captured ... with at least one reference image previously captured with the long-range optical device for similarity" ("the image capturing apparatus 100 obtains similarity degree information related to a similarity degree between a predetermined reference image and an image of a region B, which corresponds to the candidate region A of the specific subject image, in another frame image obtained a predetermined number of frames before from one frame image." Shimada, para. [0024]) and "to calculate at least one degree for the similarity of the at least one currently captured image with the at least one reference image and, if the at least one degree of similarity reaches or exceeds or falls below at least one predetermined value" ("in accordance with the image capturing apparatus 100 of this embodiment, when it is determined that the evaluation value (similarity degree) between the image of the candidate region A of the specific subject image (for example, the face image and the like) in the obtained one frame image Fn (for example, the live view display-use image generated from the captured image, and the like) and the predetermined reference image serving as the determination criteria of the specific subject image concerned is equal to or more than the first threshold value, the candidate region A of the specific subject image is specified as the image region D of the specific subject image" Shimada, para. [0122]).

However, Shimada does not teach "A long-range optical device configured to" and "to output at least one indication for a user." Dohr teaches a long-range optical device ("a long-range optical device (1)" Dohr, abstract and fig. 1) and outputting at least one indication for a user ("A user thus sees a superimposing of the image of the remote object and of the image produced by the display device 4." Dohr, para. [0075]).

Shimada and Dohr are combinable because they are from the same field of endeavor, image processing. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Shimada in light of Dohr's long-range optical device. One would have been motivated to do so because it can help the observer read the results of the rangefinder in addition to seeing the image of the observed surroundings in the represented field of vision. (Dohr, para. [0003])

Regarding claim 2, Shimada teaches "wherein the at least one indication is an optical and/or acoustic and/or haptic and/or mechanically and/or electromechanically generated indication" ("the imaging control section 2 drives the electronic imaging unit 1b in a scanning manner by the timing generator and the driver, and converts the optical image into the two-dimensional image signal by the electronic imaging unit 1b in every predetermined cycle." Shimada, para. [0030]).

Regarding claim 3, Shimada teaches "wherein it is configured to compare the at least one currently captured image with at least two previously captured reference images for similarity" ("the related information obtaining unit 5f may obtain similarity degree information of a region B in a frame image F two frames before." Shimada, para. [0076]) "and, if the at least one degree for the similarity of the at least one currently captured image with one of the two reference images exceeds or falls below the at least one predetermined value, to output the indication" ("even if the evaluation value concerned is less than the first threshold value, then it can be specified whether or not the candidate region A belongs to the specific subject image based on the similarity degree information of the region B, which corresponds to the candidate region A of the specific subject image concerned, in the other frame image Fm, and the lowering of the detection rate of the specific subject can be suppressed." Shimada, para. [0123]).

Regarding claim 4, Shimada teaches "wherein a first reference image of the at least two reference images defines a first border of a region and a second reference image of the at least two reference images defines a second border of the region" ("The tentative candidate detection unit c1 calculates the evaluation values for the respective discrimination target regions C in accordance with discrimination results of the sub-discriminators. Specifically, in the case where each of the discrimination target regions C is determined to be the face in all the sub-discriminators defined at each of the stages in the plurality of stages which define the plurality of sub-discriminators, the tentative candidate detection unit c1 passes the discrimination target region C through the stage concerned, and transfers the discrimination target region C to the next stage." Shimada, para. [0054]).

Regarding claim 7, Shimada teaches "wherein it comprises at least one electronic image capturing sensor in the form of a CCD and/or CMOS and/or infrared sensor" ("an image sensor such as a charge coupled device (CCD) and a complementary metal-oxide semiconductor (CMOS)" Shimada, para. [0028]).

Regarding claim 13, Shimada teaches "wherein it comprises at least one actuator device to trigger a capturing of the at least one reference image" ("the operation input unit 10 includes a shutter button (not shown) for inputting an image capturing instruction of the subject, a selection deciding button (not shown) for inputting selection instructions for the image capturing mode" Shimada, para. [0097]).

Regarding claim 14, Shimada teaches "wherein it is configured to store the at least one reference image in an internal memory of the device after input of a command for storing the at least one reference image" ("As shown in FIG. 2, first, the central control section 11 sequentially stores such live view display-use image data of the frame images F, which are sequentially generated by the image data generation section 3 by the image capturing of the subjects by the imaging section 1, in the memory 4, and allows the memory 4 to temporarily memorize the image data (Step S1)." Shimada, para. [0103]). Shimada does not teach the long-range optical device; Dohr teaches the long-range optical device ("a long-range optical device (1)" Dohr, abstract and fig. 1). Shimada and Dohr are combinable because they are from the same field of endeavor, image processing. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Shimada in light of Dohr's long-range optical device. One would have been motivated to do so because it can help the observer read the results of the rangefinder in addition to seeing the image of the observed surroundings in the represented field of vision. (Dohr, para. [0003])

Regarding claim 15, Shimada does not teach "wherein it is configured to continuously capture currently captured images and compare them with the at least one reference image after inputting a command and/or executing an action, such as panning the long-range optical device." Dohr teaches this limitation ("In this way images can also be still seen by the observer as mostly continually variable even in phases of rapid change of image information data." Dohr, para. [0109]). Shimada and Dohr are combinable because they are from the same field of endeavor, image processing. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Shimada in light of Dohr's continuous capture of images. One would have been motivated to do so because it can help the observer read the results of the rangefinder in addition to seeing the image of the observed surroundings in the represented field of vision. (Dohr, para. [0003])

Regarding claim 16, Shimada does not teach "wherein it comprises at least objective and at least one eyepiece." Dohr teaches this limitation ("wherein a joint is arranged in an objective housing and at least one lens of the objective is mounted movably by the joint in the objective housing," Dohr, para. [0007]) and ("wherein at an object-side end of the lens tube a front objective lens system of the objective is arranged and an eyepiece side end of the lens tube is mounted pivotably in the bearing housing" Dohr, para. [0007]). Shimada and Dohr are combinable because they are from the same field of endeavor, image processing. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Shimada in light of Dohr's objective and eyepiece. One would have been motivated to do so because it can help the observer read the results of the rangefinder in addition to seeing the image of the observed surroundings in the represented field of vision. (Dohr, para. [0003])

Regarding claim 17, Shimada does not teach "wherein it comprises an electro-optical display device visible through the eyepiece and operable to output the indication." Dohr teaches this limitation ("Said display device 4 comprises as its primary components a LCoS display 5 (LCoS = Liquid Crystal on Silicon) and a light source 6 for illuminating the LCoS display 5." Dohr, para. [0096]) and ("Light reflected by the LCoS display 5 then passes through the illuminating prism 9 and the display prism 12 according to the display beam path 7, lastly to the observation beam path 8 (FIG. 7). The image produced on the LCoS display 5 is visible to an observer in this way through the eyepiece 3 superimposed with the image of a remote object (FIG. 1)" Dohr, para. [0119]). Shimada and Dohr are combinable because they are from the same field of endeavor, image processing. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Shimada in light of Dohr's electro-optical display device. One would have been motivated to do so because it can help the observer read the results of the rangefinder in addition to seeing the image of the observed surroundings in the represented field of vision. (Dohr, para. [0003])

Regarding claim 18, Shimada teaches "wherein it comprises at least one controller which is configured to calculate the degree of the similarity of the at least one currently captured image and the at least one reference image" ("when it is determined that the evaluation value (similarity degree) between the image of the candidate region A of the specific subject image ... is equal to or more than the first threshold value, the candidate region A of the specific subject image is specified as the image region D of the specific subject image" Shimada, para. [0122], quoted in full for claim 1 above) "and to control the generation and output of the indication" ("The display control section 8 performs control to read out the image data for use of display, which is temporarily memorized in the memory 4, and to allow the display section 9 to display the image data concerned thereon." Shimada, para. [0094]).

Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Shimada and Dohr as applied above, and further in view of Van Geest et al. (US 20240080431 A1), hereinafter Van Geest.
Regarding claim 5, the combination of Shimada and Dohr does not teach wherein it is configured to determine whether the currently captured image is inside or outside the region. Van Geest teaches wherein it is configured to determine whether the currently captured image is inside or outside the region. (“the image synthesis further comprises the image region circuit determining a second region for the first image region and wherein the view synthesis circuit is arranged to generate the view image with the image region being opaque if the view pose is inside the second region, partially transparent if the view pose is outside the second region and inside the first region, and fully transparent if the view pose is outside the first region.” Van Geest, para. [0029]) Shimada, Dohr, and Van Geest are combinable because they are from the same field of endeavor, image processing. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Shimada and Dohr in light of Van Geest’s to determine whether the currently captured image is inside or outside the region. One would have been motivated to do so because it can provide a user with an improved experience of having a coherent and consistent movement in the scene. (Van Geest, para. [0014]) Regarding claim 6, Van Geest teaches wherein it is configured to generate an indication of whether the at least one currently captured image is inside or outside the region. (“the view image may be generated with image regions being opaque if the view pose is inside the inner viewing region and fully transparent if the view pose is outside the outer viewing region.” Van Geest, para. [0166]) Shimada, Dohr, and Van Geest are combinable because they are from the same field of endeavor, image processing. 
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Shimada and Dohr in light of Van Geest to generate an indication of whether the currently captured image is inside or outside the region. One would have been motivated to do so because it can provide a user with an improved experience of coherent and consistent movement in the scene (Van Geest, para. [0014]).

Claims 8-12 are rejected under 35 U.S.C. 103 as being unpatentable over Shimada and Dohr as set forth above, and further in view of Sano et al. (US 5638465 A), hereinafter referred to as Sano.

Regarding claim 8, Dohr teaches the long-range optical device ("a long-range optical device (1)", Dohr, abstract and Fig. 1). The combination of Shimada and Dohr does not teach a device configured to determine at least one first frequency distribution of values of at least one characteristic image parameter in the at least one reference image and at least one second frequency distribution of values of the at least one characteristic image parameter in the at least one currently captured image, and to compare the two frequency distributions with one another.

However, Sano teaches a device configured to determine such first and second frequency distributions and to compare the two frequency distributions with one another ("The number of times the loci intersect, that is, the accumulated voting distribution, is obtained as indicated by the curve 37 in FIG. 4, for instance; the calculation of the maximum point (position) and maximum peak value (maximum frequency value) of the distribution is to calculate the position of the reference point in the input image 31 and its circularity relative to the standard image 21." Sano, col. 4, lines 57-63).

Shimada, Dohr, and Sano are combinable because they are from the same field of endeavor, image processing. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Shimada and Dohr in light of Sano's frequency distribution. One would have been motivated to do so because it can increase efficiency (Sano, col. 17, line 58).

Regarding claim 9, the combination of Shimada and Dohr does not teach a device configured to calculate at least one correlation coefficient from the frequency distribution of the at least one reference image and the frequency distribution of the at least one currently captured image as a degree of the similarity of the at least one reference image with the at least one currently captured image.

Sano, however, teaches calculating such a correlation coefficient as a degree of similarity ("The weighted similarity in the third step is obtained by the generalized Hough transform operation which votes the weights of feature points into the parameter space. Alternatively, a weighted normalized correlation operation is employed which uses a feature point weight sequence to obtain normalized correlation factors between a feature point value sequence of the feature image obtained from the input image and a feature point value sequence of the feature image from the training image." Sano, col. 6, lines 48-56).

Shimada, Dohr, and Sano are combinable because they are from the same field of endeavor, image processing. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Shimada and Dohr in light of Sano's correlation coefficient of the frequency distribution. One would have been motivated to do so because it can increase efficiency (Sano, col. 17, line 58).

Regarding claim 10, Dohr teaches the long-range optical device ("a long-range optical device (1)", Dohr, abstract and Fig. 1). However, the combination of Shimada and Dohr does not teach a device configured to calculate the at least one first frequency distribution and the at least one second frequency distribution each in the form of a histogram.

Sano teaches calculating each frequency distribution in the form of a histogram ("Next, in the training feature point value distribution measuring step 105, a feature value histogram is generated, for each feature point, from a sequence of corrected training feature images (quantized images) stored for each feature. In this case, since a displacement correction error is usually present, the feature value histogram is produced covering pixel surrounding the noted feature point in correspondence with a permitted correction error. It is also possible to divide the corrected feature image into image blocks overlapping in correspondence with the permitted correction error and produce the histogram in each block. In FIG. 11 there are shown feature value histograms derived from a sequence of zero-crossing-point feature images (1 at the zero crossing point and 0 at other points) 52. A feature value histogram 54 near a boundary 53 is high in the frequency of the zero crossing point 1, indicating a good likelihood of the zero crossing point.
On the other hand, a feature value histogram 56 at a portion 55 appreciably apart from the boundary is low in the frequency of the zero crossing point 1, indicating a small likelihood of the zero crossing point." Sano, col. 13, lines 51-67 and col. 14, lines 1-3).

Shimada, Dohr, and Sano are combinable because they are from the same field of endeavor, image processing. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Shimada and Dohr in light of Sano's histogram. One would have been motivated to do so because it can increase efficiency (Sano, col. 17, line 58).

Regarding claim 11, Shimada teaches wherein the at least one characteristic image parameter is a grayscale and/or color value of an individual pixel ("The image data generation section 3 appropriately performs gain adjustment for analog-value signals of the frame images F, which are transferred thereto from the electronic imaging unit 1b, for each of color components of R, G and B, thereafter, performs sample holding for the signals concerned by a sample-and-hold circuit (not shown), and converts the signals into digital data by an A/D converter (not shown). Then, the image data generation section 3 performs color process treatment, which includes pixel interpolation processing and .gamma.-correction processing, for the digital data by a color process circuit (not shown), and thereafter, generates digital-value luminance signals Y and color-difference signals Cb and Cr (YUV data)." Shimada, para. [0032]).

Regarding claim 12, Dohr teaches wherein the long-range optical device is configured ("a long-range optical device (1)", Dohr, abstract and Fig. 1). The combination of Shimada and Dohr does not teach a device configured to determine a grayscale image from the at least one reference image and from the at least one currently captured image.

Sano teaches determining a grayscale image from the at least one reference image and from the at least one currently captured image ("consider the case of extracting zero-crossing-point images which are contour feature images, from the inspected pattern images shown in FIG. 6. Such a zero-crossing-point image is obtained by calculating a quadratic differential image through the convolution of the original image and a Laplacian gaussian filter and by calculating zero crossing points of the quadratic differential value (corresponding to inflection points where the gray-scaled value of the original image changes)." Sano, col. 11, lines 30-38).

Shimada, Dohr, and Sano are combinable because they are from the same field of endeavor, image processing. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Shimada and Dohr in light of Sano's determination of a gray-scaled image. One would have been motivated to do so because it can increase efficiency (Sano, col. 17, line 58).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PARDIS SOHRABY, whose telephone number is (571) 270-0809. The examiner can normally be reached Monday through Friday, 9 am to 6 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PARDIS SOHRABY/
Examiner, Art Unit 2664

/CHARLOTTE M BAKER/
Primary Examiner, Art Unit 2664
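The scheme rejected over Sano in claims 8-10 (compute a frequency distribution, in histogram form, of a characteristic image parameter for the reference image and for the currently captured image, then compare the two distributions via a correlation coefficient as a degree of similarity) can be sketched as follows. This is an illustrative reconstruction, not code from the application or the cited references; the function names and the choice of 8-bit grayscale pixel values as the characteristic image parameter are assumptions for the sketch.

```python
import numpy as np

def frequency_distribution(image, bins=256):
    # Histogram of 8-bit grayscale pixel values: the assumed
    # "characteristic image parameter" (claims 8, 10, and 11).
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist

def similarity(reference, current):
    # Correlation coefficient between the two frequency distributions,
    # used as a degree of similarity between the images (claim 9).
    h_ref = frequency_distribution(reference)
    h_cur = frequency_distribution(current)
    return float(np.corrcoef(h_ref, h_cur)[0, 1])

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(64, 64))
shifted = np.roll(reference, 5, axis=1)      # same content, moved sideways
brighter = np.clip(reference + 80, 0, 255)   # different pixel statistics

print(similarity(reference, shifted))   # approximately 1.0: identical distributions
print(similarity(reference, brighter))  # much lower: distributions differ
```

Note that a pure translation of the scene leaves the histogram, and hence the similarity, unchanged, which is why such a measure can flag a genuinely different scene rather than mere camera movement.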
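Claim 12's step of determining a grayscale image, like the luminance signal Y that Shimada's image data generation section derives from R, G, B data, reduces to a weighted sum of the color channels. A minimal sketch, assuming the standard ITU-R BT.601 luma weights (the specific weights are an assumption; neither the application nor the office action specifies them):

```python
import numpy as np

def to_grayscale(rgb):
    # Weighted sum of the R, G, B channels; BT.601 luma weights
    # assumed, matching the Y component of YUV data.
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

rgb = np.array([[[255, 255, 255], [0, 0, 0]],
                [[255, 0, 0], [0, 255, 0]]], dtype=float)
print(to_grayscale(rgb))  # white -> ~255, black -> 0, green brighter than red
```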

Prosecution Timeline

Dec 05, 2023
Application Filed
Nov 28, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592015
PREDICTING SCATTERED SIGNAL OF X-RAY, AND CORRECTING SCATTERED BEAM
2y 5m to grant Granted Mar 31, 2026
Patent 12573236
FACIAL EXPRESSION-BASED DETECTION METHOD FOR DEEPFAKE BY GENERATIVE ARTIFICIAL INTELLIGENCE (AI)
2y 5m to grant Granted Mar 10, 2026
Patent 12567240
OPEN VOCABULARY INSTANCE SEGMENTATION WITH NOISE ESTIMATION AND ROBUST STUDENT
2y 5m to grant Granted Mar 03, 2026
Patent 12555378
IMAGE ANALYSIS SYSTEM, IMAGE ANALYSIS METHOD, AND PROGRAM
2y 5m to grant Granted Feb 17, 2026
Patent 12536666
Computer Software Module Arrangement, a Circuitry Arrangement, an Arrangement and a Method for Improved Image Processing
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
79%
Grant Probability
89%
With Interview (+9.7%)
2y 12m
Median Time to Grant
Low
PTA Risk
Based on 92 resolved cases by this examiner. Grant probability derived from career allow rate.
