Prosecution Insights
Last updated: April 19, 2026
Application No. 18/455,934

THREE-DIMENSIONAL MEASURING APPARATUS, THREE-DIMENSIONAL MEASURING METHOD, STORAGE MEDIUM, SYSTEM, AND METHOD FOR MANUFACTURING AN ARTICLE

Status: Final Rejection (§103)
Filed: Aug 25, 2023
Examiner: ALLEN, KYLA GUAN-PING TI
Art Unit: 2661
Tech Center: 2600 (Communications)
Assignee: Canon Kabushiki Kaisha
OA Round: 2 (Final)

Grant Probability: 89% (Favorable)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 3y 0m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89% (above average; 47 granted / 53 resolved; +26.7% vs TC avg)
Interview Lift: +17.1% (strong), measured on resolved cases with interview vs. without
Typical Timeline: 3y 0m average prosecution; 30 applications currently pending
Career History: 83 total applications across all art units

Statute-Specific Performance

§101: 9.9% (-30.1% vs TC avg)
§103: 52.5% (+12.5% vs TC avg)
§102: 19.3% (-20.7% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)

Note: Tech Center average figures are estimates. Based on career data from 53 resolved cases.
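The headline figures above follow directly from the raw counts in this report (47 granted of 53 resolved, 83 total applications, 30 pending). A minimal Python sketch of that arithmetic, assuming the dashboard simply rounds the grant-to-resolved ratio and reports deltas against an estimated Tech Center average (the derivation shown is an assumption, not the vendor's formula):

```python
# Arithmetic assumed to underlie the summary cards above; the counts come from
# the report, but the derivation/rounding shown here is an assumption.
granted, resolved = 47, 53
total_applications, pending = 83, 30

career_allow_rate = granted / resolved            # 0.8868 -> shown as 89%
implied_tc_average = career_allow_rate - 0.267    # "+26.7% vs TC avg" -> ~62%

print(f"Career allow rate:  {career_allow_rate:.1%}")   # 88.7%
print(f"Implied TC average: {implied_tc_average:.1%}")  # ~62.0%
print(f"Resolved + pending: {resolved + pending}")      # 83, matches total filed
```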

Office Action

§103
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Response to Amendments The amendments to claims 1, 5-8, 10-13, 15, 16, 18-29, and 31 are accepted and entered. Claims 2-4 and 17 are cancelled. Claims 1, 5-16, and 18-31 are pending regarding this application. Response to Arguments Applicant’s arguments, see Remarks, filed 01/27/2026, with respect to the Claim Objections applied to claims 1, 28, 29, and 31 have been fully considered and are persuasive. The Claim Objections applied to claims 1, 28, 29, and 31 have been withdrawn. Applicant’s arguments, see Remarks, filed 01/27/2026, with respect to the 112(b) Rejections applied to claims 1-12, 14, 16-27, and 30 have been fully considered and are persuasive. The 112(b) Rejections applied to claims 1-12, 14, 16-27, and 30 have been withdrawn. Applicant’s arguments with respect to claim(s) 1, 2, 5, 6, 12, 13, 16, 23-25, 28, and 29 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Information Disclosure Statement The information disclosure statement (IDS) submitted on 10/03/2025 is considered and attached. Claim Objections Claims 16, 20, 22, 25, 26, and 27 are objected to because of the following informalities: Claim 16 recites “a wavelength of an illuminating light, image characteristics caused by a light source of an illuminating light, or a size of features caused by a pattern of an illuminating light in the images” in lines 13-15. There should not be multiple instances of “an illuminating light” as it creates confusion regarding whether each instance of “illuminating light” is equivalent or distinct. As a result, please amend the above subject matter of claim 16 to recite “wavelength of an illuminating light, image characteristics caused by a light source of [[an]]the illuminating light, or a size of features caused by a pattern of [[an]]the illuminating light in the images”. Claim 20 recites “a wavelength of the illuminating light” in lines 3-4. However, claim 16, upon which claim 20 depends, already recites “a wavelength of an illuminating light” in line 13. As a result, please amend claim 20 to recite “[[a]]the wavelength of the illuminating light”. Claim 22 recites “sizes of features resulting from a pattern of the illuminating light” in line 4. However, claim 16, upon which claim 22 depends, already recites “a pattern of an illuminating light” in lines 14-15. As a result, please amend claim 22 to recite “sizes of features resulting from [[a]]the pattern of the illuminating light”. Claim 25 recites “a pattern of the illuminating light onto the subject” in line 3 and “a first image group that includes images in which the pattern of the illuminating light has been projected onto the subject by the projection unit, and a second image group that includes images in which the pattern of the illuminating light has not been projected onto the subject” in lines 4-7. However, claim 16, upon which claim 25 depends, already recites a pattern of an illuminating light, a first image group, and a second image group (see “a pattern of an illuminating light” in line 14, “a first image group” in line 7, and “a second image group” in line 8). 
As a result, please amend claim 25 to recite “[[a]]the pattern of the illuminating light onto the subject” and “[[a]]the first image group that includes images in which the pattern of the illuminating light has been projected onto the subject by the projection unit, and [[a]]the second image group that includes images in which the pattern of the illuminating light has not been projected onto the subject”. Similarly, claim 26 recites “a first image group that includes images in which the first illuminating light pattern was projected onto the subject, and a second image group that includes images in which the second illuminating light pattern was projected onto the subject” in lines 6-8 and “at least one of a light source, a wavelength, or an illuminating light pattern, onto the subject” in lines 3-5. However, claim 16, upon which claim 26 depends, already recites a first image group, a second image group, a light source, a wavelength, or an illuminating light pattern (see “a first image group” in line 7, “a second image group” in line 8, see “a light source” in line 14, “a wavelength” in line 13, and “a pattern of an illuminating light” in lines 14-15 of claim 16). As a result, please amend claim 25 to recite “[[a]]the first image group that includes images in which the first illuminating light pattern was projected onto the subject, and [[a]]the second image group that includes images in which the second illuminating light pattern was projected onto the subject” and “at least one of [[a]]the light source, [[a]]the wavelength, or [[an]]the illuminating light pattern, onto the subject”. Claim 27 recites “irradiating an illuminating light”. However, claim 16, upon which claim 27 depends, already recites “an illuminating light”. As such, please amend claim 27 to recite “irradiating [[an]]the illuminating light”. As a result, the claims will be analyzed below assuming these changes were made. Appropriate correction is required. Claim Interpretation The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: In claim 1: “a projection unit configured to project a pattern” and “a plurality of image capturing units configured to capture images” In claim 16: “a plurality of image capturing units configured to capture images” In claim 25: “a projection unit configured to project a pattern” After a careful analysis, as disclosed above, and a careful review of the specification, the above limitations in claims 1, 16, and 25 are not interpreted as computer-implemented 112(f), as they have a corresponding physical structure. See MPEP 2181. Below is the corresponding structure which is being read into the above limitations: “a projection unit” (See FIG. 1, #102. In the specification, para. [0033] defines the projection unit 102 as a projector that projects light onto a subject until receiving a stop command from the control unit. Therefore, the structure of the projection unit is a physical entity capable of projecting patterned light). “a plurality of image capturing units” (See FIG. 1, #103, #104. The specification, defines the capturing units as apparatuses capable of capturing images of according to the control unit in para. [0036]-[0037]. Therefore, the structure of the plurality of image capturing units are physical apparatuses capable of capturing images (camera). See also para. [0053]-[0054]). Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 16, 18, 23-25, 28, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Matsumoto (U.S. Publication No. 2022/0292703 A1) in view of Masuda (U.S. Publication No. 2010/0091301 A1). Regarding claim 16, Matsomoto teaches a three-dimensional measuring apparatus (Matsumoto teaches “a three-dimensional (3D) measurement system” in para. [0037] and FIG. 1) comprising: a plurality of image capturing units configured to capture images of a subject from mutually different points of view (Matsumoto teaches “multiple sensor units 10 may be included in one image processor 11, or the output from one sensor unit 10 may be provided to multiple image processors 11. The sensor unit 10 may be attached to a robot or a movable object to have a movable viewpoint” para. [0038]; see also that “the first camera 101 and the second camera 102 are a pair of cameras to form a stereo camera and are spaced from each other by a predetermined distance” in para. [0067]); and at least one processor or circuit configured to function as: a processing unit configured to calculate a distance value to the subject (Matsumoto teaches “ the image processor 11 (map generator) reconstructs 3D information from the patterned image 30 received in step S201 and generates a map 31 that is data including pixels each associated with depth-related information. The depth-related information may be a depth or information convertible into a depth (e.g., parallax information)” in para. [0042]) by performing association using a first image (Matsumoto teaches “In step 200, the sensor unit 10 (first projector) projects patterned light onto the target object 12. 
The patterned light may have a fixed pattern, a random pattern, or a striped pattern used in phase shifting. An appropriate pattern may be selected in accordance with the algorithm for reconstructing 3D information. In step S201, the sensor unit 10 (camera) images the target object 12 while patterned light is being projected onto the target object 12, to produce an image 30 (hereafter also referred to as a patterned image, or an image on which a pattern is projected)” in para. [0040]-[0041]), and a second image (Matsumoto teaches “ the sensor unit 10 (camera) images the target object 12 to produce an image 32 (hereafter also referred to as an unpatterned image, or an image on which no pattern is projected)” in para. [0044], wherein the unpatterned image is interpreted as a condition different from the first condition); wherein the processing unit is further configured to perform the association on the first image a wavelength of an illuminating light, image characteristics caused by a light source of an illuminating light (Matsumoto teaches the association of the first and second image based on a difference in edge detection (image characteristics) caused by a difference in light source of an illuminating light (patterned vs unpatterned light) in para. [0045]-[0046]), or a size of features caused by a pattern of an illuminating light in the images for the first image or performs the association after having performed a predetermined processing on the first image group and the second image group based on the information, and wherein, based on the information, the processing unit is further configured to calculate evaluation values to be used in the association from images from the first image (Matsumoto ‘703 teaches “the image obtainer 110 transmits the first and second images or a pair of stereo images captured with projected patterned light to the preprocessor 800 and transmits an image captured with projected unpatterned light to the edge detector 112”, wherein “the corresponding point searcher 801 searches for corresponding points between the first and second images and generates a parallax map based on the search results” in para. [0072]; here, the corresponding point searcher uses luminance values/hash feature values to determine corresponding points in para. [0077]. These hash feature values are interpreted as equivalent to the claimed evaluation values, wherein the corresponding point correction determination based on the edge of the unpatterned image is interpreted as equivalent to the association. See para. [0068] wherein the process of obtaining the first/second image is equivalent to the first embodiment cited to in the above sections), and calculate the distance value using the parallax value that was obtained by the association (Matsumoto teaches “ the image processor 11 (corrector) corrects the map 31 obtained in step S202 based on information about an edge 33 obtained in step S205” in para. [0046]; see also “the depth map generator 803 converts parallax information in the parallax map into distance information to generate a depth map” in para. [0072])). ***Note: only one path within the following section of the claim language needs to be examined here. 
“wherein the processing unit (path 1) performs the association on the first image groups and the second image groups based on information of image characteristics based on (path 1.1) a wavelength of an illuminating light, (path 1.2) image characteristics caused by a light source of an illuminating light, or (path 1.3) a size of features caused by a pattern of an illuminating light in the images for the first image group and the second image group, or (path 2) performs the association after having performed a predetermined processing on the first image group and the second image group based on the information”. Path 1.2 was mapped to in the above claim language. Matsumoto fails to teach a first image group and a second image group. However, Masuda teaches a first image group and a second image group (Masuda teaches “combining unit 130 generates composite image C1 of non-illumination base image G11 and illumination base image G11′, and composite image C2 of non-illumination reference image G12 and illumination reference image G12′ (step ST171)” in para. [0141], “whereby a three-dimensional shape of the subject may be measured accurately” as shown in para. [0146], and wherein “non-illumination base image G11 and non-illumination reference image G12 correspond to first measuring images, and illumination base image G11′ and illumination reference image G12′ correspond to second measuring images” as shown in para. [0113] (the embodiment as described in para. [0139]-[0147] differs only from the embodiment described in [0113] “in the configuration of calculation unit 105” as shown in para. [0139]). Here, the first measuring images correspond to the first image group and the second measuring images correspond to the second image group). Matsumoto and Masuda are both considered to be analogous to the claimed invention because they are in the same field of performing three-dimensional measuring through illuminated patterned image integration. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Matsumoto to incorporate the teachings of Masuda and include “a first image group and a second image group”. The motivation for doing so would have been such that “even when a portion having a high contrast or a local characteristic is included in the angle of view, high quality composite images C1, C2 without a moire and the like may be obtained and corresponding points may be obtained accurately, whereby a three-dimensional shape of the subject may be measured accurately”, as suggested by Masuda in para. [0146]. Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Matsumoto with Masuda to obtain the invention specified in claim 16. Regarding claim 18, Matsumoto and Masuda teach the three-dimensional measuring apparatus according to claim 16, wherein the processing unit is further configured to calculate the evaluation values based on luminance value of the images from the first image group and the second image group (Matsumoto teaches that “converting the luminance value of each pixel into a hash feature quantity allows efficient calculation of the degree of similarity in the local luminance feature in the search for corresponding points” in para. [0077]; here, the hash feature quantities are interpreted as equivalent to the evaluation values). 
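For context on the mechanism the examiner maps to here: the "hash feature values" computed from luminance (claims 16 and 18) and the Hamming-distance comparison of bit sequences (claim 19, addressed via Grobe below) are characteristic of a census-transform-style stereo correspondence search whose resulting parallax is then converted to a distance value. The sketch below is a generic illustration under that reading, not code from Matsumoto, Masuda, or Grobe; the function names and the focal-length/baseline parameters are hypothetical.

```python
import numpy as np

def census_hash(img, y, x, win=3):
    """Census-transform-style bit string: compare each pixel in a local window
    to the center pixel's luminance. A generic stand-in for the luminance-based
    'hash feature values' / 'evaluation values' discussed above (assumption)."""
    r = win // 2
    patch = img[y - r:y + r + 1, x - r:x + r + 1]
    return (patch > img[y, x]).astype(np.uint8).ravel()

def hamming(a, b):
    """Hamming distance between two bit strings (cf. the claim 19 limitation)."""
    return int(np.count_nonzero(a != b))

def match_disparity(left, right, y, x, max_disp=64, win=3):
    """Search along the epipolar line of a rectified stereo pair for the
    disparity whose census hash is closest to the left pixel's hash."""
    ref = census_hash(left, y, x, win)
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp):
        if x - d < win // 2:
            break
        cost = hamming(ref, census_hash(right, y, x - d, win))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def disparity_to_depth(d, focal_px=1000.0, baseline_m=0.10):
    """Convert a parallax (disparity) value into a distance value: Z = f*B/d.
    Focal length and baseline are illustrative, not taken from the references."""
    return focal_px * baseline_m / d if d > 0 else float("inf")
```

Under this reading, the per-pixel hashes play the role of the claimed evaluation values, the minimum-Hamming-distance search plays the role of the association, and the disparity-to-depth conversion corresponds to the parallax-to-distance step the examiner cites from para. [0072] of Matsumoto.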
Regarding claim 23, Matsumoto and Masuda teach the three-dimensional measuring apparatus according to claim 16, wherein the processing unit is further configured to generate a third image to serve as an integrated image based on a first image that is included in the first image group and a second image that is included in the second image group (Matsumoto teaches determining a parallax map based on the first and second image in para. [0072], wherein the parallax map may be broadly interpreted as the third image and Masuda teaches creating composite images (third images) from the first and second groups by “generating a composite image group constituted by a plurality of composite images from the non-illumination area and illumination area extracted from the first and second measurement image groups corresponding to each other” in para. [0139]). Similar motivations as applied to claim 16 can be applied to claim 23. Regarding claim 24, Matsumoto and Masuda teach three-dimensional measuring apparatus according to claim 23, wherein the processing unit is further configured to calculate the distance value by using the third image (Matsumoto teaches determining a depth value from the parallax map in para. [0072] and FIG. 9; Masuda also teaches a composite image (third image) in para. [0142-0143] whereby “a three-dimensional shape of the subject may be measured” in para. [0146]; this measurement, along with the depth map, is interpreted as the distance value (See also para. [0087] and [0083]). Similar motivations as applied to claim 16 can be applied to claim 24. Regarding claim 25, Matsumoto and Masuda teach the three-dimensional measuring apparatus according to claim 16, further including: a projection unit configured to project a pattern of the illuminating light onto the subject (Matsumoto teaches “The sensor unit 10 includes at least a first projector for projecting patterned light, a second projector for projecting unpatterned light” onto a target object in para. [0037]); and wherein the plurality of image capturing units is further configured to acquire a first image group that includes images in which the pattern of the illuminating light has been projected onto the subject by the projection unit (Matsumoto teaches “In step 200, the sensor unit 10 (first projector) projects patterned light onto the target object 12. The patterned light may have a fixed pattern, a random pattern, or a striped pattern used in phase shifting. An appropriate pattern may be selected in accordance with the algorithm for reconstructing 3D information. In step S201, the sensor unit 10 (camera) images the target object 12 while patterned light is being projected onto the target object 12, to produce an image 30 (hereafter also referred to as a patterned image, or an image on which a pattern is projected)” in para. [0040]-[0041]. Masuda additionally teaches the plurality of images within the first image group as shown in claim 16), and a second image group that includes images in which the pattern of the illuminating light has not been projected onto the subject (Matsumoto teaches “ the sensor unit 10 (camera) images the target object 12 to produce an image 32 (hereafter also referred to as an unpatterned image, or an image on which no pattern is projected)” in para. [0044], wherein the unpatterned image is interpreted as a condition different from the first condition. Masuda additionally teaches the plurality of images within the second image group as shown in claim 16). 
Similar motivations as applied to claim 16 can be applied to claim 25. Regarding claim 27, Matsumoto and Masuda teaches three-dimensional measuring apparatus according to claim 16, wherein the image capturing unit is further configured to acquire a first image that is included in the first image group and a second image that is included in the second image group from images that were acquired during one image capturing session by irradiating an illuminating light with a specific wavelength (Matsumoto teaches “the illuminator 104 is a uniform illuminator usable for capturing typical visible light images. The illuminator 104 may be, for example, a white LED illuminator or an illuminator having the same wavelength band as the pattern projector 103” in para. [0057]; the first and second images being captured by the above illuminator/projector in the same capturing session is taught in para. [0067]). Regarding claim 28, Matsumoto teaches a three-dimensional measurement method (Matsumoto, see FIG. 2) comprising: capturing images of a subject from mutually different points of view (Matsumoto teaches “multiple sensor units 10 may be included in one image processor 11, or the output from one sensor unit 10 may be provided to multiple image processors 11. The sensor unit 10 may be attached to a robot or a movable object to have a movable viewpoint” para. [0038]; see also that “the first camera 101 and the second camera 102 are a pair of cameras to form a stereo camera and are spaced from each other by a predetermined distance” in para. [0067]); and processing to calculate a distance value to the subject (Matsumoto teaches “ the image processor 11 (map generator) reconstructs 3D information from the patterned image 30 received in step S201 and generates a map 31 that is data including pixels each associated with depth-related information. The depth-related information may be a depth or information convertible into a depth (e.g., parallax information)” in para. [0042]) by performing association using a first image (Matsumoto teaches “In step 200, the sensor unit 10 (first projector) projects patterned light onto the target object 12. The patterned light may have a fixed pattern, a random pattern, or a striped pattern used in phase shifting. An appropriate pattern may be selected in accordance with the algorithm for reconstructing 3D information. In step S201, the sensor unit 10 (camera) images the target object 12 while patterned light is being projected onto the target object 12, to produce an image 30 (hereafter also referred to as a patterned image, or an image on which a pattern is projected)” in para. [0040]-[0041]), and a second image (Matsumoto teaches “ the sensor unit 10 (camera) images the target object 12 to produce an image 32 (hereafter also referred to as an unpatterned image, or an image on which no pattern is projected)” in para. [0044], wherein the unpatterned image is interpreted as a condition different from the first condition), wherein during the processing process, the association of the first image a wavelength of an illuminating light, image characteristics caused by a light source of an illuminating light (Matsumoto teaches the association of the first and second image based on a difference in edge detection (image characteristics) caused by a difference in light source of an illuminating light (patterned vs unpatterned light) in para. 
[0045]-[0046]), or a size of features caused by the pattern of the illuminating light in the images for each of the first image or the association is performed after having performed processing before the association on the first image group and second image group based on the information, and wherein, based on the information, during the processing process, evaluation values to be used in the association from images from the first image (Matsumoto ‘703 teaches “the image obtainer 110 transmits the first and second images or a pair of stereo images captured with projected patterned light to the preprocessor 800 and transmits an image captured with projected unpatterned light to the edge detector 112”, wherein “the corresponding point searcher 801 searches for corresponding points between the first and second images and generates a parallax map based on the search results” in para. [0072]; here, the corresponding point searcher uses luminance values/hash feature values to determine corresponding points in para. [0077]. These hash feature values are interpreted as equivalent to the claimed evaluation values, wherein the corresponding point correction determination based on the edge of the unpatterned image is interpreted as equivalent to the association. See para. [0068] wherein the process of obtaining the first/second image is equivalent to the first embodiment cited to in the above sections), and the distance value is calculated using a parallax value that was obtained by the association (Matsumoto teaches “ the image processor 11 (corrector) corrects the map 31 obtained in step S202 based on information about an edge 33 obtained in step S205” in para. [0046]; see also “the depth map generator 803 converts parallax information in the parallax map into distance information to generate a depth map” in para. [0072])). Matsumoto fails to teach a first image group and a second image group. However, Masuda teaches a first image group and a second image group (Masuda teaches “combining unit 130 generates composite image C1 of non-illumination base image G11 and illumination base image G11′, and composite image C2 of non-illumination reference image G12 and illumination reference image G12′ (step ST171)” in para. [0141], “whereby a three-dimensional shape of the subject may be measured accurately” as shown in para. [0146], and wherein “non-illumination base image G11 and non-illumination reference image G12 correspond to first measuring images, and illumination base image G11′ and illumination reference image G12′ correspond to second measuring images” as shown in para. [0113] (the embodiment as described in para. [0139]-[0147] differs only from the embodiment described in [0113] “in the configuration of calculation unit 105” as shown in para. [0139]). Here, the first measuring images correspond to the first image group and the second measuring images correspond to the second image group). Matsumoto and Masuda are both considered to be analogous to the claimed invention because they are in the same field of performing three-dimensional measuring through illuminated patterned image integration. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Matsumoto to incorporate the teachings of Masuda and include “a first image group and a second image group”. 
The motivation for doing so would have been such that “even when a portion having a high contrast or a local characteristic is included in the angle of view, high quality composite images C1, C2 without a moire and the like may be obtained and corresponding points may be obtained accurately, whereby a three-dimensional shape of the subject may be measured accurately”, as suggested by Masuda in para. [0146]. Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Matsumoto with Masuda to obtain the invention specified in claim 28. Regarding claim 29, Matsumoto teaches a non-transitory computer-readable storage medium storing a computer program including instructions (Matsumoto, see para. [0017] “non-transitory storage medium storing the program”) for executing a method comprising: processing to perform association using a first image (Matsumoto teaches “multiple sensor units 10 may be included in one image processor 11, or the output from one sensor unit 10 may be provided to multiple image processors 11. The sensor unit 10 may be attached to a robot or a movable object to have a movable viewpoint” para. [0038]; see also that “the first camera 101 and the second camera 102 are a pair of cameras to form a stereo camera and are spaced from each other by a predetermined distance” in para. [0067]; “In step 200, the sensor unit 10 (first projector) projects patterned light onto the target object 12. The patterned light may have a fixed pattern, a random pattern, or a striped pattern used in phase shifting. An appropriate pattern may be selected in accordance with the algorithm for reconstructing 3D information. In step S201, the sensor unit 10 (camera) images the target object 12 while patterned light is being projected onto the target object 12, to produce an image 30 (hereafter also referred to as a patterned image, or an image on which a pattern is projected)” in para. [0040]-[0041]; the patterned image is interpreted as equivalent to the first image), and a second image (Matsumoto teaches “ the sensor unit 10 (camera) images the target object 12 to produce an image 32 (hereafter also referred to as an unpatterned image, or an image on which no pattern is projected)” in para. [0044], wherein the unpatterned image is interpreted as a condition different from the first condition, and the unpatterned image is interpreted as equivalent to the second image), and thereby calculate a distance value to the subject (Matsumoto teaches “ the image processor 11 (map generator) reconstructs 3D information from the patterned image 30 received in step S201 and generates a map 31 that is data including pixels each associated with depth-related information. The depth-related information may be a depth or information convertible into a depth (e.g., parallax information)” in para. [0042]); wherein during the processing process, the association of the first image a wavelength of an illuminating light, image characteristics caused by a light source of an illuminating light (Matsumoto teaches the association of the first and second image based on a difference in edge detection (image characteristics) caused by a difference in light source of an illuminating light (patterned vs unpatterned light) in para. 
[0045]-[0046]), or a size of features caused by the pattern of the illuminating light in each of the images from the first image or the association is performed after having performed a predetermined processing before the association on the first image group and the second image group based on the information; and wherein, based on the information, during the processing process, evaluation values to be used in the association from images from the first image (Matsumoto teaches “the image obtainer 110 transmits the first and second images or a pair of stereo images captured with projected patterned light to the preprocessor 800 and transmits an image captured with projected unpatterned light to the edge detector 112”, wherein “the corresponding point searcher 801 searches for corresponding points between the first and second images and generates a parallax map based on the search results” in para. [0072]; here, the corresponding point searcher uses luminance values/hash feature values to determine corresponding points in para. [0077]. These hash feature values are interpreted as equivalent to the claimed evaluation values, wherein the corresponding point correction determination based on the edge of the unpatterned image is interpreted as equivalent to the association. See para. [0068] wherein the process of obtaining the first/second image is equivalent to the first embodiment cited to in the above sections), and the distance value is calculated using a parallax value that was obtained by the association (Matsumoto teaches “ the image processor 11 (corrector) corrects the map 31 obtained in step S202 based on information about an edge 33 obtained in step S205” in para. [0046]; see also “the depth map generator 803 converts parallax information in the parallax map into distance information to generate a depth map” in para. [0072])). ***Note: only one path within the following section of the claim language needs to be examined here. “wherein the processing unit (path 1) performs the association on the first image groups and the second image groups based on information of image characteristics based on (path 1.1) a wavelength of an illuminating light, (path 1.2) image characteristics caused by a light source of an illuminating light, or (path 1.3) a size of features caused by a pattern of an illuminating light in the images for the first image group and the second image group, or (path 2) performs the association after having performed a predetermined processing on the first image group and the second image group based on the information”. Path 1.2 was mapped to in the above claim language. Matsumoto fails to teach a first image group and a second image group. However, Masuda teaches a first image group and a second image group (Masuda teaches “combining unit 130 generates composite image C1 of non-illumination base image G11 and illumination base image G11′, and composite image C2 of non-illumination reference image G12 and illumination reference image G12′ (step ST171)” in para. [0141], “whereby a three-dimensional shape of the subject may be measured accurately” as shown in para. [0146], and wherein “non-illumination base image G11 and non-illumination reference image G12 correspond to first measuring images, and illumination base image G11′ and illumination reference image G12′ correspond to second measuring images” as shown in para. [0113] (the embodiment as described in para. 
[0139]-[0147] differs only from the embodiment described in [0113] “in the configuration of calculation unit 105” as shown in para. [0139]). Here, the first measuring images correspond to the first image group and the second measuring images correspond to the second image group). Matsumoto and Masuda are both considered to be analogous to the claimed invention because they are in the same field of performing three-dimensional measuring through illuminated patterned image integration. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Matsumoto to incorporate the teachings of Masuda and include “a first image group and a second image group”. The motivation for doing so would have been such that “even when a portion having a high contrast or a local characteristic is included in the angle of view, high quality composite images C1, C2 without a moire and the like may be obtained and corresponding points may be obtained accurately, whereby a three-dimensional shape of the subject may be measured accurately”, as suggested by Masuda in para. [0146]. Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Matsumoto with Masuda to obtain the invention specified in claim 29. Claims 19 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Matsumoto (U.S. Publication No. 2022/0292703 A1) in view of Masuda (U.S. Publication No. 2010/0091301 A1) and Grobe et al. (U.S. Publication No. 2020/0082605 A1), hereinafter Grobe. Regarding claim 19, Matsumoto and Masuda teach a three-dimensional measuring apparatus according to claim 16. Matsumoto and Masuda fail to teach wherein the processing unit is further figured to calculate the evaluation values by using hamming distances for bit sequences obtained by performing predetermined conversion processing on each of the images from the first image group and the second image group. However, Grobe teaches wherein the processing unit is further configured to calculate the evaluation values by using hamming distances for bit sequences obtained by performing predetermined conversion processing on each of the images from the first image group and the second image group (Grobe teaches “comparing the first binary pixel fingerprint and the second binary pixel fingerprint may include determining whether corresponding binary values in the first and second binary pixel fingerprints are within a threshold hamming distance” in para. [0023]; see also para. [0062]). Matsumoto, Masuda, and Grobe, are all considered to be analogous to the claimed invention because they are in the same field of three-dimensional measuring. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Matsumoto and Masuda to incorporate the teachings of Grobe and include “wherein the processing unit is further configured to calculate the evaluation values by using hamming distances for bit sequences obtained by performing predetermined conversion processing on each of the images from the first image group and the second image group”. The motivation for doing so would have been to “efficiently processing data of initial correspondence assignments”, as suggested by Grobe in para. [0051], respectively. 
Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Matsumoto and Masuda with Grobe to obtain the invention specified in claim 19. Regarding claim 21, Matsumoto and Masuda teach the three-dimensional measuring apparatus according to claim 16. Matsumoto and Masuda fails to teach wherein the processing unit is further configured to perform predetermined noise elimination processing on at least one of the images from the first image group and the second image group. However, Grobe teaches wherein the processing unit is further configured to perform predetermined noise elimination processing on at least one of the images from the first image group and the second image group (Grobe teaches “the system can apply a filter (e.g., a weak smoothing filter) to the image sequences prior to the initial correspondence assignment, which may provide 100% of the correspondences (e.g., without the need for a hole filling process). The filter may reduce noise and, to some extent, aggregate a small spatial neighborhood per temporal image” in para. [0071]; the first and second set of images are taught in para. [0009]). Masuda, Matsumoto, and Grobe are all considered to be analogous to the claimed invention because they are in the same field of three-dimensional measuring. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Matsumoto (as modified by Masuda) to incorporate the teachings of Grobe and include “wherein the processing unit is further configured to perform predetermined noise elimination processing on at least one of the images from the first image group and the second image group”. The motivation for doing so would have been that “the filter may reduce noise and, to some extent, aggregate a small spatial neighborhood per temporal image”, as suggested by Grobe in para. [0071]. Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Matsumoto and Masuda with Grobe to obtain the invention specified in claim 21. Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Matsumoto (U.S. Publication No. 2022/0292703 A1) in view of Masuda (U.S. Publication No. 2010/0091301 A1) and Yamazaki (JP 2018116032 A, see English translation for citations). Regarding claim 20, Matsumoto and Masuda teach the three-dimensional measuring apparatus according to claim 16. Matsumoto and Masuda fail to teach wherein the processing unit is further configured to perform image distortion correction on at least one of the first image group and the second image group based on a wavelength of the illuminating light. However, Yamazaki teaches wherein the processing unit is further configured to perform image distortion correction on at least one of the first image group and the second image group based on a wavelength of the illuminating light (Yamazaki teaches “it is necessary to calibrate the positional relationship between the pixel coordinates of the output pattern image having the wavelength λ1 and the pixel coordinates of the pattern image having the wavelength λ2 and the surface of the object to be measured” in para. [0020]; here, the calibration is interpreted as equivalent to performing image distortion correction). Matsumoto, Masuda, and Yamazaki are all considered to be analogous to the claimed invention because they are in the same field of three-dimensional measuring. 
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Matsumoto (as modified by Masuda) to incorporate the teachings of Yamazaki and include “wherein the processing unit is further configured to perform image distortion correction on at least one of the first image group and the second image group based on a wavelength of the illuminating light”. The motivation for doing so would have been to “reduce the influence of deterioration in accuracy due to the reflectance distribution of an object to be measured when performing measurement”, as suggested by Yamazaki in para. [0008], respectively. Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Matsumoto and Masuda with Yamazaki to obtain the invention specified in claim 20. Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Matsumoto (U.S. Publication No. 2022/0292703 A1) in view of Masuda (U.S. Publication No. 2010/0091301 A1) and Matsumoto (U.S. Publication No. 2022/0398760 A1), hereinafter Matsumoto ‘760. Regarding claim 26, Matsumoto and Masuda teaches the three-dimensional measuring apparatus according to claim 16. Matsumoto and Masuda fail to teach a projection unit that is able to project at least a first illuminating light pattern and a second illuminating light pattern that differs from the first illuminating light pattern in at least one of a light source, a wavelength, or an illuminating light pattern, onto the subject, wherein the plurality of image capturing units is further configured to acquire a first image group that includes images in which the first illuminating light pattern was projected onto the subject, and a second image group that includes images in which the second illuminating light pattern was projected onto the subject. While Matsumoto teaches the first and second images as shown in claim 16 and a projection unit that is able to project at least a first illuminating light pattern and a second illuminating light pattern that differs from the first illuminating light pattern in at least one of a light source, a wavelength, or an illuminating light pattern, onto the subject (Matsumoto ‘760 teaches “the exposure conditions of the camera are changed, but instead, changing the illumination conditions allows the same processing to be performed. That is, in the first measurement, an image is taken with the pattern projector 103 set to the first brightness, and in the second measurement, an image is taken with the pattern projector 103 changed to the second brightness” in para. [0092]), wherein the plurality of image capturing units is further configured to acquire a first image group that includes images in which the first illuminating light pattern was projected onto the subject, and a second image group that includes images in which the second illuminating light pattern was projected onto the subject (see above citation). Matsumoto, Masuda, and Matsumoto ‘760 are both considered to be analogous to the claimed invention because they are in the same field of three-dimensional measuring. 
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Matsumoto (as modified by Masuda) to incorporate the teachings of Matsumoto ‘760 and include “a projection unit that is able to project at least a first illuminating light pattern and a second illuminating light pattern that differs from the first illuminating light pattern in at least one of a light source, a wavelength, or an illuminating light pattern, onto the subject, wherein the plurality of image capturing units is further configured to acquire a first image group that includes images in which the first illuminating light pattern was projected onto the subject, and a second image group that includes images in which the second illuminating light pattern was projected onto the subject”. The motivation for doing so would have been to “enable[] the three-dimensional measurement robust against differences in reflection characteristics or illumination conditions”, as suggested by Matsumoto ‘760 in para. [0092]. Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Matsumoto and Masuda with Matsumoto ‘760 to obtain the invention specified in claim 26. Claims 30 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Matsumoto (U.S. Publication No. 2022/0292703 A1) in view of Masuda (U.S. Publication No. 2010/0091301 A1) and Matsuura (U.S. Publication No. 2017/0323456 A1). Regarding claim 30, Matsumoto and Masuda teach a system comprising: the three-dimensional measuring apparatus according to claim 16. While Matsumoto teaches a robot which may move an object in para. [0037]-[0038], Matsumoto and Masuda fail to teach a robot that holds and moves the subject based on the distance value to the subject that has been calculated by the three-dimensional measuring apparatus. However, Matsuura teaches a robot that holds and moves the subject based on the distance value to the subject that has been calculated by the three-dimensional measuring apparatus (Matsuura teaches “the robot 4 executes processing of the object 2 based on the information about the position and the orientation… for example, the object 2 is held and moved (i.e., translated or rotated) by a hand (a holding unit or an end effector) attached to a leading end of the robot 4” in para. [0050]; see also claim 16). Matsumoto, Masuda, and Matsuura are all considered to be analogous to the claimed invention because they are in the same field of three-dimensional measuring. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Matsumoto (as modified by Masuda) to incorporate the teachings of Matsuura and include “holding the subject to move based on the distance value to the subject that has been calculated by the processing; and manufacturing a predetermined article by performing a predetermined processing on the subject”. The motivation for doing so would have been “so that the robot 4 can execute processing of gripping the component 2 with high precision”, as suggested by Matsuura in para. [0018]. Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Matsumoto and Masuda with Matsuura to obtain the invention specified in claim 30. 
Regarding claim 31, Masuda teaches a method for manufacturing an article, comprising: capturing images of a subject from mutually different points of view (Matsumoto teaches “multiple sensor units 10 may be included in one image processor 11, or the output from one sensor unit 10 may be provided to multiple image processors 11. The sensor unit 10 may be attached to a robot or a movable object to have a movable viewpoint” para. [0038]; see also that “the first camera 101 and the second camera 102 are a pair of cameras to form a stereo camera and are spaced from each other by a predetermined distance” in para. [0067]); and processing to calculate a distance value to the subject (Matsumoto teaches “ the image processor 11 (map generator) reconstructs 3D information from the patterned image 30 received in step S201 and generates a map 31 that is data including pixels each associated with depth-related information. The depth-related information may be a depth or information convertible into a depth (e.g., parallax information)” in para. [0042]) by performing association using a first image (Matsumoto teaches “In step 200, the sensor unit 10 (first projector) projects patterned light onto the target object 12. The patterned light may have a fixed pattern, a random pattern, or a striped pattern used in phase shifting. An appropriate pattern may be selected in accordance with the algorithm for reconstructing 3D information. In step S201, the sensor unit 10 (camera) images the target object 12 while patterned light is being projected onto the target object 12, to produce an image 30 (hereafter also referred to as a patterned image, or an image on which a pattern is projected)” in para. [0040]-[0041]), and a second image (Matsumoto teaches “ the sensor unit 10 (camera) images the target object 12 to produce an image 32 (hereafter also referred to as an unpatterned image, or an image on which no pattern is projected)” in para. [0044], wherein the unpatterned image is interpreted as a condition different from the first condition), wherein during the processing process, the association of the first image a wavelength of an illuminating light, image characteristics caused by a light source of an illuminating light (Matsumoto teaches the association of the first and second image based on a difference in edge detection (image characteristics) caused by a difference in light source of an illuminating light (patterned vs unpatterned light) in para. [0045]-[0046]), or a size of features caused by the pattern of the illuminating light in the images for each of the first image or the association is performed after having performed processing before the association on the first image group and second image group based on the information, and wherein, based on the information, during the processing process, evaluation values to be used in the association from images from the first image (Matsumoto ‘703 teaches “the image obtainer 110 transmits the first and second images or a pair of stereo images captured with projected patterned light to the preprocessor 800 and transmits an image captured with projected unpatterned light to the edge detector 112”, wherein “the corresponding point searcher 801 searches for corresponding points between the first and second images and generates a parallax map based on the search results” in para. [0072]; here, the corresponding point searcher uses luminance values/hash feature values to determine corresponding points in para. [0077]. 
These hash feature values are interpreted as equivalent to the claimed evaluation values, wherein the corresponding point correction determination based on the edge of the unpatterned image is interpreted as equivalent to the association. See para. [0068] wherein the process of obtaining the first/second image is equivalent to the first embodiment cited to in the above sections), and the distance value is calculated using a parallax value that was obtained by the association (Matsumoto teaches “ the image processor 11 (corrector) corrects the map 31 obtained in step S202 based on information about an edge 33 obtained in step S205” in para. [0046]; see also “the depth map generator 803 converts parallax information in the parallax map into distance information to generate a depth map” in para. [0072])). Matsumoto fails to teach a first image group and a second image group. However, Masuda teaches a first image group and a second image group (Masuda teaches “combining unit 130 generates composite image C1 of non-illumination base image G11 and illumination base image G11′, and composite image C2 of non-illumination reference image G12 and illumination reference image G12′ (step ST171)” in para. [0141], “whereby a three-dimensional shape of the subject may be measured accurately” as shown in para. [0146], and wherein “non-illumination base image G11 and non-illumination reference image G12 correspond to first measuring images, and illumination base image G11′ and illumination reference image G12′ correspond to second measuring images” as shown in para. [0113] (the embodiment as described in para. [0139]-[0147] differs only from the embodiment described in [0113] “in the configuration of calculation unit 105” as shown in para. [0139]). Here, the first measuring images correspond to the first image group and the second measuring images correspond to the second image group). Matsumoto and Masuda are both considered to be analogous to the claimed invention because they are in the same field of performing three-dimensional measuring through illuminated patterned image integration. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Matsumoto to incorporate the teachings of Masuda and include “a first image group and a second image group”. The motivation for doing so would have been such that “even when a portion having a high contrast or a local characteristic is included in the angle of view, high quality composite images C1, C2 without a moire and the like may be obtained and corresponding points may be obtained accurately, whereby a three-dimensional shape of the subject may be measured accurately”, as suggested by Masuda in para. [0146]. Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Matsumoto with Masuda to obtain the invention specified in the above claim limitations. Matsumoto and Masuda fail to teach holding the subject to move based on the distance value to the subject that has been calculated by the processing; and manufacturing a predetermined article by performing a predetermined processing on the subject. 
Matsumoto and Masuda fail to teach holding the subject to move based on the distance value to the subject that has been calculated by the processing; and manufacturing a predetermined article by performing a predetermined processing on the subject. However, Matsuura teaches holding the subject to move based on the distance value to the subject that has been calculated by the processing (Matsuura teaches “the robot 4 executes processing of the object 2 based on the information about the position and the orientation… for example, the object 2 is held and moved (i.e., translated or rotated) by a hand (a holding unit or an end effector) attached to a leading end of the robot 4” in para. [0050]; see also claim 16); and manufacturing a predetermined article by performing a predetermined processing on the subject (Matsuura teaches “the robot 4, or the system can be used for a manufacturing method of articles”, wherein processing is done on an object in para. [0050]).

Matsumoto, Masuda, and Matsuura are all considered to be analogous to the claimed invention because they are in the same field of three-dimensional measuring. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Matsumoto (as modified by Masuda) to incorporate the teachings of Matsuura and include “holding the subject to move based on the distance value to the subject that has been calculated by the processing; and manufacturing a predetermined article by performing a predetermined processing on the subject”. The motivation for doing so would have been “so that the robot 4 can execute processing of gripping the component 2 with high precision”, as suggested by Matsuura in para. [0018]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Matsumoto and Masuda with Matsuura to obtain the invention specified in claim 31.
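To make the manufacturing limitation concrete, the fragment below sketches how a calculated distance value could drive the holding-and-moving step. The robot interface (move_to, close_gripper, run_process) and the nearest-point heuristic are hypothetical illustrations only, not Matsuura's system, which works from a full position and orientation estimate rather than a single depth value.

import numpy as np

def pick_and_process(depth_map, robot, max_reach_m=1.0):
    # Hypothetical flow from the calculated distance values to holding the
    # subject, moving it, and performing a predetermined processing on it.
    valid = (depth_map > 0) & (depth_map < max_reach_m)
    if not valid.any():
        return False
    row, col = np.unravel_index(np.argmin(np.where(valid, depth_map, np.inf)),
                                depth_map.shape)
    robot.move_to(row, col, float(depth_map[row, col]))  # approach using the measured distance
    robot.close_gripper()                                # hold the subject
    robot.move_to(row, col, max_reach_m)                 # move the held subject clear
    robot.run_process("predetermined processing")        # e.g., an assembly or machining step
    return True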
Allowable Subject Matter

Claims 1 and 5-15 are allowed. Claim 22 would be allowable if rewritten to overcome the claim objection and if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter. The best prior art of record is Masuda, Kawanishi (JP WO2020/165976 A1, see English translation), Matsumoto, Matsumoto ‘760, Matsuura, and Grobe. The prior art, applied alone or in combination, fails to anticipate or render obvious claims 1, 5-15, and 22.

Claim 1

Regarding claim 1, the primary reason for allowance is that the prior art fails to teach or reasonably suggest integrated images or evaluation values to be used during the association by performing weighting of feature extraction images, in which features have been respectively extracted from a first image that is included in the first image group and a second image that is included in the second image group, for the same region; wherein coefficients to be used when performing the weighting are determined based on luminance values in the feature extraction images, in combination with the other elements of the claim (emphasis added).

The closest prior art, Masuda, teaches a three-dimensional measuring apparatus comprising: a projection unit configured to project a pattern onto a subject; a plurality of image capturing units configured to capture images of the subject from mutually different points of view; and a processor or circuit configured to function as: a processing unit configured to calculate a distance value to the subject by performing association using image groups that were captured of the subject by the plurality of image capturing units; wherein the processing unit is further configured to integrate information based on a first image group that was captured by projecting the pattern onto the subject, and information based on a second image group that was captured without projecting the pattern onto the subject, into integrated images or evaluation values, but fails to teach integrated images or evaluation values to be used during the association by performing weighting of feature extraction images, in which features have been respectively extracted from a first image that is included in the first image group and a second image that is included in the second image group, for the same region; wherein coefficients to be used when performing the weighting are determined based on luminance values in the feature extraction images, in combination with the other elements of the claim (emphasis added).

Additionally, Kawanishi further teaches generating a third image to serve as the integrated image by performing weighting according to the features in the first image and the features in the second image for the same region, and wherein coefficients to be used when performing the weighting are determined based on luminance values, but fails to teach integrated images or evaluation values to be used during the association by performing weighting of feature extraction images, in which features have been respectively extracted from a first image that is included in the first image group and a second image that is included in the second image group, for the same region; wherein coefficients to be used when performing the weighting are determined based on luminance values in the feature extraction images, in combination with the other elements of the claim (emphasis added).

Matsumoto further teaches luminance values and the first and second image, but fails to teach integrated images or evaluation values to be used during the association by performing weighting of feature extraction images, in which features have been respectively extracted from a first image that is included in the first image group and a second image that is included in the second image group, for the same region; wherein coefficients to be used when performing the weighting are determined based on luminance values in the feature extraction images, in combination with the other elements of the claim (emphasis added). Similar analysis can be applied to claims 12 and 13. Claims 5-11 are similarly allowable due to their dependence upon claim 1.
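The limitation found allowable describes, in effect, a per-region weighted fusion: features are extracted separately from an image of the first (patterned) group and an image of the second (unpatterned) group, and the blending coefficients come from luminance values in those feature extraction images. The sketch below shows one generic way such a fusion could look; the gradient-magnitude feature and the normalization are assumptions for illustration and do not represent the claimed or any cited implementation.

import numpy as np

def feature_extraction_image(image):
    # Hypothetical feature extractor: local gradient magnitude as a stand-in
    # for "features extracted from the image".
    gy, gx = np.gradient(image.astype(np.float32))
    return np.hypot(gx, gy)

def integrate_with_luminance_weights(patterned, unpatterned, eps=1e-6):
    # Blend a patterned and an unpatterned image of the same scene. The
    # weighting coefficient for each pixel is derived from the luminance of
    # the two feature extraction images, so strongly textured regions lean on
    # the patterned image and smooth regions lean on the unpatterned one.
    f_pat = feature_extraction_image(patterned)
    f_unpat = feature_extraction_image(unpatterned)
    weight = f_pat / (f_pat + f_unpat + eps)
    integrated = weight * patterned.astype(np.float32) + (1.0 - weight) * unpatterned.astype(np.float32)
    return integrated, weight

An integrated image (or per-pixel evaluation values derived the same way) would then feed the corresponding-point search performed during the association.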
Claim 15

Regarding claim 15, the primary reason for allowance is that the prior art fails to teach or reasonably suggest integrated images or evaluation values to be used during the association by performing weighting of feature extraction images, in which features have been respectively extracted from a first image that is included in the first image group and a second image that is included in the second image group, for the same region; wherein coefficients to be used when performing the weighting are determined based on luminance values in the feature extraction images, in combination with the other elements of the claim (emphasis added).

The closest prior art, Masuda, teaches a method for manufacturing an article comprising: projecting a pattern onto a subject; capturing an image of the subject from different viewpoints; processing to calculate a distance value to the subject by performing association using image groups that have been captured of the subject; wherein during the processing, information based on a first image group that was captured by projecting the pattern onto the subject, and information based on a second image group that was captured without projecting the pattern onto the subject, are integrated into integrated images or evaluation values to be used during the association such that features of the images from both the first image group and the second image group remain in a same region of the images, and the distance value is calculated using the integrated images that have been integrated or the evaluation values, but fails to teach integrated images or evaluation values to be used during the association by performing weighting of feature extraction images, in which features have been respectively extracted from a first image that is included in the first image group and a second image that is included in the second image group, for the same region; wherein coefficients to be used when performing the weighting are determined based on luminance values in the feature extraction images, in combination with the other elements of the claim (emphasis added).

Additionally, Kawanishi further teaches wherein, during the processing, weighting is performed according to the features in the first image and the features in the second image for the same region, and wherein coefficients to be used when performing the weighting are determined based on luminance values, but fails to teach integrated images or evaluation values to be used during the association by performing weighting of feature extraction images, in which features have been respectively extracted from a first image that is included in the first image group and a second image that is included in the second image group, for the same region; wherein coefficients to be used when performing the weighting are determined based on luminance values in the feature extraction images, in combination with the other elements of the claim (emphasis added).

Matsuura further teaches holding the subject to move based on the distance value to the subject that has been calculated by the processing; and manufacturing a predetermined article by performing a predetermined processing on the subject, but fails to teach integrated images or evaluation values to be used during the association by performing weighting of feature extraction images, in which features have been respectively extracted from a first image that is included in the first image group and a second image that is included in the second image group, for the same region; wherein coefficients to be used when performing the weighting are determined based on luminance values in the feature extraction images, in combination with the other elements of the claim (emphasis added).

Claim 22

Regarding claim 22, the primary reason for allowance is that the prior art fails to teach or reasonably suggest wherein the processing unit is further configured to set regions to be used in the association in the images from the first image group and the second image group based on sizes of features resulting from a pattern of the illuminating light, in combination with the other elements of the claim.

The closest prior art, Matsumoto, teaches determining luminance features of a target object by the pattern of illuminating light and calculating the evaluation values to be used in the association, but fails to teach wherein the processing unit is further configured to set regions to be used in the association in the images from the first image group and the second image group based on sizes of features resulting from a pattern of the illuminating light, in combination with the other elements of the claim.

Additionally, Masuda teaches setting ranges (regions of the first and second image groups) to be used in an association (composite image generation) based on evaluating local characteristics based on the patterned light, but fails to teach wherein the processing unit is further configured to set regions to be used in the association in the images from the first image group and the second image group based on sizes of features resulting from a pattern of the illuminating light, in combination with the other elements of the claim.
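The claim 22 feature amounts to sizing the association regions from the scale of the projected pattern: if the pattern produces features of roughly known size in the captured images, each matching window can be made large enough to span several of them. The sketch below illustrates that idea with a crude run-length estimate of feature size; the heuristic, the factor of three, and the function names are illustrative assumptions only.

import numpy as np

def estimate_pattern_feature_size(patterned, unpatterned):
    # Rough estimate of the pattern feature size in pixels: isolate the
    # projected pattern by differencing, threshold it, and measure the mean
    # run length of bright pixels along image rows.
    pattern_only = patterned.astype(np.float32) - unpatterned.astype(np.float32)
    bright = pattern_only > pattern_only.mean()
    run_lengths = []
    for row in bright:
        padded = np.concatenate(([0], row.astype(np.int8), [0]))
        starts = np.flatnonzero(np.diff(padded) == 1)
        ends = np.flatnonzero(np.diff(padded) == -1)
        run_lengths.extend(ends - starts)
    return float(np.mean(run_lengths)) if run_lengths else 1.0

def association_region_size(feature_size_px, features_per_region=3):
    # Set the region used in the association so that it spans a few pattern
    # features; keep the window odd-sized for a well-defined center pixel.
    side = int(np.ceil(feature_size_px * features_per_region))
    return side if side % 2 == 1 else side + 1

A region sized this way would then be applied to images from both the first image group and the second image group when performing the association.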
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLA G ALLEN, whose telephone number is (703) 756-5315. The examiner can normally be reached M-F 7:30am - 4:30pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Kyla Guan-Ping Tiao Allen/
Examiner, Art Unit 2661

/JOHN VILLECCO/
Supervisory Patent Examiner, Art Unit 2661

Prosecution Timeline

Aug 25, 2023
Application Filed
Sep 23, 2025
Non-Final Rejection — §103
Jan 06, 2026
Examiner Interview Summary
Jan 06, 2026
Applicant Interview (Telephonic)
Jan 27, 2026
Response Filed
Mar 16, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597119
OPERATING METHOD OF ELECTRONIC DEVICE INCLUDING PROCESSOR EXECUTING SEMICONDUCTOR LAYOUT SIMULATION MODULE BASED ON MACHINE LEARNING
2y 5m to grant · Granted Apr 07, 2026
Patent 12588594
SYSTEM AND METHOD FOR IDENTIFYING LENGTHS OF PARTICLES
2y 5m to grant · Granted Mar 31, 2026
Patent 12591963
SYSTEM AND METHOD FOR ENHANCING DEFECT DETECTION IN OPTICAL CHARACTERIZATION SYSTEMS USING A DIGITAL FILTER
2y 5m to grant · Granted Mar 31, 2026
Patent 12548152
INTRACRANIAL ARTERY STENOSIS DETECTION METHOD AND SYSTEM
2y 5m to grant · Granted Feb 10, 2026
Patent 12541833
ASSESSING IMAGE/VIDEO QUALITY USING AN ONLINE MODEL TO APPROXIMATE SUBJECTIVE QUALITY VALUES
2y 5m to grant · Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
89%
Grant Probability
99%
With Interview (+17.1%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 53 resolved cases by this examiner. Grant probability derived from career allow rate.
