Prosecution Insights
Last updated: April 19, 2026
Application No. 17/822,935

INFORMATION PROCESSING APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM

Status: Final Rejection (§103)
Filed: Aug 29, 2022
Examiner: GARCIA, PAULO ANDRES
Art Unit: 2669
Tech Center: 2600 (Communications)
Assignee: Canon Kabushiki Kaisha
OA Round: 4 (Final)

Grant Probability: 83% (Favorable)
OA Rounds: 5-6
Time to Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (34 granted / 41 resolved; +20.9% vs TC avg, above average)
Interview Lift: +17.2% for resolved cases with an interview (a strong lift)
Avg Prosecution (typical timeline): 3y 2m
Total Applications: 54 across all art units (13 currently pending)
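
As a quick sanity check on these figures, here is a short, purely illustrative Python sketch of how the headline rate follows from the career counts above. The Tech Center average backed out from the reported delta is an inference (assuming the delta is in percentage points), not a figure stated in this report.

```python
# Illustrative arithmetic for the examiner stats above; not dashboard code.
granted, resolved = 34, 41

allow_rate = granted / resolved              # 0.829 -> reported as 83%
delta_vs_tc = 0.209                          # "+20.9% vs TC avg" (assumed to be percentage points)
implied_tc_avg = allow_rate - delta_vs_tc    # ~0.62 under that assumption

interview_lift = 0.172                       # "+17.2% interview lift" as reported

print(f"Career allow rate:  {allow_rate:.1%}")      # 82.9%
print(f"Implied TC average: {implied_tc_avg:.1%}")  # ~62.0%
```

How the "99% with interview" figure is combined from the allow rate and the interview lift is not stated here, so it is left uncomputed.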

Statute-Specific Performance

§101: 16.7% (-23.3% vs TC avg)
§103: 54.3% (+14.3% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Deltas are measured against a Tech Center average estimate; based on career data from 41 resolved cases.

Office Action (§103 Final Rejection)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicants

2. The amendment filed 09/25/2025 in response to the Non-Final Office Action mailed 08/25/2025 has been entered.

3. Claims 1-20 are pending.

4. Limitations appearing inside {} are intended to indicate the limitations not taught by said prior art(s)/combinations.

Response to Arguments

5. Applicant's arguments, see pg. 1-4, filed 09/25/2025, with respect to the 103 rejections of claims 1-2, 4-8, 10-14, and 16-19 have been fully considered and are not persuasive. The examiner agrees with the applicant that Yasuda and Tanaka do not specifically disclose the amended claim language. However, the examiner disagrees with the applicant that the amended claim language is not disclosed by Kusakabe. Specifically, the examiner notes that Kusakabe teaches wherein a start time of the human body region registered in the exclusion information is updated ([par. 0046, ln. 1-19] "Next, in step S601, the false detection determination unit 302a obtains the timing when the image frame received in step S402 was captured… the capturing time of each frame of the moving image is stored in a header portion or the like of the image data… because the capturing time is obtained when an image frame is received, step S601 can be treated as being included in step S402. In step S602, the false detection determination unit 302a determines whether the capturing time obtained in step S601 is during the false detection determination time period obtained in step S600. If the capturing time is determined to be during the false detection determination time period, in step S603, the false detection determination unit 302a registers the person region detected in step S403 in the false detection list 351 as a false detection region for which a false detection has occurred. The processing of step S411 and step S412 is then performed. The processing of step S411 and step S412 is the same as that in FIG. 4.", [par. 0049, ln. 1-9] "Note that the time period obtained in in step S600 may be the designation of an initiation time and an execution period instead of an initiation time and an ending time. Alternatively, an initiation time and a number of image frames on which to perform processing may be used. Alternatively, configuration may be taken to have a period or a number of image frames for which to perform processing as a predetermined value that is decided in advance, and obtain only an initiation time."). The examiner specifically notes that this updating includes a start time of the human body region because S600 uses an initiation time and execution period, and thus at least the "start time" of the human body information would be required to be updated. Furthermore, the examiner notes Kusakabe teaches a determination unit configured to determine whether the human body region information exists in the exclusion information ([Fig. 5, see 302a], [par. 0042, ln. 9-22] "…The false detection determination unit 302a registers a person region detected by the person region detection unit 301 in the false detection determination time period to a false detection list 351… The false detection determination unit 302a refers to the false detection list 351 to perform a false detection determination for a person region detected from an image obtained at a time other than a false detection determination time period obtained by the time period obtaining unit 310.", [par. 0048, ln. 8-35] "In step S604, the false detection determination unit 302a determines whether the person region selected in step S408 is a false detection in accordance with whether the person region is near a position of a false detection region recorded in the false detection list 351. If the detection position of the selected person region is near the detection position of any false detection region recorded in the false detection list 351, the person region is determined to be a false detection. Note that whether the detection position is near a false detection region can be determined in accordance with whether a distance between the detection position and the false detection region exceeds a predetermined value, for example. In addition, as a determination condition for whether or not there is a false detection, configuration may be such that the size of a detected region is taken as a condition, in addition to a distance condition. For example, that a difference (ratio) between the size of a detected person region and the size of a region recorded in the false detection list 351 is within a predetermined range may be used as a condition. When the selected person region is determined to be a false detection in step S604, the processing control unit 303 does not cause the person region processing unit 304 to process the person region. Accordingly, the processing simply returns to step S407."). Therefore, Kusakabe teaches wherein a determination unit is configured to determine whether the human body region information (e.g., size, location, ratio, etc.) exists in exclusion information, and furthermore, wherein, in the case where the human body region exists in the exclusion information, a start time of the human body region registered in the exclusion information is updated. Therefore, the rejections of claims 1-2, 4-8, 10-14, and 16-19 are maintained in view of a combination of Tanaka, Kusakabe, and Yasuda.

Claim Objections

6. Claims 1, 7, and 13 are objected to because of the following informalities: In ln. 16-17, claim 1 recites "…information, start time of human body region registered…"; consider correction to "…information, a start time of the human body region registered…". Claims 7 and 13 recite analogous informalities to claim 1 in ln. 12-13 and 13-14, respectively. The examiner further notes that Claim 1 recites "…start time…" as opposed to "…exclusion start time…" as recited in claims 7 and 13. Appropriate correction is required.

Claim Rejections - 35 USC § 103

7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

8. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

9. Claims 1-2, 4-8, 10-14, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2007/0147701 to Tanaka (hereinafter Tanaka) in view of U.S. Publication No. 2018/0165515 to Kusakabe (hereinafter Kusakabe), and further in view of WO 2021/001943 to Yasuda (hereinafter Yasuda).

10. Regarding Claim 1, Tanaka discloses an apparatus comprising: one or more processors; and at least one memory coupled to the one or more processors storing instruction that, when executed by the one or more processors, cause the one or more processors to function as: ([Fig. 1], [par. 0026, ln. 1-13] "The digital camera… an image signal processing circuit 12,… an image display unit 15,… the CPU 19, an automatic focus (AF) detection circuit 20, an automatic exposure & automatic white balance (AE&AWB) detection circuit 21, a memory 22, a video random access memory (VRAM) 23, a media controller 24, a recording medium 25, and a face detection circuit 26…"): an acquisition unit configured to acquire an image ([Fig. 1], [par. 0026, ln. 1-13]); a detection unit configured to detect a first region that may include human body on the acquired image ([Fig. 2], [par. 0036, ln. 1-5] "Referring now to FIG. 2, operation of the face detection circuit 26 is described. As shown in FIG. 2, the face detection circuit 26 detects the facial region A and the upper body region C contained in an image signal which is output from the image sensor 4."); {a determination unit configured to determine whether human body region information exists in exclusion information}; an adjustment unit configured to perform first image quality adjustment on the acquired image based on an image of the detected first region in a case where the first region is detected ([par. 0028, ln. 1-5] "The image signal processing circuit 12 performs image signal processing such as gamma correction, edge strengthen, and white balance control on an input digital image signal. The CPU 19 sets the parameters for these image signal processings.", [par. 0038, ln. 1-8] "Since the AE&AWB detection circuit 21 controls image exposure, image white balance and light emission intensity and duration of the flash… 21 generates an exposure control signal and a white balance control signal in response to an image signal, which is output from the image sensor 4, and outputs the exposure control signal and the white balance control signal to the CPU 19.", [par. 0039, ln. 1-13] "The exposure control signal may be a brightness appraisal value that represents brightness of an image. When an image signal is input… 21 determines an average brightness of facial region A (average_A) that was detected by the face detection circuit 26, an average brightness of region B (average_B) that was detected by… 26, and an average brightness of the upper body region C (average_C) that was detected by… 26. The CPU 19 in cooperation with… 21 then calculates a brightness appraisal value or exposure control signal according to Equation 1. (n + average_A + average_B + k * average_C) / (n + 1 + k)", [par. 0067, ln. 13-21] "…21 outputs an exposure control signal and a white balance control signal on the basis of an image received by the image sensor 4 and an output result from the face detection circuit 26…the CPU 19 performs … exposure control that adjusts a gain of the CDSAMP circuit 9, and white balance control that adjusts gains B and R of the image signal processing circuit 12."), {wherein, in the case where the human body region information exists in the exclusion information, start time of human body region registered in the exclusion information is updated}, wherein the detection unit further detects the first region again or detects a second region including a face on the adjusted image ([par. 0036, ln. 1-5], [Fig. 6], [par. 0067, ln. 12-17] "…FIG. 6, in the continuous shooting mode, the AE&AWB detection circuit 21 outputs an exposure …and a white balance control signal on the basis of an image received by the image sensor 4 and an output result from the face detection circuit 26, even after shooting first image…"), a storage unit configured to store information about the first region detected before the first image quality adjustment {as the exclusion information in a case where neither the first region nor the second region are detected on the acquired image after the first image quality adjustment} (See [par. 0067, ln. 13-21] for "before the first image quality adjustment", [par. 0033, ln. 1-5] "The memory 22 may include a read only memory (ROM), which is a non-volatile memory that stores a program for operating the CPU 19, and/or a random access memory (RAM), which is a volatile memory used as a work memory when the CPU 19 operates."), wherein, in a case where the second region is detected, the adjustment unit performs the first image quality adjustment or second image quality adjustment on the adjusted image based on an image of the second region ([par. 0038, ln. 1-8], [par. 0039, ln. 1-13], [par. 0046, ln. 1-3] "FIG. 4 is a flowchart of a method of calculating the gain B and the gain R using the AE&AWB detection circuit 21…", [par. 0047, ln. 1-9] "A value called Lab_F is calculated as L*a*b, where L is the average value of red components of all of the pixels, a is the average value of green components of all of the pixels, and b is the average value of blue components of all of the pixels in the facial region A, which is detected by the face detection circuit 26. Another value called Lab_B is calculated in a similar manner for the upper body region C and another value called Lab_A is calculated in a similar manner for the entire image.", [par. 0048, ln. 1-10] "In step S1, the AE&AWB detection circuit 21 calculates a before-correction gain R (Gr) and a before-correction gain B (Gb) that satisfy Equation 2 (S1), that is, a condition where ratios Rs:Bs:Gs for red, blue, and green components Rs, Bs, and Gs constituting a skin colour equals to ratios Rf.times.Gr:Bf.times.Gb:Gf wherein a red component Rf is multiplied with the before-correction gain R (Gr), and a blue component Bf is multiplied with the before-correction gain B (Gb). Rs : Bs : Gs = (Rf × Gr) : (Bf × Gb) : Gf (Equation 2)"), a {comparison unit} configured to compare detection information about the first region detected on the acquired image {and the exclusion information} ([par. 0068, ln. 24-27] "…Light intensity control, exposure control, and white balance control are performed in this manner for each of the images up to the (x-1)th image where a face is being photographed.", [par. 0069, ln. 1-18] "Next, if it is determined at a point in time during photographing that a face is not contained in an image photographed (e.g., the xth image)… the CPU 19 inputs an image received by the image sensor 4 to the face detection circuit 26, but… 26 fails to detect a face for the input image and outputs such a result to the CPU 19. Next, in… S12, whether a face detection has been successful is determined. If… failed… S14 is performed. In… S14, whether a change in an image between images is small is determined. The judgment is performed using Equation 3 as set forth below: abs(ave[y] - ave[y-1]) < H, abs(1 - ((R[y] × G[y-1]) / (G[y] × R[y-1]))) < I, abs(1 - ((R[y] × G[y-1]) / (G[y] × R[y-1]))) < J (Equation 3), where H, I and J are small positive integers, abs( ) is a function for calculating an absolute value, ave[ ] is a brightness average value of an image for an image, and R[ ], G[ ], and B[ ] are average values of red, green, and blue components for an image, respectively."), wherein the adjustment unit determines whether to perform the first image quality adjustment {in a case where the detection information is not included in the exclusion information} ([par. 0070, ln. 1-18] "Since the change in an image between images is small, Equation 3 is satisfied, that is, a condition of operation S14 is satisfied, and operation S15 is performed… light intensity control, exposure control, and white balance control are performed under the same conditions of a previous image ((x-1)th image). Next, since a face is not photographed and change in an image is small even after an (x+1)th image so that a condition of the operation S12 is not satisfied and a condition of the operation S14 is satisfied such that operation S15 is performed. In operation S15, light intensity control, exposure control, and white balance control are performed under the same conditions of a previous image. Since shooting is performed under the same light intensity control, the same exposure control, and the same white balance control as those for a last image where a face is photographed even when a face is not photographed in the case where change in an image is small, stable exposure and colour are achieved between images.").
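
Before turning to the limitations Tanaka does not disclose, a minimal Python sketch of the exposure logic quoted above may help orient the reader: the face-weighted brightness appraisal of Equation 1 and the small-change test of Equation 3 used in steps S12 through S15. All function and variable names are illustrative, not drawn from the reference, and the formulas simply mirror the quoted passages.

```python
# Illustrative sketch of the Tanaka passages quoted above; names are hypothetical.

def brightness_appraisal(average_A: float, average_B: float,
                         average_C: float, n: float, k: float) -> float:
    """Equation 1: face-weighted brightness appraisal value used as the
    exposure control signal (A = facial region, B = region B, C = upper body)."""
    return (n + average_A + average_B + k * average_C) / (n + 1 + k)

def change_is_small(ave, R, G, B, y, H, I, J) -> bool:
    """Equation 3 (step S14): treat the inter-frame change as small when the
    brightness and colour-ratio differences between frames y and y-1 stay
    under their thresholds. The quoted equation lists the R/G ratio twice;
    a B/G ratio is assumed here for the third term."""
    return (abs(ave[y] - ave[y - 1]) < H
            and abs(1 - (R[y] * G[y - 1]) / (G[y] * R[y - 1])) < I
            and abs(1 - (B[y] * G[y - 1]) / (G[y] * B[y - 1])) < J)
```

In the quoted flow, when face detection fails at S12 but the change is small at S14, light intensity, exposure, and white balance are kept at the previous image's settings (S15); this is the behaviour the examiner maps to the comparison and conditional-adjustment limitations.
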
Tanaka does not specifically disclose exclusion information, a determination unit configured to determine whether human body region information exists in exclusion information, updating a start time in the case where the human body information is included in exclusion information, or wherein, in the case where the human body is not detected, the adjustment unit brings image quality setting to a state before the first quality adjustment. Tanaka does not specifically disclose a comparison unit; however, one of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize that the processing performed for S12-S16 of Tanaka is analogous to a comparison of regions, and furthermore that such a comparison could be performed by a separate unit.

However, Kusakabe teaches a determination unit configured to determine whether human body information exists in exclusion information ([Fig. 5, see 302a], [par. 0042, ln. 9-22] "The false detection determination unit 302a registers a person region detected by the person region detection unit 301 in the false detection determination time period to a false detection list 351. While the list 350 described above by FIG. 3 is valid for one image frame, the false detection list 351 is maintained across a predetermined period (for example, until a subsequent false detection determination time period is initiated) in which a region is determined to be a false detection. The false detection determination unit 302a refers to the false detection list 351 to perform a false detection determination for a person region detected from an image obtained at a time other than a false detection determination time period obtained by the time period obtaining unit 310.", [par. 0048, ln. 8-35] "In step S604, the false detection determination unit 302a determines whether the person region selected in step S408 is a false detection in accordance with whether the person region is near a position of a false detection region recorded in the false detection list 351. If the detection position of the selected person region is near the detection position of any false detection region recorded in the false detection list 351, the person region is determined to be a false detection. Note that whether the detection position is near a false detection region can be determined in accordance with whether a distance between the detection position and the false detection region exceeds a predetermined value, for example. In addition, as a determination condition for whether or not there is a false detection, configuration may be such that the size of a detected region is taken as a condition, in addition to a distance condition. For example, that a difference (ratio) between the size of a detected person region and the size of a region recorded in the false detection list 351 is within a predetermined range may be used as a condition. When the selected person region is determined to be a false detection in step S604, the processing control unit 303 does not cause the person region processing unit 304 to process the person region. Accordingly, the processing simply returns to step S407."), wherein, in the case where the human body region information exists in the exclusion information, start time of human body region registered in the exclusion information is updated ([par. 0046, ln. 1-19] "Next, in step S601, the false detection determination unit 302a obtains the timing when the image frame received in step S402 was captured… the capturing time of each frame of the moving image is stored in a header portion or the like of the image data… because the capturing time is obtained when an image frame is received, step S601 can be treated as being included in step S402. In step S602, the false detection determination unit 302a determines whether the capturing time obtained in step S601 is during the false detection determination time period obtained in step S600. If the capturing time is determined to be during the false detection determination time period, in step S603, the false detection determination unit 302a registers the person region detected in step S403 in the false detection list 351 as a false detection region for which a false detection has occurred. The processing of step S411 and step S412 is then performed. The processing of step S411 and step S412 is the same as that in FIG. 4.", [par. 0049, ln. 1-9] "Note that the time period obtained in in step S600 may be the designation of an initiation time and an execution period instead of an initiation time and an ending time. Alternatively, an initiation time and a number of image frames on which to perform processing may be used. Alternatively, configuration may be taken to have a period or a number of image frames for which to perform processing as a predetermined value that is decided in advance, and obtain only an initiation time."), and a comparison unit to compare detection information of a first region with exclusion information ([par. 0042, ln. 9-22], [par. 0048, ln. 8-35]), and determining whether to perform processing in a case where the detection information is not included in the exclusion information ([par. 0029, ln. 10-20] "… step S409, the processing control unit 303 refers to a false detection determination result for the person region selected in step S408 to determine whether the selected person region is a false detection. When the person region is determined not to be a false detection, in step S410, the processing control unit 303 causes the person region processing unit 304 to perform processing with respect to the person region. Meanwhile, when the person region is determined to be a false detection in step S408, the processing returns to step S407.").

One of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize Tanaka and Kusakabe as within the same field of image processing of images including humans, and as analogous to the claimed invention. The motivation for combining the exclusion information, determination unit, and comparison unit of Kusakabe with the apparatus of Tanaka would have been obvious to one of ordinary skill in the art, in that it would reduce false detections, and as further taught in Kusakabe, to reduce unnecessary processing of falsely detected regions ([par. 0056, ln. 1-9] "As described above, it is possible to suppress false detections by a unified standard, even when performance of the person region detection processing of step S403 differs for each camera or application due to the processing being distributed over a plurality of apparatuses, or when differing targets are captured. Accordingly, it is possible to suppress variation between cameras of the results of step S410, which is processing that takes advantage of a detected person region.").
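
As a reading aid, here is a minimal Python sketch of the false-detection-list behaviour just quoted from Kusakabe (par. 0042, 0046, 0048): regions detected during the designated time period are registered to the list, and later detections are treated as false detections when they are near a registered region and, optionally, of comparable size. Class and parameter names are illustrative and not drawn from the reference.

```python
# Hypothetical sketch of Kusakabe's false detection list (351); names are
# illustrative. Only the registration/comparison logic mirrors the quotes.
from dataclasses import dataclass

@dataclass
class Region:
    x: float       # detection position (e.g., region centre)
    y: float
    size: float    # region area or scale

class FalseDetectionList:
    def __init__(self, max_distance: float, size_ratio_range: tuple):
        self.entries = []                         # registered false detection regions
        self.max_distance = max_distance          # "predetermined value" for distance
        self.size_ratio_range = size_ratio_range  # "predetermined range" for size ratio

    def register(self, region: Region, capture_time: float,
                 period_start: float, period_end: float) -> None:
        # Steps S601-S603: register the region only if the frame was captured
        # during the false detection determination time period.
        if period_start <= capture_time <= period_end:
            self.entries.append(region)

    def is_false_detection(self, region: Region) -> bool:
        # Step S604: a later detection is a false detection if it is near a
        # registered region (distance condition) and similar in size.
        lo, hi = self.size_ratio_range
        for e in self.entries:
            distance = ((region.x - e.x) ** 2 + (region.y - e.y) ** 2) ** 0.5
            ratio = region.size / e.size if e.size else float("inf")
            if distance <= self.max_distance and lo <= ratio <= hi:
                return True
        return False
```

In the quoted flow (steps S408 through S410), a region flagged this way is simply skipped rather than passed to the person region processing unit, which is the behaviour the examiner maps to "determining whether to perform processing".
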
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have combined the apparatus of Tanaka with the exclusion information, determination unit, and comparison unit of Kusakabe through known means, with no change to their respective function, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Tanaka with the exclusion information, determination unit, and comparison unit of Kusakabe.

However, a combination of Tanaka and Kusakabe does not specifically disclose wherein, in the case where the human body is not detected, the adjustment unit brings image quality setting to a state before the first quality adjustment. Yasuda teaches wherein, in the case where a human face is not detected, the adjustment unit brings image quality setting to a state before the first quality adjustment ([pg. 6, par. 8-10, ln. 1-6] "… control unit 106 controls to return the optical setting of the image pickup apparatus 20 to an initial value when a recontrol instruction is output from the recontrol instruction unit 108… When the recontrol instruction is output, the exposure control unit 1061 performs exposure control by returning the exposure time or the exposure time and the lighting time to the initial values and setting the default state. When the recontrol instruction is output, the image processing unit 1062 returns the gain value to the initial value and adjusts the gain to the default state", [pg. 7, par. 1, ln. 1-2] "The erroneous detection determination unit 107 determines whether or not the face detection unit 102 erroneously detects the driver's face, that is, erroneous detection determination, based on the result of the control performed by the optical setting control unit 106.", [pg. 10, par. 9, ln. 1-5] "…When the erroneous detection determination unit 107 determines that the information about the image pickup device 20 satisfies the face erroneous detection condition, it is assumed that the face detection unit 102 erroneously detects the driver's face, and an area reduction instruction for reducing the face detection area is given… unit 107 notifies the optical setting control unit 106 of the recontrol requirement for executing the control for returning the exposure time, the lighting time, or the gain value to the initial value, the recontrol instruction unit 108.", [pg. 10, par. 11, ln. 1 to pg. 11, par. 1, ln. 4] "… false detection determination unit 107 outputs a recontrol required notification, the recontrol instruction unit 108 causes the optical setting control unit 106 to control the optical setting of the image pickup apparatus 20 to return to an appropriate value…the appropriate value of the optical setting of the imaging device 20 is the initial value of the optical setting of the imaging device 20… causes the exposure control unit 1061 to perform exposure control for returning the exposure time to the initial value based on the re-control required notification"). One of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize Tanaka, Kusakabe, and Yasuda as in the same field of image quality adjustment of images including humans, and as analogous to the claimed invention.
Specifically, one of ordinary skill in the art would have incorporated the initial state image quality reversal of Yasuda to operate in the case where a human body is not detected, along with the human face. The motivation for this would have been obvious to one of ordinary skill in the art, and is disclosed in Yasuda, in that an image quality adjustment on an incorrect region can result in degradation of the image, and that reverting the image quality to an initial state allows for better reacquisition of the regions and subsequent image quality adjustments ([pg. 7, par. 11, ln. 1, to pg. 8, par. 1, ln. 9] "…when the face detection unit 102 erroneously detects the face of the occupant in the rear seat as the driver's face, the second face 202b is darkly imaged, and therefore the second face area 203b calculated by the brightness calculation unit 104… control unit 106 controls the optical setting according to the second face area 203b, which is the face area of the occupant… When the optical setting is controlled according to the second face region 203b, the first face region 203a, which should be originally detected as the driver's face region, is originally imaged brighter than the second face region 203b. Because it is a region, the image will be imaged brighter than necessary… when the exposure control unit 1061 of the optical setting control unit 106 performs exposure control, the average brightness of the pixels in the first face region 203a is assumed to be the average brightness of the pixels in the driver's face region. It is much larger than the average brightness… …when the image processing unit 1062 adjusts the gain, the gain value of the first face region 203a increases, so that so-called "overexposure" occurs in the first face region. FIG. 5 is a diagram showing an image in which overexposure occurs around the driver's face region on the image acquired from the image pickup apparatus 20 in the first embodiment."). One of ordinary skill in the art would recognize the human face as part of the human body, and it would have been obvious to further apply the initial state image reversal of Yasuda to the human body as detected in the combination of Tanaka and Kusakabe. One of ordinary skill in the art, before the effective filing date of the claimed invention, would have combined the apparatus of Tanaka with the exclusion information, determination unit, and comparison unit of Kusakabe, and further combined the apparatus of the combination of Tanaka and Kusakabe with the initial state image quality reversal of Yasuda applied to when the human body is not detected, through known means, with no change to their respective function, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Tanaka with the exclusion information, determination unit, and comparison unit of Kusakabe and the initial state image quality reversal of Yasuda to obtain the invention as specified in claim 1.

11. Regarding Claim 2, a combination of Tanaka, Kusakabe, and Yasuda teaches the apparatus of claim 1. Kusakabe teaches wherein the exclusion information includes information about the first region and a current time ([par. 0050, ln. 1-7] "…configuration may be taken to use the current time in the image processing apparatus 200 instead of the capturing time which is obtained in step S601. In this case, a region detected by the person region detection unit 301 while the current time is during the false detection determination time period is recorded in the false detection list 351 as a false detection region."), and {wherein the storage unit deletes the exclusion information when a predetermined time has passed since the current time}. Tanaka and Kusakabe do not specifically teach wherein the storage unit deletes exclusion information when a predetermined time has passed since the current time; however, one of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize this as common practice within the field of image processing, since it saves storage space and allows for continuous operation of an image processing system on a stream of images without running into storage limitations. One of ordinary skill in the art, in combining the apparatus and regions of Tanaka with the exclusion information of Kusakabe, would have further included the information about a first region and a current time, so that after a predetermined period of time had passed, the information about a region would be deleted from the exclusion information, thus allowing for renewed storage space. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Tanaka with the exclusion information, determination unit, and comparison unit of Kusakabe and the initial state image quality reversal of Yasuda to obtain the invention as specified in claim 2.

12. Regarding Claim 4, a combination of Tanaka, Kusakabe, and Yasuda teaches the apparatus of claim 1. Tanaka discloses wherein the comparison unit calculates a difference between the detection information about the region detected before the first image quality adjustment and the {exclusion information} ([par. 0038, ln. 1-8], [par. 0039, ln. 1-13], [par. 0068, ln. 24-27], [par. 0069, ln. 1-18]), and wherein the adjustment unit determines that the detection information is included in the exclusion information in a case where the difference is a predetermined threshold value or lower ([Fig. 7, S14], [par. 0070, ln. 1-18]). Tanaka does not specifically disclose that the quality adjustment is not performed or exclusion information. However, one of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize that there is no need to perform processing on exclusion information, since it constitutes a false detection as described by Kusakabe ([par. 0042, ln. 9-22]). Furthermore, one of ordinary skill in the art would likewise recognize that Kusakabe discloses calculating a difference between the detection information about the region detected and the exclusion information, wherein the detection information is included in the exclusion information in a case where the difference is a predetermined threshold value or lower ([par. 0048, ln. 12-27], [par. 0029, ln. 10-20]). Thus, one of ordinary skill in the art, in combining the apparatus of Tanaka with the exclusion information of Kusakabe, would not perform the quality adjustment should a difference between a region detected before the first image quality adjustment and exclusion information be a predetermined threshold value or lower. The motivation remains the same as described in claim 1, and further, it would have been obvious to one of ordinary skill in the art in that the region is likely a false detection based on the threshold comparison, thus saving processing load by not performing the image quality adjustment on a false detection. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Tanaka with the exclusion information, determination unit, and comparison unit of Kusakabe and the initial state image quality reversal of Yasuda to obtain the invention as specified in claim 4.

13. Regarding Claim 5, a combination of Tanaka, Kusakabe, and Yasuda teaches the apparatus of claim 1. Tanaka discloses wherein the detection information and the exclusion information include at least one of information about a position, a size, an edge, or a texture of the first region ([par. 0036, ln. 9-19] "First, the face detection circuit 26 extracts a skin colour {i.e., texture} region from the image signal… 26 performs outline extraction {i.e., edge} on the extracted region on the basis of a brightness change, and checks whether both eyes and a mouth exist on locations that supposedly correspond to both eyes and the mouth. When both eyes and the mouth are determined to exist, a region that corresponds to the locations is detected as the facial region A, and up and down locations {i.e., position, size} of the facial region are determined using positions relative to both eyes and the mouth.", [par. 0048, ln. 1-10], [par. 0049, ln. 20-21] "…it is then determined in step S4 whether a target in an image has white clothes or light clothes {i.e., texture}.", [par. 0064, ln. 1-5] "The face detection circuit 26 that has received the digital image signal from the CPU 19 detects positions of a facial region A (FIG. 2) and an upper body region C (FIG. 2) from the digital image signal, and outputs the detected positions to the CPU 19."). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Tanaka with the exclusion information, determination unit, and comparison unit of Kusakabe and the initial state image quality reversal of Yasuda to obtain the invention as specified in claim 5.

14. Regarding Claim 6, a combination of Tanaka, Kusakabe, and Yasuda teaches the apparatus of claim 1. Tanaka discloses wherein the first image quality adjustment and the second image quality adjustment include adjustment relating to exposure ([par. 0038, ln. 1-8], [par. 0039, ln. 1-13], [par. 0046, ln. 1-3], [par. 0047, ln. 1-9], [par. 0048, ln. 1-10]). One of ordinary skill in the art would recognize gain as related to exposure, specifically, in that gain and exposure both relate to control of light signals from an image. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Tanaka with the exclusion information, determination unit, and comparison unit of Kusakabe and the initial state image quality reversal of Yasuda to obtain the invention as specified in claim 6.

15. Regarding Claim 7, Tanaka discloses "a method for controlling an apparatus, the method comprising:" ([Claim 15, 1-3] "A method for compensating exposure of a photographic image of a human subject captured by a digital camera with a face detecting means, the method comprising:"), wherein the remainder of the claim is analogous to claim 1. Arguments analogous to claim 1 are further applicable to claim 7. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the method of Tanaka with the exclusion information, determination, and comparison of Kusakabe and the initial state image quality reversal of Yasuda to obtain the invention as specified in claim 7.

16. Regarding Claim 8, a combination of Tanaka, Kusakabe, and Yasuda teaches the method of claim 7. Arguments analogous to those made in claim 2 are further applicable to claim 8. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the method of Tanaka with the exclusion information, determination, and comparison of Kusakabe and the initial state image quality reversal of Yasuda to obtain the invention as specified in claim 8.

17. Regarding Claim 10, a combination of Tanaka, Kusakabe, and Yasuda teaches the method of claim 7. Arguments analogous to those made in claim 4 are further applicable to claim 10. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the method of Tanaka with the exclusion information, determination, and comparison of Kusakabe and the initial state image quality reversal of Yasuda to obtain the invention as specified in claim 10.

18. Regarding Claim 11, a combination of Tanaka, Kusakabe, and Yasuda teaches the method of claim 7. Arguments analogous to those made in claim 5 are further applicable to claim 11. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the method of Tanaka with the exclusion information, determination, and comparison of Kusakabe and the initial state image quality reversal of Yasuda to obtain the invention as specified in claim 11.

19. Regarding Claim 12, a combination of Tanaka, Kusakabe, and Yasuda teaches the method of claim 7. Arguments analogous to those made in claim 6 are further applicable to claim 12. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the method of Tanaka with the exclusion information, determination, and comparison of Kusakabe and the initial state image quality reversal of Yasuda to obtain the invention as specified in claim 12.

20. Regarding Claim 13, Tanaka discloses "a non-transitory computer readable storage medium storing a program that causes a computer to execute a method, the method comprising:" ([par. 0033, ln. 1-5]), wherein the remainder of the claim is analogous to claim 1. Arguments analogous to claim 1 are further applicable to claim 13. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the non-transitory computer readable storage medium of Tanaka with the exclusion information, determination, and comparison of Kusakabe and the initial state image quality reversal of Yasuda to obtain the invention as specified in claim 13.

21. Regarding Claim 14, a combination of Tanaka, Kusakabe, and Yasuda teaches the non-transitory computer readable storage medium of claim 13. Arguments analogous to those made in claim 2 are further applicable to claim 14. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the non-transitory computer readable storage medium of Tanaka with the exclusion information, determination, and comparison of Kusakabe and the initial state image quality reversal of Yasuda to obtain the invention as specified in claim 14.

22. Regarding Claim 16, a combination of Tanaka, Kusakabe, and Yasuda teaches the non-transitory computer readable storage medium of claim 13. Arguments analogous to those made in claim 4 are further applicable to claim 16. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the non-transitory computer readable storage medium of Tanaka with the exclusion information, determination, and comparison of Kusakabe and the initial state image quality reversal of Yasuda to obtain the invention as specified in claim 16.

23. Regarding Claim 17, a combination of Tanaka, Kusakabe, and Yasuda teaches the non-transitory computer readable storage medium of claim 13. Arguments analogous to those made in claim 5 are further applicable to claim 17. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the non-transitory computer readable storage medium of Tanaka with the exclusion information, determination, and comparison of Kusakabe and the initial state image quality reversal of Yasuda to obtain the invention as specified in claim 17.

24. Regarding Claim 18, a combination of Tanaka, Kusakabe, and Yasuda teaches the non-transitory computer readable storage medium of claim 13. Arguments analogous to those made in claim 6 are further applicable to claim 18. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the non-transitory computer readable storage medium of Tanaka with the exclusion information, determination, and comparison of Kusakabe and the initial state image quality reversal of Yasuda to obtain the invention as specified in claim 18.

25. Regarding Claim 19, a combination of Tanaka, Kusakabe, and Yasuda teaches the apparatus of claim 1. Kusakabe discloses wherein the comparison unit calculates a difference between the detection information and the exclusion information ([par. 0048, ln. 12-27]), and wherein the adjustment unit determines that the detection information is not included in the exclusion information in a case where the difference is not a predetermined threshold value or lower ([par. 0048, ln. 12-27], [par. 0029, ln. 10-20]). One of ordinary skill in the art, before the effective filing date of the claimed invention, would have combined the threshold value for exclusion in Kusakabe with the apparatus of Tanaka through known means, with no change to their respective function, and the combination would have yielded nothing more than predictable results. The motivation remains analogous to claim 1. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Tanaka with the exclusion information, determination, and comparison of Kusakabe and the initial state image quality reversal of Yasuda to obtain the invention as specified in claim 19.

26. Regarding Claim 20, a combination of Tanaka, Kusakabe, and Yasuda teaches the apparatus of claim 1. Tanaka and Kusakabe do not specifically disclose wherein, in a case where neither the first region nor the second region are detected on the acquired image after the first image quality adjustment, the adjustment unit further reverts the acquired image to a state before the first quality adjustment is performed. However, Yasuda teaches wherein, in a case where neither the first region nor the second region are detected on the acquired image after the first image quality adjustment, reverting an image to a state before a first image quality adjustment ([pg. 6, par. 8-10, ln. 1-6], [pg. 7, par. 1, ln. 1-2], [pg. 10, par. 9, ln. 1-5], [pg. 10, par. 11, ln. 1 to pg. 11, par. 1, ln. 4]). Specifically, arguments analogous to claim 1 are further applicable to claim 20, in that the human face is part of the human body, and furthermore, that one of ordinary skill in the art would have applied the initial state image quality reversal of Yasuda to the first region that possibly detects the human body for the same reason it is applied to the

Prosecution Timeline

Aug 29, 2022: Application Filed
Nov 07, 2024: Non-Final Rejection (§103)
Feb 12, 2025: Response Filed
Apr 18, 2025: Final Rejection (§103)
Jun 24, 2025: Request for Continued Examination
Jun 26, 2025: Response after Non-Final Action
Aug 21, 2025: Non-Final Rejection (§103)
Sep 25, 2025: Response Filed
Nov 24, 2025: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602823: RE-LOCALIZATION OF ROBOT (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597280: IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND PROGRAM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597161: SYSTEMS AND METHODS FOR OBJECT TRACKING AND LOCATION PREDICTION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586400: IMAGE PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND STORAGE MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586176: SYSTEMS AND METHODS FOR PREDICTING AN INCOMING ROTATIONAL BALANCE OF AN UNFINISHED WORKPIECE (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 83%
With Interview: 99% (+17.2%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 41 resolved cases by this examiner. Grant probability derived from career allow rate.
