Prosecution Insights
Last updated: April 19, 2026
Application No. 18/599,564

IMAGE PROCESSING DEVICE

Status: Non-Final OA (§102)
Filed: Mar 08, 2024
Examiner: PHAM, NHUT HUY
Art Unit: 2674
Tech Center: 2600 — Communications
Assignee: Aisin Corporation
OA Round: 1 (Non-Final)

Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (above average): 42 granted / 53 resolved, +17.2% vs TC avg
Interview Lift: +26.8% (strong), among resolved cases with interview
Typical Timeline: 3y 0m average prosecution; 31 applications currently pending
Career History: 84 total applications across all art units

Statute-Specific Performance

§101: 9.4% (-30.6% vs TC avg)
§103: 62.2% (+22.2% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 14.5% (-25.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 53 resolved cases.
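The displayed deltas are internally consistent with a single implied Tech Center average. A minimal sketch, assuming each delta is simply the examiner's rate minus a common TC average (the 40% figure is inferred from the numbers above, not stated by the tool):

```python
# Examiner's statute-specific rates (%), as shown on the dashboard.
examiner_rate = {"101": 9.4, "103": 62.2, "102": 11.9, "112": 14.5}

# Assumption: every "vs TC avg" delta is rate - tc_average; ~40% fits all four rows.
tc_average = 40.0

for statute, rate in examiner_rate.items():
    print(f"§{statute}: {rate}% ({rate - tc_average:+.1f}% vs TC avg)")
```

Running this reproduces the four deltas above, which suggests the dashboard benchmarks every statute against one shared Tech Center estimate rather than per-statute baselines.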

Office Action (§102)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

The United States Patent & Trademark Office appreciates the application that is submitted by the inventor/assignee. The United States Patent & Trademark Office reviewed the following application and has made the following comments below.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 03/08/2024 is considered and attached.

Priority

This application claims benefit of foreign priority under 35 U.S.C. 119(a)-(d) of JP2023-059416, filed in Japan on 03/31/2023.

Claim Status

Claims 1 and 3-4 are rejected under 35 U.S.C. § 102. Claim 2 is objected to.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

“an acquisition unit” in claim 1, line 2. The corresponding structure is disclosed in the specification at [0021]; element 28 in FIG. 3.
“a region-of-interest setting unit” in claim 1, line 5. The corresponding structure is disclosed in the specification at [0021]; element 31 in FIG. 3.
“a first setting unit” in claim 1, line 8. The corresponding structure is disclosed in the specification at [0027]; element 32 in FIG. 3.
“a mode switching unit” in claim 2, line 2. The corresponding structure is disclosed in the specification at [0023]; element 30 in FIG. 3.
“a second setting unit” in claim 3, line 2. The corresponding structure is disclosed in the specification at [0032]; element 34 in FIG. 3.

The interpretation of the above terms is a computing system and equivalents thereof. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 3-4 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Yamamoto et al.
(US-20210160432-A1, hereinafter Yamamoto).

CLAIM 1

Regarding Claim 1, Yamamoto teaches an image processing device (Yamamoto, Abstract: “An image processing device”) comprising:

an acquisition unit (Yamamoto, ¶ [0050]: “acquirer acquires images generated by the imagers”) that acquires a plurality of images of which imaging target regions partially overlap with each other (Yamamoto, ¶ [0041 and 0050-0053]: “Image data (images) generated by the imagers includes mutually overlapping regions to prevent missing regions in combined images”), the plurality of images being captured by a plurality of imaging units provided in a vehicle as a surrounding situation of the vehicle (Yamamoto, ¶ [0039]: “the imagers successively shoot the surroundings outside the vehicle including road surfaces on which the vehicle is movable … to output image data.”);

a region-of-interest setting unit (Yamamoto, ¶ [0052]: “region-of-interest setter 30”) that sets a plurality of regions of interest included in a plurality of overlapping regions in which two adjacent imaging target regions overlap with each other (Yamamoto, ¶ [0052]: “the region-of-interest setter sets regions of interest 40 (40FL, 40RL, 40RR, and 40FR) in each of the overlapping regions of the imaging regions acquired by the acquirer ... The regions of interest are, for example, rectangular regions having given lengths in the transverse and longitudinal directions of the vehicle”; see FIG.
5);

and a first setting unit (Yamamoto, ¶ [0056-0059]: “first setter 32”) that sets a correction value for correcting brightness of the plurality of images based on first target brightness (Yamamoto, ¶ [0056-0059]: “The first setter corrects the luminance of the regions of interest … The fixed-value setter corrects luminance with a correction value for allowing the luminance of each region of interest to match the target luminance”),

wherein the first target brightness is a value obtained by adding a first positive value to an average value of brightness of all the regions of interest (Yamamoto, ¶ [0058 and 0091]: “For example, … the left-side region of interest 40FL exhibits luminance of 150 and the right-side region of interest 40FR exhibits luminance of 100 in the vehicular transverse direction, … the average luminance of 125 of the region of interest 40FL and the region of interest 40FR is set to the target luminance. After determining that the brightness of the entire imaging region 36F after corrected with the target luminance is insufficient, … the fixed-value addition setter 32 b adds the luminance adjustment value of 50, which is defined in advance by experiment, to uniformly correct the brightness of the entire imaging region 36F to rise”. The Examiner notes Yamamoto’s “target luminance” with added “adjustment value” corresponds to the invention’s “first target brightness”),

and is a value equal to or lower than a first threshold value (Yamamoto, ¶ [0091]: “target luminance may be too low, that is, the corrected image may be not sufficiently bright although having subjected to luminance correction. Thus, after determining that the corrected luminance by the linear interpolation formula 42 is lower than the pre-set luminance (lower limit luminance) and it is to be increased”) higher than the first positive value.
(Yamamoto, ¶ [0091]: “after determining that the luminance to be corrected by the linear interpolation formula 42 matches or exceeds the pre-set luminance (lower limit luminance), the fixed-value addition setter 32 b determines no increase in luminance”. Yamamoto teaches that the “target luminance” with added “adjusted value” must be equal to or lower than the “lower limit”; thus, the “lower limit” is higher than both the “target luminance” and the “adjusted value”.)

CLAIM 3

Regarding claim 3, Yamamoto teaches the device of Claim 1. In addition, Yamamoto teaches a second setting unit (Yamamoto, ¶ [0049]: “a second setter”; see FIG. 3, element 34),

wherein the first setting unit sets, using the first target brightness, a first correction value for correcting brightness of a first region of interest (Yamamoto, ¶ [0083]: “For example, the region of interest 40FL (first region of interest) and the region of interest 40FR (second region of interest) in the imaging region 36F ahead of the vehicle 10 are considered. The region of interest 40FL exhibits luminance (average luminance in the region) of 150 in 256 levels, and the region of interest 40FR exhibits luminance of 100. The target luminance is set to 125 in 256 levels by the fixed-value setter 32 a. In this case, to correct the luminance to the target luminance, the region of interest 40FL requires a correction value of −25 and the region of interest 40FR requires a correction value of +25.”; see FIG. 5)

included in a first overlapping region in which a first imaging target region, which is one of a pair of imaging target regions separated from each other with the vehicle interposed therebetween (Yamamoto, ¶ [0051]: “The overlapping region 38 (overlapping region 38FL) between the first imaging region and the second imaging region may be referred to as a first overlapping region.”; see annotated FIG.
4 below), and a second imaging target region, which is one of a pair of imaging target regions adjacent to the first imaging target region, overlap with each other (Yamamoto, ¶ [0051]: “One (e.g., the imaging region 36SL) of the pair of imaging regions 36 (e.g., the imaging region 36SL and the imaging region 36SR) adjacent to the first imaging region may be referred to as a second imaging region”; see annotated FIG. 4 below),

and a second correction value for correcting brightness of a second region of interest (Yamamoto, ¶ [0083]: “For example, the region of interest 40FL (first region of interest) and the region of interest 40FR (second region of interest) in the imaging region 36F ahead of the vehicle 10 are considered. The region of interest 40FL exhibits luminance (average luminance in the region) of 150 in 256 levels, and the region of interest 40FR exhibits luminance of 100. The target luminance is set to 125 in 256 levels by the fixed-value setter 32 a. In this case, to correct the luminance to the target luminance, the region of interest 40FL requires a correction value of −25 and the region of interest 40FR requires a correction value of +25.”; see FIG. 5)

included in a second overlapping region in which the first imaging target region and a third imaging target region (Yamamoto, ¶ [0051]: “The overlapping region 38 (overlapping region 38FR) between the first imaging region and the third imaging region may be referred to as a second overlapping region.”; see annotated FIG. 4 below), which is the other of the imaging target regions adjacent to the first imaging target region, overlap with each other (Yamamoto, ¶ [0051]: “the other of the pair of imaging regions 36 (e.g., the imaging region 36SL and the imaging region 36SR) adjacent to the first imaging region may be referred to as a third imaging region (e.g., the imaging region 36SR)”; see annotated FIG.
4 below),

and the second setting unit sets, using the first correction value and the second correction value, an individual correction value for correcting brightness of a region between at least the first region of interest and the second region of interest in the first imaging target region (Yamamoto, ¶ [0008]: “a second setter that sets, according to the first correction value and the second correction value, an individual correction value for correcting luminance of a region between the first region of interest and the second region of interest.”; ¶ [0061, 0064 and 0082]: “The luminance setter 34 d sets an individual correction value for the luminance of the region between the first region of interest (e.g., the region of interest 40FL) and the second region of interest (e.g., the region of interest 40FR) by the linear interpolation formula”).

CLAIM 4

Regarding claim 4, Yamamoto teaches the device of Claim 3. In addition, Yamamoto teaches the second setting unit calculates the individual correction value based on an interpolation expression (Yamamoto, ¶ [0064 and 0072]: “luminance setter 34 d corrects (sets) the luminance of the region between the region of interest 40FL and the region of interest 40FR with the correction value (individual correction value) calculated by the generated linear interpolation formula 42F.”) for linear interpolation between the first correction value and the second correction value (Yamamoto, ¶ [0061]: “the linear interpolator 34 a generates, for example, a straight-line interpolation formula (straight line connecting the first correction value and the second correction value) for linear interpolation.”), and a slope of the interpolation expression is adjustable.
(Yamamoto, ¶ [0080-0081]: “the gradient setter 34 b corrects the gradient of the linear interpolation formula 42 generated by the linear interpolator”)

Allowable Subject Matter

Claim 2 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The closest prior art references for Claim 2 are:

Yamamoto (US-20210160432-A1). Yamamoto teaches an image processing system that acquires, modifies, and displays images of the surrounding views of a vehicle to the driver. The images are combined into a displayed bird’s-eye-view image. The system includes correcting the brightness of image data to improve the visibility of the displayed content. Yamamoto also discloses two display modes, a normal mode and a dark mode, and a method to switch between the two modes.

Takahashi et al. (US20240129638A1). Takahashi teaches a vehicle image display system that includes: a display device for displaying a digital image which is an image based on image data; a dark condition determination unit for determining whether a dark condition is satisfied with respect to the brightness of the surrounding environment of the vehicle; a luminance determination unit for determining whether the image data includes a high-luminance pixel; a pixel value determination unit for determining whether a pixel having a pixel value equal to or lower than a second threshold value is not included in a peripheral pixel; and an image adjustment unit for performing a first brightness adjustment process for reducing the brightness of the image data when the dark condition is satisfied, the image data includes a high-luminance pixel, and the peripheral pixel does not include a pixel having a pixel value equal to or lower than the second threshold value.

Both Yamamoto and Takahashi teach systems for correcting the brightness of image content on a vehicle based on two display modes.
Neither Yamamoto, nor Takahashi, nor the combination teaches “a mode switching unit that switches between a first mode and a second mode based on information on lightness, wherein the first setting unit sets the correction value using the first target brightness in the first mode, and sets the correction value using second target brightness equal to or higher than the first target brightness in the second mode, and the second target brightness is the first threshold value.”

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NHUT HUY (JEREMY) PHAM, whose telephone number is (703) 756-5797. The examiner can normally be reached Mo-Fr, 8:30am-6pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, O'Neal Mistry, can be reached at (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NHUT HUY PHAM/
Examiner, Art Unit 2674

/Ross Varndell/
Primary Examiner, Art Unit 2674
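The luminance-correction scheme the examiner maps onto Claims 1 and 3-4 can be sketched from Yamamoto's worked example (¶ [0083], [0091]): each region of interest gets a correction value toward a target luminance, the region between two ROIs is corrected by linear interpolation between those two values, and a fixed adjustment is added when the result is still too dark. A minimal sketch; function names are illustrative, and the lower-limit value is an assumption (Yamamoto states no number for it):

```python
def roi_correction(roi_luma: float, target_luma: float) -> float:
    """Correction value that brings a region of interest to the target luminance."""
    return target_luma - roi_luma

def interpolated_corrections(c_first: float, c_second: float, width: int) -> list:
    """Per-position correction values linearly interpolated between the first
    and second correction values (the 'linear interpolation formula')."""
    step = (c_second - c_first) / (width - 1)
    return [c_first + step * i for i in range(width)]

# Worked example from the OA: ROI luminances of 150 and 100 (in 256 levels).
target = (150 + 100) / 2              # average luminance -> target of 125
c_left = roi_correction(150, target)  # -25.0
c_right = roi_correction(100, target) # +25.0
profile = interpolated_corrections(c_left, c_right, width=11)

# Fixed-value addition (¶ [0091]): if the corrected region is still darker than
# a pre-set lower-limit luminance, a uniform adjustment value is added on top.
LOWER_LIMIT = 150  # hypothetical pre-set lower limit (value not given in the OA)
ADJUSTMENT = 50    # Yamamoto's adjustment value "defined in advance by experiment"
if target < LOWER_LIMIT:
    target += ADJUSTMENT  # uniformly raise brightness of the imaging region
```

With these numbers the left ROI is darkened by 25, the right brightened by 25, and the interpolated profile crosses zero at the midpoint; the claimed "adjustable slope" corresponds to Yamamoto's gradient setter modifying that straight-line formula.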

Prosecution Timeline

Mar 08, 2024
Application Filed
Feb 04, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598397
DIRT DETECTION METHOD AND DEVICE FOR CAMERA COVER
2y 5m to grant Granted Apr 07, 2026
Patent 12598074
FACIAL RECOGNITION METHOD AND APPARATUS, DEVICE, AND MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12597254
TRACKING OPERATING ROOM PHASE FROM CAPTURED VIDEO OF THE OPERATING ROOM
2y 5m to grant Granted Apr 07, 2026
Patent 12592087
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12579622
METHOD AND APPARATUS FOR PROCESSING IMAGE SIGNAL, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+26.8%)
Median Time to Grant: 3y 0m
PTA Risk: Low

Based on 53 resolved cases by this examiner. Grant probability derived from career allow rate.
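The projection figures are consistent with simple arithmetic over the examiner's career record. This is only a plausible reconstruction: applying the interview lift multiplicatively and capping at 99% are assumptions, not the tool's documented model.

```python
granted, resolved = 42, 53
base = granted / resolved                       # 42/53 ~ 0.792 -> the displayed 79%
tc_avg = base - 0.172                           # "+17.2% vs TC avg" implies a TC average near 62%
with_interview = min(base * (1 + 0.268), 0.99)  # +26.8% lift, capped at 99% (assumed)
print(f"Base: {base:.0%}, with interview: {with_interview:.0%}")
```

Under these assumptions the 79% base rate, the ~62% Tech Center baseline, and the 99% with-interview figure all fall out of the 42-of-53 career record.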
