Prosecution Insights
Last updated: April 19, 2026
Application No. 18/624,248

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM

Non-Final OA (§101, §103)
Filed: Apr 02, 2024
Examiner: THOMAS, MIA M
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: Shinshu University
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 12m
With Interview: 99%

Examiner Intelligence

Career allow rate: 86% (606 granted / 703 resolved), +24.2% vs Tech Center average (above average)
Interview lift: +15.7% (resolved cases with vs. without an interview)
Typical timeline: 2y 12m average prosecution; 12 applications currently pending
Career history: 715 total applications across all art units

Statute-Specific Performance

§101: 14.5% (-25.5% vs TC avg)
§103: 43.0% (+3.0% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 703 resolved cases.

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is responsive to communications filed on 04/02/2024. Claims 1-15 are pending in the instant application. Claims 1, 14 and 15 are independent. An Office Action on the merits follows.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in JP 2023-069351 on 04/20/2023. It is noted, however, that applicant has not filed a certified copy of the JP 2023-069351 application as required by 37 CFR 1.55.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. Appropriate correction is required.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 04/02/2024 and 12/18/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): "(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof."

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier.
Such claim limitations are: "an acquisition unit," "an extraction unit," "a correction unit," and "a binarization processing unit" in claim 1; "a determination unit" in claim 2; "a filter generation unit," "a division unit," and "a multiplication unit" in claim 3; "a cortical opacity detection unit" in claim 4; "an area specifying unit," "an opacity center acquisition unit," "an approximation line acquisition unit," and "a cortical opacity determination unit" in claim 5; "an opacity size determination unit" in claim 6; "a posterior subcapsular opacity detection unit" in claim 7; "an opacity center acquisition unit" and "a posterior subcapsular opacity determination unit" in claim 8; "an opacity size determination unit" in claim 9; "an opacity degree determination unit" in claim 10; "an area specifying unit" and "an opacity extraction unit" in claim 11; and "a mask image generation unit" in claim 12.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

35 U.S.C. 101 requires that a claimed invention must fall within one of the four eligible categories of invention (i.e., process, machine, manufacture, or composition of matter) and must not be directed to subject matter encompassing a judicially recognized exception as interpreted by the courts. MPEP 2106. The four eligible categories of invention are: (1) a process, which is an act, or a series of acts or steps; (2) a machine, which is a concrete thing, consisting of parts, or of certain devices and combinations of devices; (3) a manufacture, which is an article produced from raw or prepared materials by giving to these materials new forms, qualities, properties, or combinations, whether by hand labor or by machinery; and (4) a composition of matter, which is all compositions of two or more substances and all composite articles, whether they be the results of chemical union, or of mechanical mixture, or whether they be gases, fluids, powders, or solids. MPEP 2106(I).

Claim 15 is rejected under 35 U.S.C. 101 as not falling within one of the four statutory categories of invention because the broadest reasonable interpretation of the instant claims in light of the specification encompasses transitory signals. Transitory signals, however, are not within one of the four statutory categories (i.e., they are non-statutory subject matter). See MPEP 2106(I). Instant para [015] recites: "The image processing device 4 includes the storage unit 6. The storage unit 6 is a nonvolatile storage unit, and is a non-transitory tangible storage medium which stores, in a non-transitory manner, a program and data that can be read by a computer. The non-transitory tangible storage medium is formed by a semiconductor memory, a magnetic disk, or the like." Claims directed toward a non-transitory computer-readable medium may qualify as a manufacture and make the claim patent-eligible subject matter. MPEP 2106(I). Therefore, the Examiner recommends amending the claims to recite a "non-transitory computer-readable medium," which would resolve this issue. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Miura (US 20220121869 A1) in combination with Panetta (US 20240290008 A1).
Regarding Claim 1:

Miura discloses an image processing device (refer to para [001]: "The present invention relates to a biometric authentication device and a biometric authentication method that authenticate an individual using biometric information.") comprising: an acquisition unit configured to acquire a transillumination image including a lens area and an area therearound (refer to paras [015] and [037]: "The device may be a personal authentication device that includes authentication processing, or it may be a finger image acquisition device or finger feature image extraction device that specializes in acquiring finger images while authentication processing is performed outside the device."); the transillumination image having such a luminance gradient that a luminance gradually decreases outward from a center indicating a local-maximum luminance (refer to para [063]: "These biometric features can be acquired by enhancing biometric features such as line pattern features of epidermis and blood vessels and speckles features of fat lobules by filtering processes such as general edge-enhancement filters, Gabor filters, matched filters, etc., and then binarizing or tri-valued processing the results. The biometric features can also be acquired by extracting luminance gradient features from key points such as SIFT (Scale-Invariant Feature Transform) features."); an extraction unit configured to extract the lens area in the transillumination image (refer to para [016]: "a background removal unit extracts a background region from the image of the light intensity of the third wavelength and removes the background region from the images of the light intensity of the first and second wavelengths, respectively; and an authentication processing unit extracts various features of the living body from the images of the light intensity of the first and second wavelengths with the background area removed, matches them with the biometric features for each individual registered in advance, calculates the degree of similarity for each biometric feature, and performs biometric authentication to identify the individual based on the degree of similarity of the various biometric features."); a correction unit configured to perform correction so as to reduce the luminance gradient which is illumination unevenness in an image of the lens area extracted by the extraction unit (refer to para [117]: "Similarly, if the usage situation allows contact with the device, a finger rest 261 can be provided to physically fix the finger as shown in FIG. 13B. This makes it easier to understand the direction and position of the fingertip, and also stabilizes the position of the finger, reducing posture fluctuation and improving authentication accuracy. It also contributes to improved operability for users who find it difficult to hold their fingers still by floating them in the air."); and a binarization processing unit configured to binarize pixel values of the corrected image, to obtain a binarized image for lens opacity determination (refer to para [087]: "As another embodiment, first binarize the ambient light image with a certain threshold value, and define the pixels that have values as the background area. Then, the background area in the visible light image and infrared light image is replaced with black pixels. Then, the remaining bright area is binarized using a predetermined threshold or discriminant analysis, and the part with a value can be determined as the finger area. In the case of binarization, a morphology operation may be applied to remove fine noise components.").

Miura does not expressly disclose "local-maximum luminance" calculations, although Miura expressly calculates luminance gradients. Panetta teaches "receiving a hyperspectral image of a scene; selecting one or more bands from the hyperspectral image; and processing the selected bands to produce a color image. In an embodiment, processing the selected bands to produce a color image…" More specifically, Panetta teaches a processing application including a "local enhancement factor. [xmin]m,n and [xmax]m,n [that] represent the local block-based minimum and local maximum luminance level." Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Miura by adding a processor for calculating a "local enhancement function" as taught by Panetta. The suggestion/motivation for combining the teachings of Miura and Panetta would have been that "The goal of luminance calculation is to obtain a uniform luminance image from selected bands" (para [185], Panetta).
Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Miura and Panetta in order to obtain the specified claimed elements of Claim 1. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to the claim in question.

Regarding Claim 14:

Miura discloses an image processing method (refer to para [001]: "The present invention relates to a biometric authentication device and a biometric authentication method that authenticate an individual using biometric information.") comprising: an acquisition step of acquiring a transillumination image including a lens area and an area therearound (refer to paras [015] and [037]: "The device may be a personal authentication device that includes authentication processing, or it may be a finger image acquisition device or finger feature image extraction device that specializes in acquiring finger images while authentication processing is performed outside the device."); the transillumination image having such a luminance gradient that a luminance gradually decreases outward from a center indicating a local-maximum luminance (refer to para [063]: "These biometric features can be acquired by enhancing biometric features such as line pattern features of epidermis and blood vessels and speckles features of fat lobules by filtering processes such as general edge-enhancement filters, Gabor filters, matched filters, etc., and then binarizing or tri-valued processing the results. The biometric features can also be acquired by extracting luminance gradient features from key points such as SIFT (Scale-Invariant Feature Transform) features."); an extraction step of extracting the lens area in the transillumination image (refer to para [016]: "a background removal unit extracts a background region from the image of the light intensity of the third wavelength and removes the background region from the images of the light intensity of the first and second wavelengths, respectively; and an authentication processing unit extracts various features of the living body from the images of the light intensity of the first and second wavelengths with the background area removed, matches them with the biometric features for each individual registered in advance, calculates the degree of similarity for each biometric feature, and performs biometric authentication to identify the individual based on the degree of similarity of the various biometric features."); a correction step of performing correction so as to reduce the luminance gradient which is illumination unevenness in an image of the lens area extracted in the extraction step (refer to para [117]: "Similarly, if the usage situation allows contact with the device, a finger rest 261 can be provided to physically fix the finger as shown in FIG. 13B. This makes it easier to understand the direction and position of the fingertip, and also stabilizes the position of the finger, reducing posture fluctuation and improving authentication accuracy. It also contributes to improved operability for users who find it difficult to hold their fingers still by floating them in the air."); and a binarization processing step of binarizing pixel values of the corrected image, to obtain a binarized image for lens opacity determination (refer to para [087]: "As another embodiment, first binarize the ambient light image with a certain threshold value, and define the pixels that have values as the background area. Then, the background area in the visible light image and infrared light image is replaced with black pixels. Then, the remaining bright area is binarized using a predetermined threshold or discriminant analysis, and the part with a value can be determined as the finger area. In the case of binarization, a morphology operation may be applied to remove fine noise components.").

Miura does not expressly disclose "local-maximum luminance" calculations, although Miura expressly calculates luminance gradients. Panetta teaches "receiving a hyperspectral image of a scene; selecting one or more bands from the hyperspectral image; and processing the selected bands to produce a color image. In an embodiment, processing the selected bands to produce a color image…" More specifically, Panetta teaches a processing application including a "local enhancement factor. [xmin]m,n and [xmax]m,n [that] represent the local block-based minimum and local maximum luminance level." Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Miura by adding a processor for calculating a "local enhancement function" as taught by Panetta. The suggestion/motivation for combining the teachings of Miura and Panetta would have been that "The goal of luminance calculation is to obtain a uniform luminance image from selected bands" (para [185], Panetta). Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Miura and Panetta in order to obtain the specified claimed elements of Claim 14. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to the claim in question.
Regarding Claim 15:

Miura discloses a computer-readable storage medium having stored therein a program (refer to para [043]: "The memory 12 stores the program to be executed by CPU 11.") configured to cause a computer to execute: an acquisition step of acquiring a transillumination image including a lens area and an area therearound (refer to paras [015] and [037]: "The device may be a personal authentication device that includes authentication processing, or it may be a finger image acquisition device or finger feature image extraction device that specializes in acquiring finger images while authentication processing is performed outside the device."); the transillumination image having such a luminance gradient that a luminance gradually decreases outward from a center indicating a local-maximum luminance (refer to para [063]: "These biometric features can be acquired by enhancing biometric features such as line pattern features of epidermis and blood vessels and speckles features of fat lobules by filtering processes such as general edge-enhancement filters, Gabor filters, matched filters, etc., and then binarizing or tri-valued processing the results. The biometric features can also be acquired by extracting luminance gradient features from key points such as SIFT (Scale-Invariant Feature Transform) features."); an extraction step of extracting the lens area in the transillumination image (refer to para [016]: "a background removal unit extracts a background region from the image of the light intensity of the third wavelength and removes the background region from the images of the light intensity of the first and second wavelengths, respectively; and an authentication processing unit extracts various features of the living body from the images of the light intensity of the first and second wavelengths with the background area removed, matches them with the biometric features for each individual registered in advance, calculates the degree of similarity for each biometric feature, and performs biometric authentication to identify the individual based on the degree of similarity of the various biometric features."); a correction step of performing correction so as to reduce the luminance gradient which is illumination unevenness in an image of the lens area extracted in the extraction step (refer to para [117]: "Similarly, if the usage situation allows contact with the device, a finger rest 261 can be provided to physically fix the finger as shown in FIG. 13B. This makes it easier to understand the direction and position of the fingertip, and also stabilizes the position of the finger, reducing posture fluctuation and improving authentication accuracy. It also contributes to improved operability for users who find it difficult to hold their fingers still by floating them in the air."); and a binarization processing step of binarizing pixel values of the corrected image, to obtain a binarized image for lens opacity determination (refer to para [087]: "As another embodiment, first binarize the ambient light image with a certain threshold value, and define the pixels that have values as the background area. Then, the background area in the visible light image and infrared light image is replaced with black pixels. Then, the remaining bright area is binarized using a predetermined threshold or discriminant analysis, and the part with a value can be determined as the finger area. In the case of binarization, a morphology operation may be applied to remove fine noise components.").

Miura does not expressly disclose "local-maximum luminance" calculations, although Miura expressly calculates luminance gradients. Panetta teaches "receiving a hyperspectral image of a scene; selecting one or more bands from the hyperspectral image; and processing the selected bands to produce a color image. In an embodiment, processing the selected bands to produce a color image…" More specifically, Panetta teaches a processing application including a "local enhancement factor. [xmin]m,n and [xmax]m,n [that] represent the local block-based minimum and local maximum luminance level." Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Miura by adding a processor for calculating a "local enhancement function" as taught by Panetta. The suggestion/motivation for combining the teachings of Miura and Panetta would have been that "The goal of luminance calculation is to obtain a uniform luminance image from selected bands" (para [185], Panetta). Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Miura and Panetta in order to obtain the specified claimed elements of Claim 15. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to the claim in question.

Allowable Subject Matter

Claims 2-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The prior art, either singly or in combination, does not teach, disclose, or suggest at least the following claim limitation(s): "… the correction unit includes a filter generation unit configured to generate an average filter from an original image of the lens area extracted by the extraction unit, a division unit configured to divide each pixel value of the original image by the average filter, and a multiplication unit configured to multiply each pixel value of an image that has undergone division by the division unit, by an average luminance of the original image."

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MIA M THOMAS, whose telephone number is (571) 270-1583. The examiner can normally be reached M-Th 8:30am-4:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen (Steve) Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

MIA M. THOMAS
Primary Examiner, Art Unit 2665

/MIA M THOMAS/
Primary Examiner, Art Unit 2665
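The illumination correction that the examiner indicated as allowable subject matter (generate an average filter from the lens-area image, divide each pixel by it, then multiply by the image's average luminance) can be sketched as follows. This is an illustrative NumPy reconstruction, not the applicant's actual implementation; the window size `k` and edge padding are assumed parameters.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def average_filter(img: np.ndarray, k: int = 31) -> np.ndarray:
    """Local mean ("average filter") over a k x k window, edge-padded
    so the output has the same shape as the input (k must be odd)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    windows = sliding_window_view(padded, (k, k))
    return windows.mean(axis=(-2, -1))

def correct_illumination(lens_img: np.ndarray, k: int = 31) -> np.ndarray:
    """Flatten a smooth luminance gradient: divide each pixel by the
    local average ("division unit"), then rescale by the original
    image's mean luminance ("multiplication unit")."""
    img = lens_img.astype(np.float64)
    local_avg = average_filter(img, k)
    ratio = img / np.maximum(local_avg, 1e-6)  # division by the average filter
    return ratio * img.mean()                  # restore overall brightness
```

On a smooth radial gradient the ratio is close to 1 everywhere, so the output is nearly uniform while local deviations (e.g., opacities) survive the division.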
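The binarization step quoted from Miura (threshold chosen by "discriminant analysis," then a morphology operation to remove fine noise) corresponds to the classic Otsu threshold followed by a morphological opening. A minimal NumPy sketch under those assumptions, not code from either reference:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def otsu_threshold(img: np.ndarray) -> float:
    """Discriminant-analysis threshold: pick the cut that maximizes
    the between-class variance of the grayscale histogram."""
    hist, edges = np.histogram(img.ravel(), bins=256)
    hist = hist.astype(np.float64)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)              # weight of the "below" class per cut
    w1 = w0[-1] - w0                  # weight of the "above" class
    s0 = np.cumsum(hist * centers)
    mu0 = s0 / np.maximum(w0, 1e-12)
    mu1 = (s0[-1] - s0) / np.maximum(w1, 1e-12)
    return float(centers[np.argmax(w0 * w1 * (mu0 - mu1) ** 2)])

def opening(mask: np.ndarray, k: int = 3) -> np.ndarray:
    """Morphological opening (erosion then dilation) with a k x k box,
    which deletes bright specks smaller than the structuring element."""
    pad = k // 2
    def _win(m):
        return sliding_window_view(np.pad(m, pad, mode="edge"), (k, k))
    eroded = _win(mask).min(axis=(-2, -1))
    return _win(eroded).max(axis=(-2, -1)).astype(bool)

def binarize(img: np.ndarray) -> np.ndarray:
    mask = img > otsu_threshold(img)
    return opening(mask)              # remove fine noise components
```

Opening never adds pixels that were not in the thresholded mask, so large regions survive while isolated noise pixels are removed.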

Prosecution Timeline

Apr 02, 2024
Application Filed
Mar 07, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602938
SYSTEM AND METHOD FOR ITEM IDENTIFICATION USING CONTAINER-BASED CLASSIFICATION
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597154
IMAGE ANALYSIS METHOD AND CAMERA APPARATUS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12590529
BOREHOLE IMAGE INTERPRETATION AND ANALYSIS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586220
SYSTEM AND METHOD FOR CAMERA RE-CALIBRATION BASED ON AN UPDATED HOMOGRAPHY
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579220
Visual Attribute Expansion via Multiple Machine Learning Models
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 99% (+15.7%)
Median Time to Grant: 2y 12m
PTA Risk: Low
Based on 703 resolved cases by this examiner. Grant probability is derived from the career allow rate.
