Prosecution Insights
Last updated: April 19, 2026
Application No. 18/957,977

INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD

Non-Final OA §103
Filed
Nov 25, 2024
Examiner
BENNETT, STUART D
Art Unit
2481
Tech Center
2400 — Computer Networks
Assignee
Canon Kabushiki Kaisha
OA Round
1 (Non-Final)
69%
Grant Probability
Favorable
1-2
OA Rounds
2y 5m
To Grant
54%
With Interview

Examiner Intelligence

Grants 69% — above average
69%
Career Allow Rate
245 granted / 355 resolved
+11.0% vs TC avg
-15.0%
Interview Lift
Minimal lift; based on resolved cases with interview (with vs. without comparison)
Typical timeline
2y 5m
Avg Prosecution
31 currently pending
Career history
386
Total Applications
across all art units
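The headline figures on this card are simple arithmetic on the career counts shown above. A minimal sketch of how they presumably derive (the flat additive interview adjustment is an assumption inferred from the 69% → 54% pair, not a documented methodology):

```python
# Career allow rate: granted / resolved, rounded to a whole percentage.
granted, resolved = 245, 355
allow_rate = round(100 * granted / resolved)  # 69

# "With interview" figure: the dashboard appears to apply the -15.0-point
# interview lift additively (assumption: 69 - 15 = 54).
interview_lift = -15.0
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate}%")         # 69%
print(f"With interview:    {with_interview:.0f}%")  # 54%
```

The "+11.0% vs TC avg" line is consistent with this arithmetic as well (69% − 11.0 points implies a Tech Center average near 58%).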

Statute-Specific Performance

§101
4.7%
-35.3% vs TC avg
§103
48.4%
+8.4% vs TC avg
§102
12.7%
-27.3% vs TC avg
§112
22.1%
-17.9% vs TC avg
Black line = Tech Center average estimate • Based on career data from 355 resolved cases
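The four "vs TC avg" deltas are mutually consistent with a single flat baseline: subtracting each delta from its rate gives 40.0% in every case, suggesting the "Tech Center average estimate" is one number rather than a per-statute figure. A quick sanity check (the 40% baseline is inferred from the displayed deltas, not stated in the source data):

```python
# (rate shown on the card, stated delta vs. Tech Center average), per statute.
stats = {
    "§101": (4.7, -35.3),
    "§103": (48.4, +8.4),
    "§102": (12.7, -27.3),
    "§112": (22.1, -17.9),
}

# rate - delta recovers the implied Tech Center baseline for each statute.
for statute, (rate, delta) in stats.items():
    print(statute, round(rate - delta, 1))  # 40.0 for all four
```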

Office Action

§103
DETAILED ACTION

The present Office action is in response to the application filing on November 25, 2024 and the Information Disclosure Statements.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The Information Disclosure Statements (IDS) submitted on 11/25/2024, 01/08/2025, and 07/09/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the Information Disclosure Statements are being considered by the Examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: --RETINAL ABNORMALITY INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD--.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: Claim 1 - elongation information acquisition unit, data acquisition unit, and analysis unit. Claim 7 - display control unit. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. Upon review of the specification, the following is the disclosed corresponding structure: FIG. 2 illustrates computer components for the information processing unit 100 for implementing the elongation information acquisition unit, data acquisition unit, analysis unit, and display control unit. Paragraphs [0038-0040] and [0048-0050] describe the structural components, such as the use of a processor and memory. 
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2022/0335633 A1 (hereinafter “Iwase”) in view of U.S. Publication No. 2021/0158525 A1 (hereinafter “Mizobe”).

Regarding claim 1, Iwase discloses an information processing apparatus (FIG.
1, controlling apparatus 200) comprising: an elongation information acquisition unit (FIG. 1, signal processing unit 190) configured to acquire information regarding an elongation state of an eyeball to be analyzed ([0086], “The signal processing unit 190 performs generation of an image, analysis of the generated image, generation of visualized information of an analysis result, etc., based on signals output from the differential detector 129, the APD 152, and the anterior ocular segment camera 165, respectively.” [0262], “the thickness and state of the entire retina, and the shape of the retina can be more accurately grasped with single-time imaging.” FIG. 3A depicts imaging including the shape of the eyeball. [0239], “the ocular axial length of the eye to be examined:” e.g., elongation of the eye); a data acquisition unit (FIG. 1, signal processing unit 190) configured to acquire data including information regarding a thickness of a retinal layer of the eyeball ([0086], “The signal processing unit 190 performs generation of an image, analysis of the generated image, generation of visualized information of an analysis result, etc., based on signals output from the differential detector 129, the APD 152, and the anterior ocular segment camera 165, respectively.” [0262], “the thickness and state of the entire retina, and the shape of the retina can be more accurately grasped with single-time imaging.” [0271], “The analyzing unit 1908 can, for example, measure the curvature of a boundary line and the thickness of the retina”); and an analysis unit (FIG. 1, signal processing unit 190) configured to analyze ([0280], “The analysis map 1915 is a map image indicating the analysis results generated by the analyzing unit 1908. 
The map image may be, for example, the above-described curvature map, a layer thickness map, or a blood vessel density map.” Note, the analyzing unit 1908 performs image processing to determine the shape of the eye and the thickness of the retinal layer and then the data is used for outputting the analysis map used in diagnosis. [0271], “The analyzing unit 1908 performs image analysis processing on tomographic images, motion contrast images, and three-dimensional data. The analyzing unit 1908 can, for example, measure the curvature of a boundary line and the thickness of the retina from a tomographic image and a three-dimensional tomographic image.” [0283], “the analysis of the curvature radius and the layer thickness of the retinal layer in image diagnosis”), wherein the analysis unit includes a trained model configured to use, as input, at least the data including the information regarding the thickness of the retinal layer to output information regarding ([0290], “The generator intends to generate data similar to training data, and the discriminator discriminates whether data comes from training data or from generated models. In this case, learning is performed for a generation model such that, when a tomographic image of an arbitrary cross section position of three-dimensional data built from radial scan data, as illustrated by the tomographic image 911 in FIG. 9C, is input, a tomographic image as if it is actually imaged is generated. Accordingly, the data generating unit 1904 may generate, by using the learned model, three-dimensional data directly from a plurality of tomographic images obtained by radial scan.” Note, the topographical image includes image data of the thickness of the retinal layer imaged). Iwase fails to expressly disclose an abnormality in the thickness of the retinal layer. 
Although Iwase discloses topographical images are used for diagnosing diseases in the prior-art, there is no express disclosure of diagnosing an abnormality related to the thickness of the retinal layer. See Iwase, [0003]. However, Mizobe teaches an abnormality in the thickness of the retinal layer ([0659], “the image processing apparatus 20, 80, 2800 or 4400 may identify the kind of disease or an abnormal site of an eye to be examined from an image using a separately prepared learned model” [0382], “when an image depicting the retina layers obtained by the imaging of the OCT that uses the posterior ocular segment as the imaging target is input to the learned model trained with the second training data, the learned model outputs the region label image for the retina layers depicted in the image.” [0663], “cause analysis results such as the thickness of a desired layer”). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to have analyzed an abnormality, as taught by Mizobe ([0659]), in Iwase’s invention. One would have been motivated to modify Iwase’s invention, by incorporating Mizobe’s invention, to accurately process topographical images of an eye even when imaging diseased retina layers (Mizobe: [0008]).

Regarding claim 2, Iwase and Mizobe disclose every limitation of claim 1, as outlined above. Additionally, Iwase discloses wherein the trained model is configured to further use, as input, the information regarding the elongation state ([0256], “the three-dimensional data using the tomographic images after the actual shape modification can be input to the learned model.” Note, the shape modification of the topographical image includes information of the ocular axial length, which is the elongation state. See FIG. 16, step S1604 and FIG. 17 with accompanying paragraphs).

Regarding claim 3, Iwase and Mizobe disclose every limitation of claim 1, as outlined above.
Additionally, Iwase discloses wherein the information regarding the elongation state includes one or a combination of two or more of scalar values that each represent any one of an ocular axial length, a visual acuity, eyeball refraction data, or shape of the eyeball ([0239], “a distance pvl from the pivot point P1 to a retina surface Re, corresponding to the ocular axial length of the eye.” FIG. 16, step S1604 describes “perform actual shape modification” and calculating a refraction index, ocular axial length, and shape of the eyeball, see section Actual Shape Modification at [0236]).

Regarding claim 4, Iwase and Mizobe disclose every limitation of claim 1, as outlined above. Additionally, Iwase discloses wherein the data including the information regarding the thickness of the retinal layer includes at least any one selected from the group consisting of an optical coherence tomographic image, a map image in which information indicating the thickness of the retinal layer is projected onto a plane along a fundus of an eye, retinal layer segmentation data, an image of the eyeball photographed by a magnetic resonance imaging (MRI) apparatus, and an image of the eyeball photographed by a computer tomography (CT) apparatus ([0003], “a retinal layer can be observed in three dimensions by using a tomographic image imaging apparatus of eyes, such as an OCT apparatus using the optical coherence tomography (OCT).” [0054], “an optical coherence tomography apparatus (OCT apparatus) that images a subject body.” [0057], “the OCT apparatus includes a tomographic image obtaining unit that obtains a tomographic image of the subject body.” [0262], “the thickness and state of the entire retina, and the shape of the retina can be more accurately grasped with single-time imaging.” FIG. 3B is a tomographic image and FIG. 3C is a fundus image).

Regarding claim 5, Iwase and Mizobe disclose every limitation of claim 1, as outlined above.
Additionally, Mizobe discloses wherein a result obtained through analysis by the analysis unit includes at least any one selected from the group consisting of a map image indicating a degree of abnormality in the thickness of the retinal layer, a true or false value indicating presence or absence of a disease, a scalar value indicating a possibility of having a disease, and thickness data of the retinal layer expected to be obtained when the thickness of the retinal layer is normal ([0664], “An analysis result may be displayed using an analysis map, or using sectors which indicate statistical values corresponding to respective divided regions.” [0355], “specify the disease of a subject, or to observe the degree of the disease.” [0155], “The thickness graph 712 of the retina is a graph that illustrates the thickness of the retina derived from the boundaries 715, 716. Additionally, the thickness map 702 represents the thickness of the retina derived from the boundaries 715, 716 in a color map. Note that, in FIG. 7, although the color information corresponding to the thickness map 702 is not illustrated for description, practically, the thickness map 702 can display the thickness of the retina corresponding to each coordinate in the SLO image 701 according to a corresponding color map”). The same motivation of claim 1 applies equally as well to claim 5.

Regarding claim 6, Iwase and Mizobe disclose every limitation of claim 5, as outlined above.
Additionally, Mizobe discloses wherein the disease includes at least any one selected from the group consisting of glaucoma, posterior staphyloma, retinal detachment, diabetic retinopathy, retinal choroidal atrophy, macular hemorrhage, myopic traction maculopathy, and myopic choroidal neovascularization ([0669], “diagnosis results such as results relating to glaucoma or age-related macular degeneration.” [0174], “peripapillary chorioretinal atrophy.” [0228], “recognize, for example, the thickness of the entire retina due to bleeding, neovascularization, etc., the defect in photoreceptor related to eyesight, or the thinning of the choroid coat due to pathological myopia.” [0663], “the value (distribution) of a parameter relating to a region including at least one abnormal site such as drusen, a neovascular site, leucoma (hard exudates), pseudodrusen or the like may be displayed as an analysis result”). The same motivation of claim 1 applies equally as well to claim 6.

Regarding claim 7, Iwase and Mizobe disclose every limitation of claim 1, as outlined above. Additionally, Iwase discloses further comprising a display control unit configured to perform control for displaying a result of analysis performed by the analysis unit (FIG. 18, display S1806. [0277], “In step S1806, the controlling unit 191 displays, on the display unit 192, tomographic images, motion contrast images, three-dimensional data, and various map images serving as analysis results”).

Regarding claim 8, Iwase and Mizobe disclose every limitation of claim 7, as outlined above. Additionally, Iwase discloses wherein the display control unit is configured to perform control for simultaneously displaying the result of the analysis performed by the analysis unit and the information regarding the thickness of the retinal layer (FIG. 19A depicts simultaneously displaying multiple images, including results of the analysis and a layer thickness map.
[0280], “The analysis map 1915 is a map image indicating the analysis results generated by the analyzing unit 1908. The map image may be, for example, the above-described curvature map, a layer thickness map”).

Regarding claim 9, Iwase and Mizobe disclose every limitation of claim 8, as outlined above. Additionally, Iwase discloses wherein the display control unit is configured to switch a display method for the information regarding the thickness of the retinal layer based on the information regarding the elongation state (FIG. 19A, checkboxes Real Shape 1916 and Real Scale 1917. [0278], “The check box 1916 is the indication of a selecting unit for selecting whether or not to apply the actual shape modification processing for reducing the scanning distortion by the image modifying unit 1907. Additionally, the check box 1917 is the indication of a selecting unit for selecting whether or not to apply the aspect ratio distortion modification processing by the controlling unit 191.” Note, the modifications are based on the elongation state, as described in the section Actual Shape Modification starting in [0236]).

Regarding claim 10, Iwase and Mizobe disclose every limitation of claim 1, as outlined above.
Additionally, Mizobe discloses wherein the analysis unit includes a plurality of the trained models, and is configured to select and use at least one trained model from the plurality of the trained models based on the elongation state ([0241], “when the first processing unit 822 includes a plurality of learned models, the selecting unit 1524 can select a learned model used for the detection processing by the first processing unit 822, based on the imaging conditions and the learned content related to the learned models of the first processing unit 822.” [0659], “the image processing apparatus 20, 80, 2800 or 4400 can automatically select a learned model to be used in the aforementioned processing based on the kind of disease or the abnormal site that was identified using the separately prepared learned model.” [0704] describes a first learned model based on the analysis result such as an analysis map, which would be based on the size of the eyeball, as per the rejection of claim 1). The same motivation of claim 1 applies equally as well to claim 10.

Regarding claim 11, Iwase and Mizobe disclose every limitation of claim 10, as outlined above.
Additionally, Mizobe discloses wherein information used for training the plurality of the trained models includes training information regarding the elongation state of the eyeball, and wherein the analysis unit is configured to acquire, from each of the plurality of the trained models, distribution information regarding the elongation state included in the training information regarding the elongation state of the eyeball, and to select at least one trained model based on the distribution information and the information regarding the elongation state of the eyeball to be analyzed ([0010], “wherein the learned model has been obtained by using training data including data indicating at least one layer of a plurality of layers in a tomographic image of an eye to be examined.” [0128], “The training data for the machine learning model according to the present example includes pairs of one or more input data and ground truth. Specifically, a tomographic image 401 obtained by the OCT is listed as input data, and a boundary image 402 in which the boundaries of the retina layers are specified for the tomographic image is listed as ground truth.” [0241], “The selecting unit 1524 selects image processing performed on a tomographic image, based on the imaging conditions obtained by the obtaining unit 21 and the learned contents (training data) related to the learned model of the first processing unit 822 […] Additionally, when the first processing unit 822 includes a plurality of learned models, the selecting unit 1524 can select a learned model used for the detection processing by the first processing unit 822, based on the imaging conditions and the learned content related to the learned models of the first processing unit 822.” Note, the input tomographic image includes information on the elongation state and [0718] describes the learned models having distinct distribution values for parameters). The same motivation of claim 1 applies equally as well to claim 11. 
Regarding claim 12, Iwase and Mizobe disclose every limitation of claim 1, as outlined above. Additionally, Iwase discloses wherein the analysis unit is configured to correct, based on the information regarding the elongation state, the information regarding the abnormality in the thickness of the retinal layer which has been output from the trained model (FIG. 19A, checkboxes Real Shape 1916 and Real Scale 1917. [0278], “The check box 1916 is the indication of a selecting unit for selecting whether or not to apply the actual shape modification processing for reducing the scanning distortion by the image modifying unit 1907. Additionally, the check box 1917 is the indication of a selecting unit for selecting whether or not to apply the aspect ratio distortion modification processing by the controlling unit 191.” Note, the modifications (e.g., corrections) are based on the elongation state, as described in the section Actual Shape Modification starting in [0236]. [0280] describes the output including a layer thickness map).

Regarding claim 13, the limitations are the same as those in claim 1; however, written in process form instead of machine form. Therefore, the same rationale of claim 1 applies equally as well to claim 13.

Regarding claim 14, Iwase and Mizobe disclose every limitation of claim 13, as outlined above. Additionally, Iwase discloses a non-transitory storage medium having stored thereon a program for causing a computer to execute the information processing method of claim 13 ([0007], “a computer-readable medium storing a program.” [0316], “a computer of the system or the apparatus reads and executes the program”).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to STUART D BENNETT whose telephone number is (571)272-0677. The examiner can normally be reached Monday - Friday from 9:00 AM - 5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn, can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/STUART D BENNETT/
Examiner, Art Unit 2481

Prosecution Timeline

Nov 25, 2024
Application Filed
Feb 06, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574559
ENCODER, A DECODER AND CORRESPONDING METHODS FOR ADAPTIVE LOOP FILTER ADAPTATION PARAMETER SET SIGNALING
2y 5m to grant Granted Mar 10, 2026
Patent 12568300
ELECTRONIC APPARATUS, METHOD FOR CONTROLLING ELECTRONIC APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR GUI CONTROL ON A DISPLAY
2y 5m to grant Granted Mar 03, 2026
Patent 12563191
CROSS-COMPONENT SAMPLE OFFSET
2y 5m to grant Granted Feb 24, 2026
Patent 12542925
METHOD AND DEVICE FOR INTRA-PREDICTION
2y 5m to grant Granted Feb 03, 2026
Patent 12542934
ZERO-DELAY PANORAMIC VIDEO BIT RATE CONTROL METHOD CONSIDERING TEMPORAL DISTORTION PROPAGATION
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
69%
Grant Probability
54%
With Interview (-15.0%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 355 resolved cases by this examiner. Grant probability derived from career allow rate.
