Prosecution Insights
Last updated: April 19, 2026
Application No. 18/517,597

DATA PROCESSING APPARATUS AND DATA PROCESSING METHOD

Status: Non-Final OA (§103)
Filed: Nov 22, 2023
Examiner: SETH, MANAV
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Canon Kabushiki Kaisha
OA Round: 1 (Non-Final)
Grant Probability: 91% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 91% (above average; 716 granted / 789 resolved; +28.7% vs TC avg)
Interview Lift: +7.8% (moderate), based on resolved cases with interview
Avg Prosecution: 2y 11m typical timeline; 13 currently pending
Total Applications: 802 across all art units

Statute-Specific Performance

§101: 19.5% (-20.5% vs TC avg)
§103: 29.0% (-11.0% vs TC avg)
§102: 21.5% (-18.5% vs TC avg)
§112: 15.0% (-25.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 789 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

1. The information disclosure statements (IDS) submitted on 11/05/2024 and 11/22/2023 have been considered by the examiner.

Claim Interpretation

2. The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

3. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

4. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “storage unit”, “selection unit”, “holding unit”, “transfer unit”, “execution unit”, “designation unit”, and “control unit” in claims 1-8, 10 and 12.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

7. Claims 1 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Gutmann, U.S. Patent Publication No. 2019/0294999 A1, and further in view of Kimura et al., U.S. Patent Publication No. 2020/0372332. Claims 1 and 12 recite similar subject matter, and therefore the citations applied to one apply to the other for rejection purposes.

Regarding claim 12, claim 12 recites: “A data processing method for a data processing apparatus including a storage unit configured to store a plurality of types of parameter groups to be used in a plurality of types of recognition tasks and a holding unit configured to hold parameter groups, the data processing method comprising: selecting two or more recognition tasks to be executed from among the plurality of types of recognition tasks; transferring parameter groups to be used in the two or more recognition tasks in sequence from the storage unit to the holding unit; and executing the two or more recognition tasks in sequence using the parameter groups held in the holding unit.”
Gutmann discloses in para. [0201]: “the first inference model obtained by Step 1110 may be configured to detect a first type of items in image data captured from an environment, and the second inference model obtained by Step 1110 may be configured to determine properties of the detected items by analyzing the image data. For example, the first type of items may include faces, the first inference model may include a face detector, and the second inference model may be configured to estimate properties of the detected faces, such as age, gender, hair color, and so forth. In another example, the first type of items may include objects of selected types, the first inference model may include an object detector, and the second inference model may be configured to estimate properties of the detected objects, such as size, volume, color, and so forth”.

Gutmann further discloses in para. [0178]: “the inference models may be selected of a plurality of alternative inference models based on the first version of the set of training examples. In some examples, the inference models and/or information related to the inference models may be read from memory (for example from memory unit 210, shared memory module 410, etc.), read from a blockchain, received through a communication network (such as communication network 130) using a communication device (such as communication module 230), received from an external device (such as mobile phone 111, tablet 112, PC 113, remote storage 140, NAS 150, server 300, cloud platform 400, etc.), generated (for example by training one or more machine learning algorithms using one or more hyper-parameters and/or the first version of the set of training examples), selected (for example as described above), and so forth”.

Gutmann further discloses in para. [0175]: “Component 1006 may determine to which inference models to provide the input (or selected portions of the input), for example as illustrated in FIG. 10F. In some examples, the determination to which inference models to provide the input (or selected portions of the input) may be based on a type of the input (for example as described above), may be based on a time associated with the input (for example as described above), may be based on a geographical area associated with the input (for example as described above), may be based on a contextual situation associated with the input (for example as described above), may be based on an entity associated with the input (for example as described above), and so forth. In some examples, Component 1006 may analyze the input (or selected parts of the input) to determine to which inference models to provide the input (or selected portions of the input), and/or to select which portions of the input to provide to which inference model. For example, a machine learning algorithm may be trained using training examples to determine which inference models should be provided with which portions of an input based on the content of the input and/or information associated with the input (such as type of the input, time associated with the input, geographical area associated with the input, contextual situation associated with the input, entity associated with the input, etc.), and the trained machine learning algorithm may be used to determine to which portion of the input to provide to which inference models based on the input and/or information associated with the input.”

Here, selecting inference models to perform tasks corresponds to selecting two or more recognition tasks to be executed from among the plurality of types of recognition tasks, such as face detection or object detection followed by estimating properties of the detected faces or objects, and the selecting can be considered as done by a selection unit. The different inference models are considered as reciting the plurality of types of parameter groups to be used in a plurality of types of recognition tasks, which, as cited in para. [0178], are stored in and read from memory (a storage unit).

Gutmann further discloses executing two or more recognition tasks in sequence using the inference models. Para. [0174] teaches processing units to execute the tasks: “In some embodiments, Component 1006 and/or Component 1008 may be performed by various aspects of apparatus 200, server 300, cloud platform 400, computational node 500, and so forth. For example, processing units 220 may execute software instructions stored within memory units 210 and/or within shared memory modules 410, and the software instructions may be configured to cause processing units 220 to perform the function of Component 1006 and/or Component 1008. In another example, Component 1006 and/or Component 1008 may be implemented by a dedicated hardware. In yet another example, computer readable medium (such as a non-transitory computer readable medium) may store data and/or computer implementable instructions for carrying out the functions of Component 1006 and/or Component 1008”. Further see para. [0201], quoted above, where the first inference model detects items and the second inference model then estimates properties of the detected items, i.e., the tasks are executed in sequence; see Figure 10C.
Gutmann as cited teaches storing and reading a plurality of types of parameter groups (inference model parameters) from the memory (storage unit), and executing the tasks using the models via processing units. Gutmann does not explicitly teach a holding unit and a transfer unit.

Examiner here asserts, with respect to “storage unit”, “holding unit”, “transfer unit” and “execution unit”, that these components/units are inherent components of a computer system, where the “storage unit” is/can be interpreted as a hard disk/ROM (read-only memory); the “holding unit” is/can be interpreted as RAM (random-access memory)/buffer; the “transfer unit” is/can be interpreted as a DMAC (Direct Memory Access Controller); and the “execution unit” is/can be interpreted as a processor/CPU. As is well known in the art, during operation of a computer process, the program(s) and program parameter(s) are saved on the hard disk/ROM, are pulled into the RAM (a working memory) by the DMAC, and the CPU works on the data pulled into the RAM; the same well-known process is being claimed in the instant claim. Examiner further cites Kimura to provide evidentiary teachings of this well-known process.

Kimura discloses a “storage unit” as a ROM in para. [0039]: “The ROM 1608 stores computer programs and data for causing the CPU 1607 to execute or control processes described later as those performed by the image processing system, data sets for operation of the recognition processing unit 1601, and the like. Such data sets include parameters defining a CNN used by the recognition processing unit 1601 to detect a specific object from the input image”; and para. [0047]: “the CNN included in a data set stored in the ROM”.

Kimura further discloses a “holding unit” as a RAM in para. [0042]: “The RAM 1609 has areas for storing captured images captured by the image input unit 1600, computer programs and data loaded from the ROM 1608, data transferred from the recognition processing unit 1601, and the like. Further, the RAM 1609 has a work area used when the CPU 1607 executes various processes. In this manner, the RAM 1609 can appropriately provide various areas”.

Kimura further discloses a “transfer unit” as a DMAC in para. [0035]: “A DMAC (Direct Memory Access Controller) 1602 functions as a data transfer unit that controls data transfer between each processing unit on the image bus 1603 and each processing unit on the CPU bus 1604. An image input unit 1600, a recognition processing unit 1601, a pre-processing unit 1606, and a DMAC 1602 are connected to the image bus 1603, and a CPU 1607, a ROM 1608, a RAM 1609 is connected to the CPU bus 1604”; para. [0041]: “The data set stored in the ROM 1608 is transferred to the recognition processing unit 1601 by the DMAC 1602”; and para. [0044]: “A holding unit 104 stores the control parameters transferred from the ROM 1608 by the DMAC 1602”.

Kimura further discloses an “execution unit” as a CPU in para. [0038]: “The CPU 1607 executes various processes using computer programs and data stored in a ROM (Read Only Memory) 1608 and a RAM (Random Access Memory) 1609. As a result, the CPU 1607 controls the operation of the entire image processing system, and executes or controls the processes described later as being performed by the image processing system”.

Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to use a similar holding unit such as RAM and a transfer unit such as DMAC, as taught by Kimura, in the invention of Gutmann.
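The conventional parameter-staging flow the examiner maps onto claim 12 (parameter groups read from a storage unit, staged into a holding unit by a transfer unit, then consumed one task at a time by an execution unit) can be sketched as follows. This is an illustrative sketch only; every name and parameter value below (TASK_PARAMS, run_selected_tasks, the toy weights) is hypothetical and appears in neither Gutmann nor Kimura.

```python
# "Storage unit" (ROM/disk analogue): parameter groups for a plurality of
# recognition task types. Values are toy placeholders.
TASK_PARAMS = {
    "face_detection": {"weights": [0.1, 0.2], "threshold": 0.8},
    "object_detection": {"weights": [0.3, 0.4], "threshold": 0.6},
    "attribute_estimation": {"weights": [0.5, 0.6], "threshold": 0.7},
}

def transfer(task_name, storage, holding_unit):
    """'Transfer unit' (DMAC analogue): copy one task's parameter group
    from storage into the holding unit (RAM/buffer analogue)."""
    holding_unit.clear()
    holding_unit.update(storage[task_name])

def execute(task_name, holding_unit):
    """'Execution unit' (CPU analogue): run a task using only the
    parameters currently staged in the holding unit."""
    return f"{task_name} ran with threshold {holding_unit['threshold']}"

def run_selected_tasks(selected):
    """Select two or more tasks, then transfer and execute in sequence,
    re-using one holding unit for each successive parameter group."""
    holding_unit = {}
    results = []
    for task in selected:  # sequential: one parameter group at a time
        transfer(task, TASK_PARAMS, holding_unit)
        results.append(execute(task, holding_unit))
    return results

print(run_selected_tasks(["face_detection", "attribute_estimation"]))
```

The point of the sketch is the one the examiner makes: nothing in this loop goes beyond the ordinary ROM-to-RAM staging that a DMAC performs in a conventional computer.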
A person having ordinary skill in the art would have been motivated before the effective filing date of the claimed invention to use a similar holding unit such as RAM and a transfer unit such as DMAC, as taught by Kimura, in the invention of Gutmann, in order to achieve faster data access and processing, which is crucial for applications that require quick response times, and to manage memory more effectively, allowing for better allocation and utilization.

Regarding claim 1, claim 1 has been similarly analyzed and rejected per the citations made in the rejection of claim 12.

8. Claims 2-11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Manav Seth, whose telephone number is (571) 272-7456. The examiner can normally be reached Monday to Friday from 8:30 am to 5:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Manav Seth/
Primary Examiner, Art Unit 2672
February 2, 2026

Prosecution Timeline

Nov 22, 2023: Application Filed
Feb 02, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597243: INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM (Granted Apr 07, 2026; 2y 5m to grant)
Patent 12579633: PERIODIC-PATTERN BACKGROUND REMOVAL (Granted Mar 17, 2026; 2y 5m to grant)
Patent 12567269: METHOD OF TRAINING IMAGE CAPTIONING MODEL AND COMPUTER-READABLE RECORDING MEDIUM (Granted Mar 03, 2026; 2y 5m to grant)
Patent 12561969: Object Re-Identification Apparatus and Method Thereof (Granted Feb 24, 2026; 2y 5m to grant)
Patent 12555368: Method for Temporal Correction of Multimodal Data (Granted Feb 17, 2026; 2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 91%
With Interview: 98% (+7.8%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 789 resolved cases by this examiner. Grant probability derived from career allow rate.
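The headline figures can be reproduced from the raw counts reported above (716 granted of 789 resolved). The sketch below assumes the +7.8% interview lift is additive in percentage points, which is consistent with the reported 98% interview-adjusted figure.

```python
# Sanity check of the reported figures: 716 granted of 789 resolved.
granted, resolved = 716, 789

allow_rate = 100 * granted / resolved
print(f"career allow rate: {allow_rate:.1f}%")       # ~90.7%, reported as 91%

# Assumption: the +7.8% interview lift adds percentage points to the base rate.
print(f"with interview: {allow_rate + 7.8:.1f}%")    # ~98.5%, reported as 98%
```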
