Prosecution Insights
Last updated: April 19, 2026
Application No. 18/355,282

DATA PROCESSING SYSTEM

Final Rejection (§103)
Filed: Jul 19, 2023
Examiner: METZGER, MICHAEL J
Art Unit: 2183
Tech Center: 2100 — Computer Architecture & Software
Assignee: Arm Limited
OA Round: 4 (Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 8m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 90% (435 granted / 482 resolved; +35.2% vs TC avg; above average)
Interview Lift: +8.1% (moderate; among resolved cases with interview)
Avg Prosecution: 2y 8m typical timeline; 27 applications currently pending
Total Applications: 509 across all art units (career history)
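The headline figures above follow directly from the raw counts in this report; a quick Python check (using only numbers taken verbatim from this page) reproduces them:

```python
# Reproduce the headline examiner statistics from the raw counts above.
granted = 435    # applications granted by this examiner
resolved = 482   # resolved cases (granted + abandoned)
pending = 27     # currently pending
total = 509      # total applications across all art units

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")     # Career allow rate: 90.2%

# The 98% "with interview" figure is the allow rate plus the +8.1% lift.
print(f"With interview: {allow_rate + 8.1:.0f}%")  # With interview: 98%

# Total applications equal resolved cases plus currently pending ones.
assert resolved + pending == total
```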

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 53.6% (+13.6% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§112: 8.7% (-31.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 482 resolved cases
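Each per-statute "vs TC avg" delta implies a Tech Center baseline (the examiner's rate minus the delta). A short Python sketch recovers those baseline estimates from the figures above; as it happens, every statute's delta is consistent with the same 40.0% baseline:

```python
# Recover the implied Tech Center average rates from the examiner's
# per-statute rates and the listed deltas (all values in percent).
examiner_rate = {"§101": 6.0, "§103": 53.6, "§102": 14.1, "§112": 8.7}
delta_vs_tc   = {"§101": -34.0, "§103": 13.6, "§102": -25.9, "§112": -31.3}

tc_average = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_average)   # every statute implies a 40.0% Tech Center baseline
```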

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

1. Applicant's arguments, filed December 16, 2025, with respect to the rejections of the independent claims have been fully considered and are persuasive in light of the claim amendments. Therefore, the rejections have been withdrawn. However, upon further consideration, new grounds of rejection are made in view of Nield (US 2018/0365057).

Specification

2. The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

3. Claims 1-10 and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shippy (US 2014/0173312) in view of King et al. (US 2020/0073467, herein King) and Nield et al. (US 2018/0365057, herein Nield).
Regarding claim 1, Shippy teaches a data processing resource for performing data processing tasks for a host processor, the data processing resource comprising: an iterator unit to process the request and generate a workload comprising one or more tasks for the requested processing job ([0023], [0032], [0036], front end units to retrieve and dispatch tasks); one or more execution units to perform the one or more tasks, wherein the iterator unit is configured to allocate the one or more tasks to the one or more execution units based on control signals from the control circuitry ([0019], [0024-0032], [0037], execution units & pipeline stage to perform task processing), wherein configuration circuitry is further configured to switch an operation mode of at least one execution unit from a normal operation mode to a reduced operation mode, wherein the switch from the normal operation mode to the reduced operation mode comprises configuring the control circuitry to: control the iterator unit to reduce an amount of tasks allocated to the at least one execution unit to prevent the iterator unit from allocating new tasks to the at least one execution unit ([0026], [0028], [0031-0033], [0041], [0044], operate core in low intensity mode by throttling execution and disabling execution resources in low intensity or “little core” mode, stopping input of new decoded instructions into pipeline when reducing power). Shippy fails to teach the processor comprising control circuitry to receive, from the host processor, a request for the data processing resource to perform a requested processing job. 
King teaches a data processing resource comprising control circuitry to receive, from a host processor, a request for a data processing resource to perform a requested processing job ([0022-0023], [0038], workload management module on CPU controls execution on GPU data processors), wherein the control circuitry is further configured to switch an operation mode of at least one execution unit from a normal operation mode to a reduced operation mode ([0024], [0038], GPUs can operate in full or reduced power modes responsive to the workload management module). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Shippy and King to utilize explicit control circuitry in the processing system. While Shippy does not explicitly disclose the use of a control circuit or module, both Shippy and King disclose the use of circuitry in a host CPU to control execution in coprocessors or data processors by causing the data processor to operate in reduced power or low intensity modes. As the use of control circuitry is a routine and conventional aspect of the microprocessor art, the combination would merely entail a simple substitution of known prior art elements to achieve predictable results, and thus would have been obvious to one of ordinary skill in the art. Shippy and King fail to teach wherein the control circuitry is configured to reduce an outstanding task limit associated with the at least one execution unit when the outstanding task limit has been reached.
Nield teaches a data processing resource comprising an iterator unit and one or more execution units ([0044], scheduling unit, [0045], execution units) wherein control circuitry is configured to reduce an outstanding task limit associated with the at least one execution unit to limit a number of tasks that can be allocated to the at least one execution unit when the outstanding task limit has been reached ([0036], maximum number of active scheduled tasks, [0081-0089], [0095], [0102], scheduling tasks according to allowed number of active tasks and de- or re-activation of tasks). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Shippy and King with those of Nield to reduce task allocation through limiting the allocation of tasks. While Shippy does not explicitly disclose that the disabling of front end circuitry in order to reduce outstanding tasks may be done by reducing the tasks allocated by the iterator unit, one of ordinary skill in the art would understand that controlling scheduling circuitry is a routine and conventional aspect of the microprocessor art. As both Shippy and Nield disclose techniques for reducing power consumption through reducing a number of outstanding tasks being executed, utilizing a scheduling unit to control task allocation as taught by Nield would merely entail a simple substitution of known prior art elements to achieve predictable results.

Regarding claim 2, the combination of Shippy, King, and Nield teaches the data processing resource of claim 1, wherein the control circuitry is configured to switch the operation mode of the at least one execution unit from the normal operation mode to the reduced operation mode upon receiving a stop notification to prepare the at least one execution unit for a stop (Shippy [0041], King [0022], use reduced power mode then power down processing unit).
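As a reader's aid, the claim 1 mechanism that the Shippy/King/Nield combination is mapped against can be sketched abstractly. This is an illustrative Python model only; the class and method names are mine and do not come from the claims or any cited reference:

```python
from dataclasses import dataclass

@dataclass
class ExecutionUnit:
    """One execution unit with an outstanding-task limit (illustrative)."""
    task_limit: int        # max tasks the iterator may leave outstanding here
    outstanding: int = 0   # tasks currently allocated but not yet completed
    reduced_mode: bool = False

class Iterator:
    """Iterator unit: allocates tasks, honoring each unit's task limit."""
    def allocate(self, unit: ExecutionUnit) -> bool:
        if unit.outstanding >= unit.task_limit:
            return False   # limit reached: no new tasks for this unit
        unit.outstanding += 1
        return True

def switch_to_reduced_mode(unit: ExecutionUnit) -> None:
    """Control circuitry: enter the reduced operation mode by lowering the
    outstanding-task limit once it has been reached, so the iterator stops
    allocating new tasks to the unit."""
    unit.reduced_mode = True
    if unit.outstanding >= unit.task_limit:
        unit.task_limit = max(0, unit.task_limit - 1)
```

Lowering `task_limit` once it is saturated throttles the unit: the iterator simply stops handing it new work, which is the allocation-reduction behavior the rejection attributes to Shippy's throttling and Nield's active-task limit.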
Regarding claim 3, the combination of Shippy, King, and Nield teaches the data processing resource of claim 2, wherein the control circuitry is configured to complete the stop on the at least one execution unit to stop an operation of the at least one execution unit upon the at least one execution unit completing all allocated tasks (Shippy [0041], King [0022], power down processing unit when execution pipeline drains).

Regarding claim 4, the combination of Shippy, King, and Nield teaches the data processing resource of claim 3, wherein the control circuitry is configured to power down the at least one execution unit upon completing the stop (Shippy [0041], King [0022], power down the reduced power mode processing unit).

Regarding claim 5, the combination of Shippy, King, and Nield teaches the data processing resource of claim 2, wherein the stop notification comprises an indication of the at least one execution unit for the stop (King [0029], Shippy [0031], [0033], [0047], reducing power to individual redundant elements of CPU core or GPU).

Regarding claim 6, the combination of Shippy, King, and Nield teaches the data processing resource of claim 1, wherein the control circuitry is configured to collect utilization data from the one or more execution units and provide the utilization data of the one or more execution units (King [0027-0031], collect runtime information containing previous execution history & information).

Regarding claim 7, the combination of Shippy, King, and Nield teaches the data processing resource of claim 1, wherein upon receiving a power-down notification for a possible power down of the at least one execution unit, the control circuitry is configured to select the at least one execution unit switched to the reduced operation mode for power down (Shippy [0031], [0041], [0047], King [0022], power down the reduced power mode processing unit or individual elements of processor core).
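Claims 2-4, as mapped above, describe a drain-then-stop sequence: enter the reduced mode on a stop notification, complete the stop once all allocated tasks finish, then power down. A hypothetical sketch of that state sequence (names and structure are mine, for illustration):

```python
from enum import Enum

class Mode(Enum):
    NORMAL = "normal"
    REDUCED = "reduced"            # preparing for a stop; no new tasks
    STOPPED = "stopped"            # all allocated tasks have completed
    POWERED_DOWN = "powered_down"

class Unit:
    """Illustrative sketch of the claim 2-4 stop sequence."""
    def __init__(self, outstanding: int):
        self.mode = Mode.NORMAL
        self.outstanding = outstanding  # tasks already allocated to this unit

    def on_stop_notification(self) -> None:
        # Claim 2: switch to the reduced operation mode to prepare for a stop.
        self.mode = Mode.REDUCED

    def on_task_complete(self) -> None:
        self.outstanding -= 1
        # Claim 3: complete the stop once every allocated task has finished.
        if self.mode is Mode.REDUCED and self.outstanding == 0:
            self.mode = Mode.STOPPED

    def power_down(self) -> None:
        # Claim 4: power down the unit only after the stop has completed.
        if self.mode is Mode.STOPPED:
            self.mode = Mode.POWERED_DOWN
```

A unit with two outstanding tasks reaches `STOPPED` only after both complete, matching the "pipeline drains" reading of Shippy [0041] in the rejection.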
Regarding claim 8, the combination of Shippy, King, and Nield teaches the data processing resource of claim 1, wherein upon receiving a power-down notification for a possible power down of the at least one execution unit, the control circuitry is configured to monitor the at least one execution unit and override the power-down notification when the at least one execution unit meets one or more override criteria (Shippy [0027], King [0039], increase power of cores when necessary for workload processing).

Regarding claim 9, the combination of Shippy, King, and Nield teaches the data processing resource of claim 8, wherein the one or more override criteria comprise an expected increase in utilization, an actual increase in utilization, the at least one execution unit being reserved for a dedicated purpose, or a combination thereof (King [0035], [0038-0039], [0046], [0048-0049], increasing power to processing unit based on expected processing requirements).

Regarding claim 10, the combination of Shippy, King, and Nield teaches the data processing resource of claim 8, wherein the control circuitry is configured to power down the at least one execution unit switched to the reduced operation mode when the at least one execution unit does not meet the one or more override criteria (Shippy [0026-0027], King [0039], power down unnecessary cores according to expected workload requirements & current utilization).

Regarding claim 12, the combination of Shippy, King, and Nield teaches the data processing resource of claim 1, wherein the control circuitry is configured to control the iterator unit to reduce the amount of tasks allocated to the at least one execution unit by controlling the iterator unit to stop allocating tasks to the at least one execution unit (Shippy [0032], [0041], reduce task allocation by disabling redundant front end resources that perform dispatch).
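Claims 8-10 turn on a simple predicate: a pending power-down is overridden when any of the recited criteria holds, and the unit is powered down otherwise. A minimal sketch (the function and parameter names are mine, purely for illustration):

```python
def should_override_power_down(expected_util_increase: bool,
                               actual_util_increase: bool,
                               reserved_for_dedicated_purpose: bool) -> bool:
    """Claims 8-10 (illustrative): override the power-down notification when
    any override criterion is met; otherwise the unit in the reduced
    operation mode may be powered down."""
    return (expected_util_increase
            or actual_util_increase
            or reserved_for_dedicated_purpose)

# An idle, unreserved unit may be powered down; a reserved one is kept up.
print(should_override_power_down(False, False, False))  # False -> power down
print(should_override_power_down(False, False, True))   # True -> keep powered
```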
Regarding claim 13, the combination of Shippy, King, and Nield teaches the data processing resource of claim 1, wherein the control circuitry is configured to control the iterator unit to reduce the amount of tasks allocated to the at least one execution unit by reducing a size of a task to be allocated to the execution unit (Shippy [0040-0041], reducing size of resources to have power reduced).

Regarding claim 14, the combination of Shippy, King, and Nield teaches the data processing resource of claim 1, wherein the control circuitry is configured to control the iterator unit to reduce the amount of tasks allocated to the at least one execution unit by controlling the iterator unit to reduce a complexity of the one or more tasks to reduce an amount of processing required to perform the one or more tasks (Shippy [0040-0041], functional throttling of reduced power cores).

Claim 15 refers to a system embodiment of the data processing resource embodiment of claim 1 above, further comprising one or more input/output interfaces (Shippy [0030], I/O interfaces).

Claim 16 refers to a system embodiment of the data processing resource embodiment of claim 1 above, further comprising a system controller to manage power consumption of the data processing resource (Shippy [0021], CPU power management hardware).
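The system controller of claim 16 monitors utilization and, per claim 17 below, selects units whose utilization is declining for the reduced operation mode. One hypothetical selection policy, sketched in Python (the data shape and names are mine, not from the claims or references):

```python
def select_for_reduced_mode(history: dict[str, list[float]]) -> list[str]:
    """System controller (illustrative): pick the units whose utilization
    samples are strictly declining over the monitoring window."""
    return [uid for uid, samples in history.items()
            if len(samples) >= 2
            and all(b < a for a, b in zip(samples, samples[1:]))]

history = {
    "unit0": [0.9, 0.8, 0.6],   # declining -> candidate for reduced mode
    "unit1": [0.4, 0.5, 0.7],   # rising -> keep in normal operation mode
}
print(select_for_reduced_mode(history))  # ['unit0']
```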
Regarding claim 17, the combination of Shippy, King, and Nield teaches the data processing system of claim 16, wherein the control circuitry is configured to collect utilization data from the one or more execution units and provide the utilization data to the system controller (King [0027-0031], collect runtime information containing previous execution history & information); and the system controller is configured to monitor the utilization data of the one or more execution units and select the at least one execution unit to be switched to the reduced operation mode when the utilization of the at least one execution unit is declining (Shippy [0031], [0041], [0047], King [0022], power down the reduced power mode processing unit or individual elements of processor core according to execution workload requirements).

Claims 18, 19, and 20 refer to method embodiments of the resource embodiments of claims 1, 2, and the combination of claims 3 & 4, respectively. Therefore, the above rejections for claims 1-4 are applicable to claims 18-20.

Regarding claim 21, the combination of Shippy, King, and Nield teaches the data processing resource of claim 1, wherein: the outstanding task limit comprises an outstanding fragment task limit for a fragment endpoint or the outstanding task limit comprises an outstanding compute task limit for a compute endpoint (Nield [0036], [0089], maximum number of scheduled tasks that may be included in a pipeline running list).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Moloney (US 2015/0046678) discloses a processor for scheduling tasks according to a supported limit. Niggemeier (US 2010/0228955) discloses a processor with a scheduling unit to manage a workload maximum threshold.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL J METZGER, whose telephone number is (571) 272-3105. The examiner can normally be reached Monday-Friday, 8:30-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jyoti Mehta, can be reached at 571-270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL J METZGER/
Primary Examiner, Art Unit 2183

Prosecution Timeline

Jul 19, 2023: Application Filed
Jan 08, 2025: Non-Final Rejection — §103
Apr 10, 2025: Response Filed
Jun 09, 2025: Final Rejection — §103
Sep 05, 2025: Request for Continued Examination
Sep 19, 2025: Response after Non-Final Action
Sep 19, 2025: Non-Final Rejection — §103
Nov 18, 2025: Interview Requested
Dec 01, 2025: Applicant Interview (Telephonic)
Dec 01, 2025: Examiner Interview Summary
Dec 16, 2025: Response Filed
Feb 19, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591517: FETCHING VECTOR DATA ELEMENTS WITH PADDING (granted Mar 31, 2026; 2y 5m to grant)
Patent 12578965: Biased Indirect Control Transfer Prediction (granted Mar 17, 2026; 2y 5m to grant)
Patent 12566610: MICROPROCESSOR WITH APPARATUS AND METHOD FOR REPLAYING LOAD INSTRUCTIONS (granted Mar 03, 2026; 2y 5m to grant)
Patent 12566607: ROBUST, EFFICIENT MULTIPROCESSOR-COPROCESSOR INTERFACE (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561139: ENCODING AND DECODING VARIABLE LENGTH INSTRUCTIONS (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 90%
With Interview: 98% (+8.1%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 482 resolved cases by this examiner. Grant probability derived from career allow rate.
