Prosecution Insights
Last updated: April 19, 2026
Application No. 18/351,555

HEURISTIC PERFORMANCE METRIC HINTS

Non-Final OA — §103, §112
Filed: Jul 13, 2023
Examiner: CHU JOY-DAVILA, JORGE A
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: Apple Inc.
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% — above average (314 granted / 408 resolved; +22.0% vs TC avg)
Interview Lift: +37.3% — strong lift among resolved cases with an interview
Avg Prosecution: 3y 1m typical timeline; 41 applications currently pending
Career History: 449 total applications across all art units
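The headline allow-rate figures follow directly from the resolved-case counts shown above; a quick sanity-check sketch (the 55.0% Tech Center average is inferred here from the stated +22.0% delta, not reported directly by the dashboard):

```python
# Reproduce the dashboard's career allow rate from its own counts.
granted, resolved = 314, 408
allow_rate = round(granted / resolved * 100, 1)  # 77.0 (%)

# Tech Center average inferred from the "+22.0% vs TC avg" figure (assumption).
tc_avg = 55.0
delta = allow_rate - tc_avg  # +22.0

print(f"Career allow rate: {allow_rate}% ({delta:+.1f} pts vs TC avg)")
```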

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 55.3% (+15.3% vs TC avg)
§102: 3.2% (-36.8% vs TC avg)
§112: 19.6% (-20.4% vs TC avg)
Deltas are measured against an estimated Tech Center average • Based on career data from 408 resolved cases

Office Action

§103, §112
DETAILED ACTION

Claims 1-27 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 10/15/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The following claim language is unclear: independent claims 1 and 16 recite “a first hardware processing unit is configured to: execute software code, which includes a processing job to be executed; select at least one of the hardware processing units from the plurality of hardware processing units to perform the processing job.” It is unclear from this language whether the first unit executes the job itself or acts as a scheduler that selects other processing units to execute it. For examination purposes, the examiner interprets the first hardware processing unit to act as a scheduler that selects at least one processing unit to perform the processing job.
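For illustration only (not part of the Office Action), the examiner's adopted reading of claims 1 and 16, in which the first hardware processing unit acts as a scheduler rather than executing the job itself, might be sketched as follows; all identifiers and metric values are hypothetical:

```python
# Hypothetical sketch of the examiner's interpretation: the first hardware
# processing unit acts as a scheduler, selecting another unit to run the job
# based on the job's process type and stored per-unit performance metrics.

# Performance metrics keyed by (unit_id, process_type); higher is better.
METRICS = {
    ("gpu0", "matrix_multiply"): 9.5,
    ("cpu0", "matrix_multiply"): 2.1,
    ("cpu0", "io_bound"): 4.0,
}

def select_unit(process_type, units):
    """Scheduler role: pick the unit with the best metric for this job type."""
    scored = [(METRICS.get((u, process_type), 0.0), u) for u in units]
    return max(scored)[1]

unit = select_unit("matrix_multiply", ["cpu0", "gpu0"])
# gpu0 is selected: its matrix_multiply metric (9.5) exceeds cpu0's (2.1)
```

Under the alternative reading the examiner flags (the first unit executing the job itself), no such selection step would occur, which is the ambiguity driving the §112(b) rejection.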
Claims 2-14 and 18-27 depend from claims 1 and 16 and fail to cure the deficiencies set forth above for the independent claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Alt et al. (US 11677681 B1) in view of Wang et al. (US 20230418688 A1).

Regarding claim 1, Alt teaches the invention substantially as claimed, including a device (Col. 5, lines 1-2: Management server 140) comprising: a memory [configuration file] configured to store data indicating performance metrics of a plurality of hardware processing units (Col. 2, lines 7-17: gathering configuration information for the computing system, which include computing resources such as CPUs, GPUs, FPGAs and other accelerators, memory, network cards and interfaces, and storage. Some of the resources may be bare metal, and others may be virtualized/containerized. The configuration information may for example, include the type and number of computing resources and also interconnectivity attributes such as bandwidth and latency for the processors (e.g., CPUs and GPUs), network cards and interfaces, and memory in the computing system.; Col. 2, lines 59-60: The configuration information from the computing system may be gathered from one or more system configuration files e.g., memory; Col.
3, lines 9-13: the method may be implemented in software as a management application that is stored on computer-readable storage medium (e.g., hard drives, solid state drives or “SSDs”) and run on a management server/node in the computing system.); and a first hardware processing unit is configured to: execute software code, which includes a processing job to be executed (Col. 5, lines 30-37: Management server 140 is configured to run a distributed computing management application 170 that receives jobs and manages the allocation of resources from distributed computing system 100 to run them. In some embodiments, management server 140 may be a high-performance computing (HPC) system with many computing nodes, and management application 170 may execute on one or more of these nodes (e.g., master nodes) in the cluster.); select at least one of the hardware processing units from the plurality of hardware processing units to perform the processing job based on a given process type of the processing job and the performance metrics of the hardware processing units for the given process type, wherein the selected at least one hardware processing unit is configured to process the processing job (Col. 8, line 37 through Col. 9, line 48: Turning now to FIG. 8, a flowchart of an example embodiment of a method for allocating computing devices in a computing system is shown. Configuration information about the distributed computing system is gathered (step 800). This may include reading system configuration files to determine the quantity and location of available computing resources in the distributed computing system (e.g., type and number of processes, interconnect types, memory quantity and location, and storage locations). This may also include running test jobs (e.g., micro-benchmarks) that are timed to measure the interconnectivity of the computing resources. 
While the earlier examples above illustrated GPU and CPU interconnectivity, interconnectivity to other types of resources (e.g., memory and storage bandwidth and latency) can also be used in selecting which computing resources are allocated to jobs. As resources are allocated and come online and go offline for various reasons (e.g., maintenance), this system configuration information may be updated… Jobs to be executed in the computing system are received (step 810), and requirements for those jobs are determined (step 820), e.g., the number of processors or amount of memory required. The jobs may include applications run in batch mode (i.e., without user interaction) or interactive mode, and some may be within containers or virtual machines. One or more quality of service (QoS) levels are also determined (step 830) and applied to the jobs… The QoS level for a job may for example be automatically determined based on one or more of the following: (i) a performance characterization of the job, which may include data generated from a test run of the job, (ii) data from prior executions of the job, (iii) performance data from similar jobs, (iv) parameters specified by the user submitting the job, (iv) other job-related attributes such as application type (e.g. Linear Regression, Logistic Regression, Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Random Forest), (v) which libraries or data sets are used by the job, or (vi) user or administrator input… With the job QoS level determined, a selected set of processors from the computing system are allocated and bound to the job (step 840). The set of processors are selected from the set of available processors in the system that meet the QoS level and the job requirements).

While Alt teaches a configuration file storing performance metrics, which, like a memory, stores information for retrieval, Alt does not explicitly teach the configuration file stored in a memory.
However, Wang in a similar field of endeavor teaches balancing of workloads across computing nodes to provide maximum performance with respect to the time to perform workloads and to reduce bottlenecks (See at least [0011], [0022], [0024], [0026-27]). Further, Wang teaches a memory (Fig. 3, Memory 330, Utilization Characteristics 334; [0026] Computing system 300 may include a processing device 310 and memory 330. Memory 330 may include volatile memory devices (e.g., random access memory (RAM)), non-volatile memory devices (e.g., flash memory) and/or other types of memory devices.; [0027] In one example, the processing device 310 may execute a workload scheduler 115 to determine where a new workload is to be allocated. The workload scheduler 115 may include an energy consumption profile component 312, a utilization determination component 314, an energy consumption estimator 316, and a workload placement component 316. The energy consumption profile component 312 may retrieve or otherwise obtain one or more energy consumption profiles for computing systems executing a baseline workload. For examples, the energy consumption profiles may include the energy consumption of a type of computing system and the utilization and performance characteristics of the computing system while executing a benchmark workload. The utilization determination component 314 may query each computing node (e.g., computing node 350) of the computing cluster to determine utilization characteristics 334 of the computing nodes.; [0028]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wang with the teachings of Alt to show utilization characteristics/configuration files stored and retrieved from a memory of the computing system. The modification would have been motivated by the desire of combining known methods of data storage to yield predictable results of retrieval from memory. 
Regarding claim 2, Alt teaches wherein the hardware processing units include at least one of: a central processing unit (CPU); a graphics processing unit (GPU) (Abstract: Computing resources such as CPUs, GPUs, network cards, and memory are allocated to jobs submitted to the system by a scheduler.); a video decoder; a matrix multiplication unit; a neural engine block; an encryption engine; a decryption engine; or a hardware accelerator (Col. 2, lines 9-10: computing resources such as CPUs, GPUs, FPGAs and other accelerators).

Regarding claim 3, Alt teaches wherein the data includes metric descriptor fingerprints and corresponding performance metrics (Col. 2, lines 12-20: The configuration information may for example, include the type and number of computing resources and also interconnectivity attributes such as bandwidth and latency for the processors (e.g., CPUs and GPUs), network cards and interfaces, and memory in the computing system. The configuration information may be stored in a graph (e.g., mesh), with interconnectivity attributes includes (e.g. as costs between graph/mesh nodes).).

Regarding claim 4, Alt teaches wherein one of the metric descriptor fingerprints includes any one or more of the following: a metric name (Col. 2, lines 12-20: The configuration information may for example, include the type and number of computing resources and also interconnectivity attributes such as bandwidth and latency for the processors (e.g., CPUs and GPUs), network cards and interfaces, and memory in the computing system. The configuration information may be stored in a graph (e.g., mesh), with interconnectivity attributes includes (e.g. as costs between graph/mesh nodes).); a processing unit identifier; a process type; a performance domain; a metric creator identifier; and a metric creation timestamp.
Regarding claim 5, the combination teaches wherein one of the performance metrics includes one or more of the following: a processing speed metric; a latency metric (Alt’s Col. 2, lines 12-20: latency); a power consumption metric (Wang’s [0012] The workload scheduler may obtain an energy consumption profile for hardware types included in the computing nodes of the computing platform. The energy consumption profiles may include power consumption of servers (e.g., hardware) with different workloads at different utilization levels. For example, the energy consumption profiles may indicate power consumption on different hardware with different workloads. The workload scheduler may then generate a correlation model between the energy consumption profiles and utilization characteristics of the types of hardware.); a performance metric based on processing speed and latency (Alt’s Col. 7, lines 4-21: Turning now to FIG. 3, an illustration of an example mesh representation of configuration information for an example computing device with multiple GPUs is shown. As noted above, in many computing systems, the interconnections between different computing resources may not be homogeneous. In this and subsequent examples, a thicker line indicates a higher-speed interconnect (e.g., NVLINK 2X), and a thinner line indicates a lower-speed interconnect (e.g. NVLINK 1X). In this example, GPUs 310A-H are connected (e.g., via point-to-point connections or via a switch) by connections with two different bandwidths, e.g., a lower bandwidth connection 330 represented by the thinner lines and a higher bandwidth connection 320 represented by the thicker lines. In this example, GPU 310H is only connected to GPUs 310A-F by lower bandwidth connections 330, while GPUs 310A-F are interconnected by higher bandwidth connections 320.); and a performance metric based on processing speed, latency and power consumption.
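As a purely hypothetical illustration of the composite metrics enumerated in claim 5 (processing speed, latency, power consumption, and combinations of them), such measurements could be weighted into a single score; the formula, weights, and values below are assumptions, not drawn from the claims or the cited references:

```python
# Hypothetical composite metric combining the per-unit measurements that
# claim 5 enumerates: processing speed, latency, and power consumption.
# The scoring formula and default weights are illustrative assumptions only.

def composite_score(speed_ops, latency_ms, power_watts=0.0,
                    w_speed=1.0, w_latency=1.0, w_power=0.0):
    """Higher is better: reward speed, penalize latency and (optionally) power."""
    return (w_speed * speed_ops) / (1.0 + w_latency * latency_ms
                                    + w_power * power_watts)

# Speed+latency metric (one of claim 5's options): power weight left at zero.
fast_unit = composite_score(speed_ops=100.0, latency_ms=2.0)
slow_unit = composite_score(speed_ops=100.0, latency_ms=20.0)
assert fast_unit > slow_unit  # lower latency wins at equal speed
```

Setting `w_power` nonzero yields the speed+latency+power variant that the claim also recites.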
Regarding claim 6, Alt teaches further comprising a second hardware processing unit, wherein: the second hardware processing unit is configured to cause a test process to be processed on at least some of the hardware processing units (Col. 2, lines 57-63: The computing system may comprise a number of nodes, in one or more clusters, both local and remote (e.g., cloud resources). The configuration information from the computing system may gathered from one or more system configuration files, or it may be empirically generated by running test jobs that are instrumented to measure values such as maximum/average bandwidth and latency.); the at least some hardware processing units are configured to process the test process (Col. 3, lines 23-26: For example, the mapper may be configured to run one or more test jobs to measure the available bandwidths between the computing resources and include those in the mesh model.); the second hardware processing unit is configured to perform measurements related to performance of the at least some hardware processing units processing the test process (Col. 3, lines 23-26); and the second hardware processing unit is configured to update the data of the performance metrics responsively to the performed measurements (Col. 8, lines 37-54: Turning now to FIG. 8, a flowchart of an example embodiment of a method for allocating computing devices in a computing system is shown. Configuration information about the distributed computing system is gathered (step 800). This may include reading system configuration files to determine the quantity and location of available computing resources in the distributed computing system (e.g., type and number of processes, interconnect types, memory quantity and location, and storage locations). This may also include running test jobs (e.g., micro-benchmarks) that are timed to measure the interconnectivity of the computing resources. 
While the earlier examples above illustrated GPU and CPU interconnectivity, interconnectivity to other types of resources (e.g., memory and storage bandwidth and latency) can also be used in selecting which computing resources are allocated to jobs. As resources are allocated and come online and go offline for various reasons (e.g., maintenance), this system configuration information may be updated.). Regarding claim 7, Wang teaches further comprising a first die, a second die, and a data communication bus between the first die and second die, wherein the hardware processing units include a second hardware processing unit disposed on the first die and a third hardware processing unit disposed on the second die, the first hardware processing unit being configured to select the second hardware processing unit to perform at least part of the processing job based on the given process type of the processing job, and the performance metrics of the second hardware processing unit and the third hardware processing unit for the given process type (Fig. 2 and 3; [0023-27]; [0029] FIG. 4 is a flow diagram of a method 400 of allocating workloads to computing nodes based on energy efficiency, in accordance with some embodiments. Method 400 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, a processor, a processing device, a central processing unit (CPU), a system-on-chip (SoC), etc.), software (e.g., instructions running/executing on a processing device), firmware (e.g., microcode), or a combination thereof. In some embodiments, at least a portion of method 400 may be performed by workload scheduler 115 of FIGS. 1-3.; [0031] Method 400 begins at block 410, where the processing logic obtains an energy consumption profile for a set of computing nodes. 
For example, the processing logic may retrieve an energy consumption profile for a computing system type and/or hardware type for each of the computing nodes of a cluster of computing nodes. Thus, if a computing cluster includes multiple types of computing systems or different computing hardware, multiple energy consumption profiles may be retrieved (e.g., one for each system or hardware type). In some examples, each energy consumption profile may include energy consumption metrics, computing resource utilization metrics, performance metrics, etc. collected during execution of one or more benchmark workloads on a particular type of system. The energy consumption profiles based on the benchmark workloads may be generated external to the computing cluster. In another example, the energy consumption profile(s) may be generated by running a benchmark workload on one or more of the computing nodes of the cluster.; [0034] At block 440, the processing logic determines placement of the new workload on one or more of the computing nodes in view of the estimated energy consumption for each of the computing nodes and resource requirements of the new workload. For example, the processing logic may allocate the new workload to the computing node that is estimated to use the least amount of energy to execute the new workload. In some examples, the processing logic may place the new workload to balance performance and energy consumption. For example, the processing logic may place the new workload to meet a minimum performance threshold and also minimize energy consumption for the new workload. The processing logic may place the workload in any manner to track and reduce energy consumption for the new workload.). 
Regarding claim 8, Wang teaches further comprising a system-on-chip comprising a first chiplet and a second chiplet, wherein the hardware processing units include a second hardware processing unit disposed on the first chiplet and a third hardware processing unit disposed on the second chiplet, the first hardware processing unit being configured to select the second hardware processing unit to perform at least part of the processing job based on the given process type of the processing job and the performance metric of the second hardware processing unit and the third hardware processing unit for the given process type (Fig. 2 and 3; [0023-27]; [0029] FIG. 4 is a flow diagram of a method 400 of allocating workloads to computing nodes based on energy efficiency, in accordance with some embodiments. Method 400 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, a processor, a processing device, a central processing unit (CPU), a system-on-chip (SoC), etc.), software (e.g., instructions running/executing on a processing device), firmware (e.g., microcode), or a combination thereof. In some embodiments, at least a portion of method 400 may be performed by workload scheduler 115 of FIGS. 1-3.; [0031]). Regarding claim 9, Alt teaches wherein the selected at least one hardware processing unit includes a second hardware processing unit and a third hardware processing unit, the first hardware processing unit being configured to apportion the processing job between the second hardware processing unit and the third hardware processing unit according to a ratio of the performance metrics of the second hardware processing unit to the third hardware processing unit (Col. 
1, lines 29-37: As the time required for a single system or processor to complete many of these tasks would be too great, they are typically divided into many smaller tasks that are distributed to large numbers of processors such as central processing units (CPUs) or graphics processing units (GPUs) that work in parallel to complete them more quickly. Specialized computing systems having large numbers of processors that work in parallel have been designed to aid in completing these tasks more quickly and efficiently.; Col. 3, lines 46-67: the scheduler may select of fractional portions of a computing resource (e.g., half of a GPU), and may oversubscribe resources (e.g., allocate 2 jobs to the same GPU at the same time) and permit two or more jobs to concurrently share a GPU. The scheduler may select resources based on performance feedback collected from the execution of earlier similar jobs to achieve best-fit across multiple jobs awaiting scheduling. In another example, the scheduler may select resources using a multi-dimensional best fit analysis based on one or more of the following: processor interconnect bandwidth, processor interconnect latency, processor-to-memory bandwidth and processor-to-memory latency. The scheduler may also be configured to select computing resources for a job according to a predefined placement affinity (e.g., all-to-all, tile, ring, closest, or scattered). For example, if a closest affinity is selected, the scheduler may select nodes that are closest to a particular resource (e.g., a certain non-volatile memory holding the data to be processed). In tile affinity, assigning jobs to processors in a single node (or leaf or branch in a hierarchical configuration) may be preferred when selecting resources.). 
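Claim 9's apportionment "according to a ratio of the performance metrics" admits a simple proportional split; the sketch below is a hypothetical illustration only, with invented function names, metrics, and work-item counts:

```python
# Hypothetical sketch of claim 9: divide a processing job between two units
# in proportion to their performance metrics (e.g., relative throughput).

def apportion(total_items, metric_a, metric_b):
    """Split total_items between units A and B by the ratio metric_a:metric_b."""
    share_a = round(total_items * metric_a / (metric_a + metric_b))
    return share_a, total_items - share_a

# A unit with a 3x higher metric receives 3/4 of the work.
a_items, b_items = apportion(1000, metric_a=3.0, metric_b=1.0)
# a_items == 750, b_items == 250
```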
Regarding claim 10, Alt teaches wherein the first hardware processing unit is configured to select the at least one hardware processing unit based on a maximum allowed latency of the processing job and at least one latency metric of the performance metrics of the hardware processing units for the given process type (Col. 2, lines 28-33: For example, one QoS level may be a minimum bandwidth required between GPUs allocated to the job, or a maximum power consumption level, maximum power budget, minimum cost, minimum memory bandwidth, or minimum memory quantity or configuration for the job.). Regarding claim 11, Alt teaches wherein the selected at least one hardware processing unit includes a second hardware processing unit and a third hardware processing unit, the first hardware processing unit is configured to apportion the processing job between the second hardware processing unit and the third hardware processing unit of the hardware processing units according to a ratio of processing speed metrics of the performance metrics of the second hardware processing unit to the third hardware processing unit (Col. 1, lines 29-37: As the time required for a single system or processor to complete many of these tasks would be too great, they are typically divided into many smaller tasks that are distributed to large numbers of processors such as central processing units (CPUs) or graphics processing units (GPUs) that work in parallel to complete them more quickly. Specialized computing systems having large numbers of processors that work in parallel have been designed to aid in completing these tasks more quickly and efficiently.; Col. 3, lines 46-67: the scheduler may select of fractional portions of a computing resource (e.g., half of a GPU), and may oversubscribe resources (e.g., allocate 2 jobs to the same GPU at the same time) and permit two or more jobs to concurrently share a GPU. 
The scheduler may select resources based on performance feedback collected from the execution of earlier similar jobs to achieve best-fit across multiple jobs awaiting scheduling. In another example, the scheduler may select resources using a multi-dimensional best fit analysis based on one or more of the following: processor interconnect bandwidth, processor interconnect latency, processor-to-memory bandwidth and processor-to-memory latency. The scheduler may also be configured to select computing resources for a job according to a predefined placement affinity (e.g., all-to-all, tile, ring, closest, or scattered). For example, if a closest affinity is selected, the scheduler may select nodes that are closest to a particular resource (e.g., a certain non-volatile memory holding the data to be processed). In tile affinity, assigning jobs to processors in a single node (or leaf or branch in a hierarchical configuration) may be preferred when selecting resources.). Regarding claim 12, Alt teaches wherein the first hardware processing unit is configured to select the at least one hardware processing unit based on at least one power consumption metric of the performance metrics of the hardware processing units for the given process type (Col. 2 lines 28-33; Col. 9, lines 21-37: As part of determining the QoS level, the job may be profiled to determine what impact different selections of computing resources may have on the job. This may be performed for example by comparing the job with a database of earlier reference jobs that have been already characterized for interconnectivity impact (e.g., based on the type of application or libraries the job uses). For some jobs where there is little cross-resource communication, resource interconnectivity may not have a significant performance impact. These jobs may then be assigned a least cost QoS level as they may be scheduled without concern regarding resource interconnectivity. 
Job metadata may also be used to determine QoS level. For example, a user or administrator may designate a job as “fastest available” or subject to a specified limit for power consumption when they submit a job though the management application's user interface). Regarding claim 13, Alt teaches wherein the first hardware processing unit is configured to select the at least one hardware processing unit based on the at least one power consumption metric responsively to the device being in a power save mode (Col. 3, lines 40-45: The scheduler may be configured to mask/unmask selected resources based on user or administrator input or other system-level information (e.g. avoiding nodes/processors that are unavailable, that are experiencing abnormally high temperatures or that are on network switches that are experiencing congestion).; Col. 8, lines 52-54: As resources are allocated and come online and go offline for various reasons (e.g., maintenance), this system configuration information may be updated.). Regarding claim 14, the combination teaches wherein the first hardware processing unit is configured to run an operating system on which to execute the software code, which includes the processing job to be executed, the operating system being configured to select the at least one hardware processing units to perform the processing job based on the given process type of the processing job and the performance metrics of the hardware processing units for the given process type (Alt’s Col. 5, line 30 through Col. 6, line 10; Management server 140 is configured to run a distributed computing management application 170 that receives jobs and manages the allocation of resources from distributed computing system 100 to run them. In some embodiments, management server 140 may be a high-performance computing (HPC) system with many computing nodes, and management application 170 may execute on one or more of these nodes (e.g., master nodes) in the cluster. 
Management application 170 is preferably implemented in software (e.g., instructions stored on a non-volatile storage medium such as a hard disk, flash drive, or DVD-ROM), but hardware implementations are possible. Software implementations of management application 170 may be written in one or more programming languages or combinations thereof, including low-level or high-level languages, with examples including Java, Ruby, JavaScript, Python, C, C++, C #, or Rust. The program code may execute on the management server 140, partly on management server 140 and partly on other computing devices in distributed computing system 100, Col. 8, line 64 through Col. 9, line 48; Wang’s Fig. 1 shows Host OS running Workload Scheduler 115; [0026-27, 0031, 34]). Regarding claim 15, Wang teaches wherein the operating system is configured to cause the at least one hardware processing unit to process the processing job ([0017] Host OS 120 manages the hardware resources of the computer system and provides functions such as inter-process communication, scheduling, memory management, and so forth.; [0020] In some examples, host system 110A may include a workload scheduler 115 to schedule and allocate computing workloads to computing nodes of the computing cluster (e.g., among host system 110A-B and any additional host systems of the cluster). The workload scheduler may receive a workload and/or an instruction to execute a workload from client device 105 (e.g., a device of a user or customer of the computing platform). The workload scheduler 115 may determine resource requirements of the workload and allocate the workload to a computing node of the computer system 100 for optimal energy efficiency.). Regarding claim 16, it is a method type claim having similar limitations as claim 1 above. Therefore, it is rejected under the same rationale above. Regarding claim 17, it is a system type claim having similar limitations as claim 1 above. 
Therefore, it is rejected under the same rationale above. Further, the additional limitation of an integrated circuit is taught by Alt in Col. 11, lines 52-54: “Such embodiments may be configured to execute via one or more processors, such as multiple processors that are integrated into a single system” and Wang in [0035] “Method 500 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, a processor, a processing device, a central processing unit (CPU), a system-on-chip (SoC), etc.), software (e.g., instructions running/executing on a processing device), firmware (e.g., microcode), or a combination thereof.”

Regarding claim 18, it is a system type claim having similar limitations as claim 2 above. Therefore, it is rejected under the same rationale above.

Regarding claim 19, it is a system type claim having similar limitations as claim 3 above. Therefore, it is rejected under the same rationale above.

Regarding claim 20, it is a system type claim having similar limitations as claim 4 above. Therefore, it is rejected under the same rationale above.

Regarding claim 21, it is a system type claim having similar limitations as claim 5 above. Therefore, it is rejected under the same rationale above.

Regarding claim 22, it is a system type claim having similar limitations as claim 6 above. Therefore, it is rejected under the same rationale above.

Regarding claim 23, it is a system type claim having similar limitations as claim 9 above. Therefore, it is rejected under the same rationale above.

Regarding claim 24, it is a system type claim having similar limitations as claim 10 above. Therefore, it is rejected under the same rationale above.

Regarding claim 25, it is a system type claim having similar limitations as claim 11 above. Therefore, it is rejected under the same rationale above.

Regarding claim 26, it is a system type claim having similar limitations as claim 12 above.
Therefore, it is rejected under the same rationale above. Regarding claim 27, it is a system type claim having similar limitations as claim 13 above. Therefore, it is rejected under the same rationale above. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORGE A CHU JOY-DAVILA whose telephone number is (571)270-0692. The examiner can normally be reached Monday-Friday, 6:00am-5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee J Li can be reached at (571)272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JORGE A CHU JOY-DAVILA/Primary Examiner, Art Unit 2195
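The §112 rejection hinges on whether the first hardware processing unit executes the job itself or merely selects another unit to do so, and the examiner adopts the scheduler reading, consistent with Wang's workload scheduler 115 allocating workloads by resource requirements and energy efficiency. A minimal sketch of that interpretation follows; the class names, fields, and the energy-based selection heuristic are all hypothetical illustrations, not taken from the claims or the cited art:

```python
from dataclasses import dataclass

@dataclass
class ProcessingUnit:
    name: str
    free_capacity: int      # available compute slots (hypothetical metric)
    energy_per_job: float   # hypothetical energy-efficiency figure

def select_unit(units, required_capacity):
    """Scheduler reading of the claim: the first unit does not run the
    job itself; it selects another unit that can fit the job, preferring
    the most energy-efficient candidate (cf. Wang's [0020])."""
    candidates = [u for u in units if u.free_capacity >= required_capacity]
    if not candidates:
        raise RuntimeError("no hardware processing unit can accept the job")
    return min(candidates, key=lambda u: u.energy_per_job)

units = [
    ProcessingUnit("cpu0", free_capacity=4, energy_per_job=2.0),
    ProcessingUnit("gpu0", free_capacity=8, energy_per_job=1.2),
    ProcessingUnit("dsp0", free_capacity=2, energy_per_job=0.5),
]
print(select_unit(units, required_capacity=4).name)  # gpu0: the most efficient unit that fits
```

Under the alternative reading, the first unit would appear in `units` as its own candidate, which is exactly the ambiguity the examiner flags.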

Prosecution Timeline

Jul 13, 2023
Application Filed
Jan 09, 2026
Non-Final Rejection — §103, §112
Apr 12, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602244
OFFLOADING PROCESSING TASKS TO DECOUPLED ACCELERATORS FOR INCREASING PERFORMANCE IN A SYSTEM ON A CHIP
2y 5m to grant • Granted Apr 14, 2026
Patent 12596565
USER ASSIGNED NETWORK INTERFACE QUEUES
2y 5m to grant • Granted Apr 07, 2026
Patent 12591821
DYNAMIC ADJUSTMENT OF WELL PLAN SCHEDULES ON DIFFERENT HIERARCHICAL LEVELS BASED ON SUBSYSTEMS ACHIEVING A DESIRED STATE
2y 5m to grant • Granted Mar 31, 2026
Patent 12585490
MIGRATING VIRTUAL MACHINES WHILE PERFORMING MIDDLEBOX SERVICE OPERATIONS AT A PNIC
2y 5m to grant • Granted Mar 24, 2026
Patent 12579065
LIGHTWEIGHT KERNEL DRIVER FOR VIRTUALIZED STORAGE
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+37.3%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 408 resolved cases by this examiner. Grant probability derived from career allow rate.
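The grant probability above can be reproduced from the examiner's career counts shown on this page (314 granted of 408 resolved); a quick sanity check, assuming the probability is simply granted divided by resolved:

```python
granted, resolved = 314, 408          # examiner's career counts from this page
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")            # 77.0%, matching the stated grant probability
```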
