Prosecution Insights
Last updated: April 19, 2026
Application No. 18/108,277

LOGICAL CLUSTER PARTITIONING

Non-Final OA (§101, §103)
Filed: Feb 10, 2023
Examiner: WU, BENJAMIN C
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (456 granted / 522 resolved; +32.4% vs TC avg; above average)
Interview Lift: +16.4% on resolved cases with interview (strong)
Typical Timeline: 3y 0m average prosecution; 29 applications currently pending
Career History: 551 total applications across all art units
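The headline figures above can be sanity-checked from the raw counts. A minimal sketch in Python, assuming the dashboard rounds to whole percentage points and that the "+32.4% vs TC avg" delta is a percentage-point difference (both assumptions, not stated by the dashboard):

```python
# Recompute the examiner's headline stats from the counts shown above.
granted = 456
resolved = 522

career_allow_rate = granted / resolved  # 456/522 ≈ 0.8736

# Reading "+32.4% vs TC avg" as a percentage-point delta (an assumption)
# lets us back out the implied Tech Center average allow rate.
implied_tc_avg = career_allow_rate - 0.324

print(f"Career allow rate: {career_allow_rate:.0%}")  # Career allow rate: 87%
print(f"Implied TC average: {implied_tc_avg:.1%}")    # Implied TC average: 55.0%
```

The 87% shown in the dashboard matches the rounded quotient, which suggests the grant-probability figure is simply the career allow rate.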

Statute-Specific Performance

§101: 19.8% (-20.2% vs TC avg)
§103: 48.4% (+8.4% vs TC avg)
§102: 0.8% (-39.2% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)
Based on career data from 522 resolved cases; Tech Center averages are estimates.
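One pattern worth noting: all four statute rows are consistent with a single Tech Center baseline. A minimal sketch, assuming the "vs TC avg" deltas are percentage-point differences (an assumption; the dashboard does not define them):

```python
# Back out the Tech Center average implied by each statute's rate and delta.
examiner_rate = {"101": 19.8, "103": 48.4, "102": 0.8, "112": 16.1}
delta_vs_tc   = {"101": -20.2, "103": 8.4, "102": -39.2, "112": -23.9}

# implied TC average = examiner rate minus the reported delta
implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}

print(implied_tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

Every statute backs out to the same 40.0% figure, so the "estimate" appears to be one flat baseline rather than a per-statute average.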

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. Claims 1–20 are presented for examination in a non-provisional application filed on 02/10/2023.

Priority

3. Acknowledgment is made of applicant’s claim for foreign priority based on an application filed in INDIA (IN) on Aug. 24, 2022 (202211048215). Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Drawings

4. The drawings were received on 02/10/2023 (in the filings). These drawings are acceptable.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

5. Claims 1–6, 8–13, and 15–19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

6. As to independent claim 1, the claim recites: “cause two or more computer systems to be selected to perform two or more portions of one or more programs in parallel based, at least in part, on the two or more computer systems’ ability to perform the two or more portions at substantially a same performance level.” As to independent claims 8 and 15, they recite similar language of commensurate scope as claim 1. These limitations, as currently drafted and within their respective claims, represent processes that, under a broadest reasonable interpretation, cover performance in the mind (including observation, evaluation, judgment, opinion, etc.) but for the recitation of generic computer components.
That is, other than reciting the use of “a processor comprising one or more circuits” (claim 1) and “one or more processors” (claim 8) to perform these steps, nothing in the claim element precludes the step from practically being performed in the mind or using pencil and paper (see MPEP 2106.04(a)(2) – Examples of Concepts The Courts Have Identified As Abstract Ideas, discussing abstract ideas or concepts relating to organizing or analyzing information in a way that can be performed mentally or is analogous to human mental work). For example, but for the use of generic computers, the performance of these steps in the context of the claims reasonably encompasses the user mentally and/or manually performing the steps of mentally 1) selecting two or more computer systems for performing or executing tasks.

If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application (under Prong Two of Step 2A).

(I) Generic Computing Device

For instance, claim 1 recites the additional element of “a processor comprising one or more circuits” and claim 8 recites the additional element of “one or more processors” that perform these steps. These computer components, functionalities, and/or services are all recited at a high level of generality (i.e., as a generic computing device performing a generic computer function of processing computer instructions and/or outputting data) such that they amount to no more than mere instructions to apply the exception using generic computer components such as processors, basic processor instructions, and/or software components or programs.
Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

7. As to dependent claims 2–6, 9–13, and 16–19, each of these claims either (1) recites additional step(s) that covers performance in the mind; (2) merely restricts or links the process step, information or data to a particular type, technological environment, or field of use; (3) amounts to insignificant extra-solution activity to the judicial exception such as data input and output/transmission; or (4) recites a function which amounts to no more than a recitation of the words “apply it” (or an equivalent) and is no more than mere instructions to implement an abstract idea or other exception on a computer; and thus as a whole is also directed and confined to the same process set forth in claims 1, 8, and 15. Therefore, these claims do not individually or collectively add an inventive concept or additional element(s) amounting to significantly more than the abstract idea itself. These claims are therefore not drawn to eligible subject matter as they are directed to an abstract idea without significantly more.

For instance, dependent claim 2, reciting “cause the two or more computer systems to be selected … comprises identifying one or more logical partitions grouping one or more available computer systems into at least one homogenous grouping based, at least in part, on one or more tags associated with the one or more available computer systems,” merely recites additional step(s) that covers performance in the mind (“identifying”).

Dependent claim 3, reciting “estimate the two or more computer systems' ability to perform the two or more portions at substantially the same performance level based, at least in part, on one or more attributes of one or more other programs,” merely recites additional step(s) that covers performance in the mind.
Dependent claim 4, reciting “cause the two or more computer systems to be selected is further based, at least in part, on calculating a partition score associated with one or more logical partitions, wherein the partition score is based, at least in part, on one or more of a system state or task fitness,” merely recites additional step(s) that covers performance in the mind.

Dependent claim 5, reciting “wherein the two or more computer systems' ability to perform the two or more portions at substantially the same performance is based, at least in part, on one or more of a preferred node topology or a number of nodes,” merely restricts or links the process step, information or data to a particular type, technological environment, or field of use.

Dependent claim 6, reciting “wherein the two or more computer systems' ability to perform the two or more portions at substantially the same performance level is based, at least in part, on one or more requirements of the one or more programs,” merely restricts or links the process step, information or data to a particular type, technological environment, or field of use.

As to dependent claims 9–13 and 16–19, they are the corresponding system claims corresponding to at least one of claims 2–6. Therefore, these claims do not individually or collectively 1) integrate the abstract idea into a practical application, nor do they 2) include additional element(s) amounting to significantly more than the abstract idea itself.

Practical Application Integration

Claims 7, 14, and 20 include element(s) integrating the abstract idea into a practical application.

Examiner’s Remarks

8. Examiner refers to and explicitly cites particular pages, sections, figures, paragraphs or columns and lines in the references as applied to Applicant’s claims to the extent practicable to streamline prosecution.
Although the cited portions of the references are representative of the best teachings in the art and are applied to meet the specific limitations of the claims, other uncited but related teachings of the references may be equally applicable as well. It is respectfully requested that, in preparing responses to the rejections, the Applicant fully consider not only the cited portions of the references, but also the references in their entirety, as potentially teaching, suggesting or rendering obvious all or one or more aspects of the claimed invention.

Abbreviations

9. Where appropriate, the following abbreviations will be used when referencing Applicant’s submissions and specific teachings of the reference(s):
i. figure / figures: Fig. / Figs.
ii. column / columns: Col. / Cols.
iii. page / pages: p. / pp.

References Cited

10. (A) Bahramshahry et al., US 2020/0026579 A1 (“Bahramshahry”).
(B) STRENSKI, US 2022/0326993 A1 (“Strenski”).
(C) Pyla et al., US 2023/0065444 A1 (“Pyla”).

Notice re prior art available under both pre-AIA and AIA

11. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

A.

12. Claims 1, 3–8, 10–15, and 17–20 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Bahramshahry in view of (B) Strenski. See “References Cited” section, above, for full citations of references.

13. Regarding claim 1, (A) Bahramshahry teaches/suggests the invention substantially as claimed, including: “A processor comprising: one or more circuits to cause two or more computer systems to be selected to perform two or more portions of one or more programs in parallel” (Fig. 8 and ¶ 265: Processor 802 represents one or more general-purpose processing devices such as a microprocessor, central processing unit … Processor 802 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC); ¶ 357: parallel computation of sub-parts of a given workload, requires splitting a workload among a set of processing units called workers.
The determination of how to split such a workload and to where the sub-parts are to be distributed is performed by the scheduler; ¶ 365: breakdown or split permissible between the 115,000 different unique tests and the minimum sized executable sub-component of the workload on any given VM; ¶ 660: the distributable workload comprises execution of multiple pre-defined tests against one or more of a browser, a code change submission, an application, or an operating system, wherein each of the pre-defined tests are executable independent of each other).

Bahramshahry does not explicitly teach “two or more computer systems to be selected … based, at least in part, on the two or more computer systems’ ability to perform the two or more portions at substantially a same performance level.”

(B) Strenski, in the context of Bahramshahry’s teachings, however teaches or suggests implementing “two or more computer systems to be selected … based, at least in part, on the two or more computer systems’ ability to perform the two or more portions at substantially a same performance level.” (¶ 29: For example, if a computational job that is to be run on the HPC system involves parallel processing, then the scheduler node 200 selects a group of nodes (from the cluster of nodes), which are having substantially similar performance metrics. The term ‘substantially similar performance’ implies that, in one example, nodes with a variation of around 10% in a performance metric are also considered when an exact match of performance metrics is not achieved. Further, if the computational job involves parallel processing then based on the measured performance metrics, the scheduler node 200 selects the set of nodes that have substantially similar performance, reducing/eliminating selection of nodes with varied performance distribution).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (B) Strenski with those of (A) Bahramshahry to schedule and distribute parallel workloads across cloud/computing resources having similar performance metrics or measurements. The motivation or advantage to do so is to minimize processing delays (bottlenecks) and ensure compliance with QoS (SLT) objectives (see Bahramshahry, ¶ 81: The scheduling service 145 is further enabled to prioritize resource allocation according to need for any given type of workload with the specified QoS of the workload provided in the form of a Service Level Target (SLT). An exemplary SLT may define the 95th percentile expected completion time and resource usage for a given task).

14. Regarding claim 3, Strenski teaches or suggests: “estimate the two or more computer systems’ ability to perform the two or more portions at substantially the same performance level based, at least in part, on one or more attributes of one or more other programs.” (¶ 26: nodes. For the one or more test-computing jobs when executed on each node, one or more performance metrics of a particular node can be determined. The test-computing job may be an application developed to test the performance of the processing element and/or other elements/engines of the nodes. For example, one test-computing job may include one or more sub-applications. The sub-applications may perform operations to replicate real-time simulations to determine performance metrics. For example, a sub-application of the test-computing job may be developed to determine a CPU performance.
In some examples, a time taken for execution of the test-computing job may be measured to determine CPU performance/speed (performance metric); ¶ 27: measured performance metrics are received in response to the one or more test-computing jobs getting executed on each node … the test-computing job may be from a standard benchmark process. In other examples, a customized test-computing job may be created, which is customized to a particular system. Further, to check one or more linking performance metrics related to linkage between nodes, a ping-pong category of test-computing job can be run between the nodes).

15. Regarding claim 4, Bahramshahry and Strenski teach or suggest: “wherein to cause the two or more computer systems to be selected is further based, at least in part, on calculating a partition score associated with one or more logical partitions, wherein the partition score is based, at least in part, on one or more of a system state or task fitness” (Bahramshahry, ¶ 187: scheduler will independently identify any possible compute clouds capable of performing work (e.g., via the compute resource discovery engine); ¶ 375: previously non-selected sub-element 1517 is then placed into cloud A at element 1531, thus permitting all of workload P1 (element 1516) and one sub-part of workload P2 (element 1519) to execute within cloud A 1531 and the other two sub-parts of workload P2 (elements 1518 and 1519) to execute within cloud B 1532; Fig. 35 and ¶ 674: identifies, via a virtual capacity discovery engine, a plurality of virtual resources available to the scheduler in support of executing the workload tasks; the Examiner notes: a virtual resource is a logical grouping or partition of one or more (physical) resources virtualized, managed, and allocated as a single resource; Strenski, ¶ 27: measured performance metrics are received in response to the one or more test-computing jobs getting executed on each node … the test-computing job may be from a standard benchmark process ….; ¶ 28: fetch, decode, and execute the instructions to record in a database the measured performance metrics received from each node. Based on the one or more performance metrics, which provides the current/actual performance metrics, the nodes can be sorted).

16. Regarding claim 5, Bahramshahry and Strenski teach or suggest: “wherein the two or more computer systems’ ability to perform the two or more portions at substantially the same performance is based, at least in part, on one or more of a preferred node topology or a number of nodes” (Bahramshahry, ¶¶ 412–413: resulting calculated values 1715 for the efficiency of distribution are then analyzed, permitting the scheduling service to either select the “optimum” efficiency value (operation 1720) and thus the corresponding distribution quantity within the range of distribution or alternatively permitting the scheduling service to compare deltas between each of the respective distribution quantities and then select the distribution which results in the fewest VMs (operation 1725), depending on the criteria and priorities for the workload.
… the range of distribution (e.g., how many sub-parts a distributable workload may be fragmented into) is configurable by a user or administrator while in other embodiments, the scheduler determines its own range of distribution permissible based on other criteria, such as available compute resources, expected or known overhead associated with each additional workload sub-part, etc.; Strenski, ¶ 29: scheduler node selects a group of nodes having substantially similar linking performance metrics, when a category of request involves one or more of networking, communication, and linking between nodes; ¶ 32: scheduler node gathers information about the cluster of nodes 450, which includes, but not limited to, the number of nodes in the cluster, rated-performance metrics of the nodes, the configuration of the nodes, etc.).

17. Regarding claim 6, Bahramshahry and Strenski teach or suggest: “wherein the two or more computer systems’ ability to perform the two or more portions at substantially the same performance level is based, at least in part, on one or more requirements of the one or more programs” (Bahramshahry, ¶ 81: The scheduling service 145 is further enabled to prioritize resource allocation according to need for any given type of workload with the specified QoS of the workload provided in the form of a Service Level Target (SLT). An exemplary SLT may define the 95th percentile expected completion time and resource usage for a given task; ¶ 382: workload may be broken up, divided, distributed, or split among multiple workers, and if so, how such distribution may occur. For instance, the scheduler may dynamically determine based on the particular workload's SLT and available capacity that horizontally scaling the workload amongst a higher number of workers will achieve the service level target and/or optimize for time and work quality; Strenski, ¶ 42: fourth sub-list 464 may have the nodes 451A-451K sorted based on two performance metrics 474.
If a computational job requires both processing and networking performance, then the fourth sub-list 464 may be selected by the scheduler node. In yet another example, when two performance metrics are considered, each performance metric may be considered in a pre-defined proportion/weightage).

18. Regarding claim 7, Bahramshahry and Strenski teach or suggest: “cause the two or more computer systems to perform the two or more portions of the one or more programs in parallel” (Bahramshahry, ¶ 357: parallel computation of sub-parts of a given workload, requires splitting a workload among a set of processing units called workers. The determination of how to split such a workload and to where the sub-parts are to be distributed is performed by the scheduler; ¶ 391: to produce the workloads or workload sub-parts which are to be ultimately executed; Strenski, ¶ 43: The set of nodes out of the cluster of nodes would be the resources selected for executing the one or more computational jobs).

19. Regarding claims 8 and 10–14, they are the corresponding system claims reciting similar limitations of commensurate scope as the apparatus (processor) of claims 1 and 3–7, respectively. Therefore, they are rejected on the same basis as claims 1 and 3–7 above, including the following rationale: Bahramshahry teaches or suggests “one or more processors” (Fig. 8 and ¶ 265: Processor 802 represents one or more general-purpose processing devices such as a microprocessor, central processing unit … Processor 802 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC)).

20. Regarding claims 15 and 17–20, they are the corresponding method claims reciting similar limitations of commensurate scope as the apparatus of claims 1, 5, 3, 4, and 7, respectively. Therefore, they are rejected on the same basis as claims 1, 5, 3, 4, and 7 above.

B.

21. Claims 2, 9, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Bahramshahry in view of (B) Strenski, as applied to claims 1, 8, and 15 above, and further in view of (C) Pyla.

22. Regarding claim 2, Bahramshahry and Strenski teach or suggest “one or more tags associated with the one or more available computer systems” (Bahramshahry, ¶ 508: virtual capacity discovery engine 2470 utilizes an API or plug-ins which continuously monitor availability of virtual resources from vendors, service providers, or other parties specifying the availability of such virtual resources and in turn continuously updates the local view with the information about availability of such virtual resources which then permits the scheduler to allocate available virtual resources based on the information written into the local view; Strenski, ¶ 26: One or more performance metrics are used to determine the current/actual performance of each node in the cluster of nodes … may include, but not limited to, an actual processing speed, a storage capacity, actual memory availability and read/write speed, a networking speed).

Bahramshahry and Strenski do not teach “identifying one or more logical partitions grouping one or more available computer systems into at least one homogenous grouping based, at least in part, on one or more tags associated with the one or more available computer systems.”

(C) Pyla teaches or suggests “identifying one or more logical partitions grouping one or more available computer systems into at least one homogenous grouping based, at least in part, on one or more tags associated with the one or more available computer systems” (¶ 74: two types of application marketplaces: one accessible by the entire data center and one for each tenant (called "logical partition" in this disclosure) of the data center. The data center administrator might host and grant permissions to specific templates for use in one or more logical partitions.
Logical partition administrators can use either templates permissioned to them by the data center administrator or create their own templates. The latter might be visible and accessible to members of that logical partition alone. Logical partition administrators may create environments from scratch or instantiate from existing environment templates; ¶ 61: Analysis module 285 may define an environment as the set of resources needed to run a particular application workload. AN ENVIRONMENT MAY BE ORGANIZED INTO SETS OF HOMOGENOUS SERVER INSTANCES, each set composed to support a specific sub-function of the application workload; ¶ 73: Composition is the process by which analysis module 285 and/or controller 270 builds a server to the specification defined by the composition profile, within the constraints imposed by the resources available in a pool of resources; ¶ 76: the data center administrator can review physical metadata of hardware, automatically identified during the discovery process, and, optionally, provide additional metadata in the form of key-value pairs to describe those assets. The key-value tags may be assigned and scoped at an individual object level or at an aggregate object type level; ¶ 78: Controller 270 may provide administrators with the capability to partition their resources to multiple tenants. Once resources are allocated to these tenants, each called a Logical Partition …. Controller 270 provides the data center administrator role with the capability to partition server, storage, GPU, and networking resources into Logical Partitions. In some examples, servers (and GPUs) are hard partitioned, which may mean that a single server can be assigned, in whole, to one and only one tenant. Infrastructure definitions and metadata are used as criteria to filter, select, and allocate these servers. Storage resources are allocated from liquid pools. 
Storage capacity and QoS limits are allocated to Logical Partitions enabling tenants to compose disks of arbitrary sizes as long as their aggregate utilization is within those allocations).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (C) Pyla with those of Bahramshahry and Strenski to provide for a logical grouping and composition of virtual resources based on resource availability, specification, and/or performance. The motivation or advantage to do so is to provide and support multi-tenant workload request, service, and execution.

23. Regarding claim 9, it is the corresponding system claim reciting similar limitations of commensurate scope as the apparatus (processor) of claim 2. Therefore, it is rejected on the same basis as claim 2 above.

24. Regarding claim 16, it is the corresponding method claim reciting similar limitations of commensurate scope as the apparatus of claim 2. Therefore, it is rejected on the same basis as claim 2 above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN C WU whose telephone number is (571) 270-5906. The examiner can normally be reached Monday through Friday, 8:30 A.M. to 5:00 P.M.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee J. Li, can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BENJAMIN C WU/
Primary Examiner, Art Unit 2195
February 14, 2026

Prosecution Timeline

Feb 10, 2023
Application Filed
Feb 14, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602258: INSTANTIATING SOFTWARE DEFINED STORAGE NODES ON EDGE INFORMATION HANDLING SYSTEMS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12585508: RECONSTRUCTING AND VERIFYING PROPRIETARY CLOUD BASED ON STATE TRANSITION
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579006: SYSTEMS AND METHODS FOR UNIVERSAL AUTO-SCALING
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12572388: COMPUTING RESOURCE SCHEDULING BASED ON EXPECTED CYCLES
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12566646: Accessing Critical Resource in a Non-Uniform Memory Access (NUMA) System
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 99% (+16.4%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 522 resolved cases by this examiner. Grant probability derived from career allow rate.
