Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Regarding the independent claims, the limitations identifying a pattern, predicting a task request, calculating resource requirements, and assigning resources to a task request, as drafted, recite functions that, under their broadest reasonable interpretation, cover functions that could reasonably be performed in the mind, including with the aid of pen and paper, but for the recitation of generic computer components. That is, the limitations cited above, as drafted, recite the abstract idea of a mental process.
Thus, these limitations fall within the “Mental Processes” grouping of abstract ideas under Prong One.
Under Prong Two, this judicial exception is not integrated into a practical application. The claim recites the following additional limitations: a memory, a processor, and resources. These additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer and/or generic computer components (see MPEP 2106.05(f)), and the steps of deploying resources for said task do nothing more than add insignificant extra-solution activity to the judicial exception. Accordingly, the additional elements do not integrate the recited judicial exception into a practical application, and the claim is therefore directed to the judicial exception. See MPEP 2106.05(g).
Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a memory, a processor, and resources amount to no more than mere instructions, or generic computer/computer components, to carry out the exception. Furthermore, as to the limitations directed to acquiring data and receiving requests, the courts have identified mere data gathering as well-understood, routine, and conventional activity. See MPEP 2106.05(d).
The recitation of generic computer instructions and computer components to apply the judicial exception, together with mere data gathering, does not amount to significantly more and thus cannot provide an inventive concept. Accordingly, the claims are not patent eligible under 35 U.S.C. 101.
Regarding claims 2, 3, 4, 6, 9, 10, 11, 13, 16, 17, and 19, the limitations calculating a requirement, detecting a schedule, scheduling a request, forecasting resources, and prioritizing a request are functions that can reasonably be performed in the human mind, and thus are additional mental processes recited in the claims. The claims do not include any additional elements; thus, there is no limitation that needs to be analyzed under Prong Two for a practical application, or under Step 2B for significantly more.
Regarding claims 5, 7, 12, 14, 18, and 20, the limitations of scaling resources and balancing load are nothing more than insignificant extra-solution activity, which is not a practical application under Prong Two. Under Step 2B, the courts have identified the generic function of gathering/storing data, the results of the judicial exception, as well-understood, routine, and conventional activity. See MPEP 2106.05(d) - i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 8-13, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Oconnell (Pub. No. US 2023/0221986).
Regarding claims 1, 8, and 15, Oconnell teaches “a system, said system comprising: a memory; and a processor in communication with said memory ([0137] As shown in FIG. 6, computer system 12 in computing node 10 is shown in the form of a computing device. The components of computer system 12 may include, but are not limited to, one or more processor 16, a system memory 28, and a bus 18 that couples various system components including system memory 28 to processor 16.), said processor being configured to perform operations, said operations comprising: acquiring historical data; identifying a pattern in said historical data ([0041-0042] A training dataset can comprise a set of data associated to a transaction having been completed by CPU 1103 with use of working memory 1101. … Trained as described, predictive model 3002 is able to learn a relationship, e.g., between transaction identifiers and memory utilization, as well as the relationship between timing parameter values specifying time of day and the occurrence of a transaction, as well as the relationship between preceding request data and the occurrence of a certain transaction. Trained as described, predictive model 3002 is also able to learn the relationship between concurrent transactions and the impact of such concurrent transactions on memory utilization. For example, memory utilization attributes can be different when there are multiple concurrent transactions rather than a single transaction. Examiner notes that Oconnell may not explicitly teach the term “pattern”; however, because associations are determined (for example, utilization attributes depend upon single versus multiple concurrent transactions), it would have been obvious to one of ordinary skill in the art that Oconnell’s determined attributes are patterns.); predicting a predicted task request based on said historical data and said pattern, wherein said predicted task request anticipates a task request ([0108] Referring again to the flowchart of FIG. 
4, additional operations of memory manager 122 are described. Referring to FIG. 4, memory manager 122 can examine historical data that specifies to memory manager 122 what transactions/requests are about to arrive in the next period of time (for example, the next 5 minutes). The examination of a historical model can include training and querying of predictive models 3002 and 3004 as set forth herein. The historical data can be iteratively logged as memory manager 122 learns the various transactions that flow through and get executed in computing environment 100. This type of historical data is relatively easy to obtain, e.g., through various techniques such as parsing web server logs, Java EE server logs, monitoring agents that run inside Java EE servers, and the like. Transaction priorities may also be included in the historical data repository or read by memory manager 122 through other processes.); calculating predicted resource requirements for said predicted task request; allocating resources for said predicted task request; receiving said task request; assigning said allocated resources to said task request; and deploying said allocated resources for said task request ([0073] Referring to the scenario depicted in FIG. 5A, memory manager 122 can predict that future transaction A, which has not yet been invoked at computing node 10, can have a memory utilization requirement of 10 megabytes and can determine with reference to the scenario in FIG. 5A that the predicted free space of working memory 1101 greatly exceeds 10 MB. Therefore, in the scenario depicted in FIG. 5A, memory manager 122 can pre-allocate and reserve free space memory within working memory 1101 sufficient to support the processing of future transaction A.)”.
Regarding claims 2, 9, and 16, Oconnell teaches “the system of claim 1, said operations further comprising: calculating a predicted resource requirement for said predicted task request ([0073] Referring to the scenario depicted in FIG. 5A, memory manager 122 can predict that future transaction A, which has not yet been invoked at computing node 10, can have a memory utilization requirement of 10 megabytes and can determine with reference to the scenario in FIG. 5A that the predicted free space of working memory 1101 greatly exceeds 10 MB. Therefore, in the scenario depicted in FIG. 5A, memory manager 122 can pre-allocate and reserve free space memory within working memory 1101 sufficient to support the processing of future transaction A.)”.
Regarding claims 3 and 10, Oconnell teaches “the system of claim 2, said operations further comprising: detecting an existing resource allocation schedule; and scheduling said predicted task request in said existing resource allocation schedule ([0062] At block 1009, computing node 10 can perform predicting of the state of computing node 10 and of working memory 1101 during a subsequent time period T=TC+1, where TC is the current time. … Thus, at block 1009, computing node 10 is able to perform predicting so as to identify future active transactions which are not currently active, i.e., associated to request data not yet received by computing node 10 and having associated characterizing system calls not yet sent to operating system 120 by an application but which yet are predicted to be sent to operating system 120 at the described subsequent time period T=TC+1. In some embodiments, operating system 120 iterative querying of instances of predictive model at block 1012 can provide functions in the performance of predicting at predicting block 1009.)”.
Regarding claims 4, 11, and 17, Oconnell teaches “the system of claim 2, said operations further comprising: forecasting available resources during a future deployment time window; and scheduling said predicted task request based on said available resources ([0110] Still referring to the flowchart of FIG. 4, memory manager 122 can, e.g., examine live and dead data objects for determination of an amount of free space memory that can be made available. For example, the Java heap has live and dead (garbage) objects. In one example, free space memory that can be made available can be calculated in the mark phase. Memory manager 122 can check if enough free space memory in working memory 1101 will be available. If the answer is yes, memory manager 122 can check whether working memory 1101 can be cleaned up now (block 4014). If working memory 1101 cannot be cleaned up now, memory manager 122 can hold a predicted incoming subsequent transaction (block 4026). Then, memory manager 122 can start over. [0111] If working memory 1101 can be cleaned up now, memory manager 122 can clean up working memory (block 4106), then reserve memory (block 418) for an incoming transaction.)”.
Regarding claims 5, 12, and 18, Oconnell teaches “the system of claim 1, said operations further comprising: scaling resources for said predicted task request ([0074] In the scenario depicted in FIG. 5B, according to the predicted future state of working memory 1101, there is memory space consumption by transaction X, transaction Y, and transaction Z. However, in the scenario depicted in FIG. 5B, memory manager 122 can determine that transaction X, transaction Y, and transaction Z are no longer needed, and therefore memory manager 122, in the scenario depicted in FIG. 5B, can determine that the memory stored data associated to transaction X, transaction Y, and transaction Z can be subject to cleanup to permit an increase in the free space memory allocation for future transaction A predicted to have a future memory utilization requirement of 10 MB.)”.
Regarding claims 6, 13, and 19, Oconnell teaches “the system of claim 1, said operations further comprising: prioritizing said task request among a set of tasks ([0103] In some embodiments, computing node 10 can restrict performance of a hold or pause, in dependence on an examination of priority levels assigned to transactions predicted to be active at the subsequent time, T=TC+1.)”.
Claims 7, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Oconnell (Pub. No. US 2023/0221986) in view of Liu (Pub. No. US 2023/0376347).
Regarding claims 7, 14, and 20, Oconnell may not explicitly teach the claimed limitation.
Liu teaches “the system of claim 1, said operations further comprising: balancing a task load based on system resource availability, wherein said task load includes said task request and a set of tasks ([0090] In the embodiments of the present disclosure, multiple rounds of update on the candidate scheduling matrix are set. In each round of update, multiple scheduling paths are used to generate multiple simulated scheduling matrices based on the initial scheduling matrix of the current round, the candidate scheduling matrix with optimal balanced load is determined in the multiple simulated scheduling matrices, and next round of update is continued to be performed based on the candidate scheduling matrix. Through the scheduling matrices, multiple rounds of iterative updates are performed to obtain the target scheduling matrix with balanced load, such that task elements are allocated through the target scheduling matrix to make the resources of the allocated task elements on multiple node devices more balanced.)”.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Liu to the teachings of Oconnell in order to provide a system that teaches load balancing. The motivation for applying Liu's teaching to Oconnell's teaching is to provide a system that allows for improved resource utilization. Oconnell and Liu are analogous art directed towards task processing. Together, Oconnell and Liu teach every limitation of the claimed invention. Since the teachings were analogous art known at the time of filing, one of ordinary skill could have applied the teachings of Liu to the teachings of Oconnell by known methods before the effective filing date of the claimed invention and achieved expected results.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WYNUEL S AQUINO whose telephone number is (571) 272-7478. The examiner can normally be reached 9:00 AM - 5:00 PM EST, Monday through Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lewis Bullock, can be reached at 571-272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WYNUEL S AQUINO/Primary Examiner, Art Unit 2199