DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending for examination.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1, Statutory Category: Yes. Claim 1 recites a method comprising a series of steps and therefore falls within the statutory category of a process.

Step 2A, Prong 1, Judicial Exception Recited: Yes. The claim recites: “generating, at a first time, MLA forecast data based on the one or more execution queries or execution of the one or more MLAs; generating, for each one of the one or more MLAs, performance indicators by comparing the MLA forecast data of each respective MLA to execution queries for the respective MLA or current execution of the respective MLA at a second time, the second time being later than the first time.” As drafted, the claim as a whole recites a method including steps that could be performed in the human mind, but for the recitation of generic computing components. The human mind can readily judge, evaluate, or otherwise determine MLA forecast data based on the execution queries, and can likewise determine the performance indicators by comparing the MLA forecast data of each respective MLA to execution queries for the respective MLA, or to current execution of the respective MLA, at a second time.
Therefore, but for the recitation of generic computing components, these steps are mental processes that can be performed in the human mind (including an observation, evaluation, judgment, or opinion). Accordingly, the claim recites a judicial exception.

Step 2A, Prong 2, Integrated into a Practical Application: No. This judicial exception is not integrated into a practical application. In particular, the claim recites the additional limitation “receiving one or more execution queries to execute the one or more MLAs,” which is insignificant pre-solution data gathering (see MPEP § 2106.05(g)). In addition, the recitation of a “method for generating an orchestrating model configured to orchestrate a memory allocation of a machine learning algorithm (MLA)-dedicated memory communicably connected to a computing unit, the computing unit being configured to execute one or more MLAs deployed in the MLA-dedicated memory, the computing unit being communicably connected to an MLA database storing the one or more MLAs” is an attempt to generally link the use of the judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). Moreover, the limitation “causing the computing unit to execute the one or more MLAs based on the one or more execution queries” merely applies the judicial exception or abstract idea (see MPEP § 2106.05(f)); the claim does not define any particular machine to “cause” the execution of the one or more MLAs, other than a generic machine such as the “computing unit,” and provides no details whatsoever on how the claimed function occurs. Furthermore, the limitation “updating the orchestrating model based on the performance indicators” is insignificant extra-solution activity and mere data storing (see MPEP § 2106.05(g)).
Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to the abstract idea.

Step 2B, Claim Provides an Inventive Concept: No. The additional element of a “method for generating an orchestrating model configured to orchestrate a memory allocation of a machine learning algorithm (MLA)-dedicated memory communicably connected to a computing unit, the computing unit being configured to execute one or more MLAs deployed in the MLA-dedicated memory, the computing unit being communicably connected to an MLA database storing the one or more MLAs” is an attempt to generally link the use of the judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). In addition, the limitation “causing the computing unit to execute the one or more MLAs based on the one or more execution queries” merely applies the judicial exception or abstract idea (see MPEP § 2106.05(f)); the claim does not define any particular machine to “cause” the execution of the one or more MLAs, other than a generic machine such as the “computing unit,” and provides no details whatsoever on how the claimed function occurs. Moreover, the limitation “receiving one or more execution queries to execute the one or more MLAs” is insignificant pre-solution data gathering (see MPEP § 2106.05(g)). Likewise, the limitation “updating the orchestrating model based on the performance indicators” is insignificant extra-solution activity and mere data storing (see MPEP § 2106.05(g)), and these activities are well-understood, routine, conventional activity (see MPEP § 2106.05(d)).
Courts have identified “receiving and transmitting data” and “storing and retrieving information,” among other activities, as well-understood, routine, and conventional, and as mere instructions to implement an abstract idea on a computer or mere use of a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)). These additional elements, individually and in combination, do not amount to significantly more than the exception itself and do not provide an inventive concept at Step 2B.

Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. Here, the “receiving” and “updating” steps were considered extra-solution activity in Step 2A as insignificant data gathering and storing, and they are well-understood, routine, conventional activity in the field. The “receiving” step is performed for the purposes of communication and data transmission, which the courts have recognized as well-understood, routine, and conventional (receiving or transmitting data over a network, e.g., using the Internet to gather data: Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016); see MPEP § 2106.05(d)(II)). Similarly, the “updating” step is mere data storing (storing and retrieving information in memory: Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93; see MPEP § 2106.05(d)(II)(iv)). Accordingly, the conclusion that the “receiving” and “updating” steps are well-understood, routine, conventional activity is supported under Berkheimer option 2. For these reasons, there is no inventive concept in the claim, and thus the claim is ineligible.

Independent claims 16 and 19 are rejected for the same reasons as claim 1 above.
Claim 16 recites “A system comprising: at least one processor,” which is directed to generic computing components/functions (see MPEP § 2106.05(b)). In addition, claim 16 recites “generate, based on the one or more execution queries, a first orchestrating model configured to orchestrate the (MLA)-dedicated memory,” which is treated as part of the abstract idea and is analogous to a mental process, as the concept can be performed in the human mind. Further, claim 16 recites the limitation “execute the one or more MLAs based on the one or more execution queries and the second orchestrating model,” which merely applies the judicial exception or abstract idea (see MPEP § 2106.05(f)); the claim does not define any particular machine to “execute the one or more MLAs,” other than a generic machine such as the “processor,” and provides no details whatsoever on how the claimed function occurs. Claim 19 further recites “A non-transitory computer-readable medium comprising a plurality of executable instructions which, when executed by at least one processor, cause the at least one processor to.” These additional elements are directed to generic computing components/functions that merely apply the abstract idea (see MPEP § 2106.05(f)).

With respect to dependent claim 2, the claim elaborates that, subsequent to causing the computing unit to execute a given MLA, an end of the execution of the given MLA is detected. “Detecting an end of the execution of the given MLA” is treated as part of the abstract idea and is analogous to a mental process, as the concept can be performed in the human mind. Further, the claim as a whole is a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion).
With respect to dependent claim 3, the claim elaborates that, subsequent to detecting the end of the execution of the given MLA, the given MLA is discarded from the MLA-dedicated memory. “Discarding the given MLA from the MLA-dedicated memory” is treated as part of the abstract idea and is analogous to a mental process, as the concept can be performed in the human mind.

With respect to dependent claim 4, the claim elaborates that the given MLA is associated with an MLA category in the MLA database, the MLA category being indicative of discarding instructions to be executed to discard the given MLA from the MLA-dedicated memory. These limitations amount to adding the words “apply it” (or an equivalent) to the judicial exception, or are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)).

With respect to dependent claim 5, the claim elaborates that the discarding instructions comprise a pre-determined time duration, the method further comprising, subsequent to detecting the end of the execution of the given MLA: triggering a counter indicative of an amount of time that has passed since the end of the execution of the given MLA has been detected, and wherein discarding the given MLA from the MLA-dedicated memory comprises: in response to the counter reaching the pre-determined time duration, discarding the given MLA from the MLA-dedicated memory. “Triggering a counter indicative…” and “in response to the counter reaching the pre-determined time duration, discarding” amount to adding the words “apply it” (or an equivalent) to the judicial exception, or are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)).
With respect to dependent claim 6, the claim elaborates that the given MLA is a first MLA, the MLA category being further indicative of a priority level of the first MLA, and wherein discarding the given MLA from the MLA-dedicated memory is made in response to determining that a second MLA is to be deployed in the MLA-dedicated memory, the second MLA having a higher priority level than a priority level of the first MLA. These limitations amount to adding the words “apply it” (or an equivalent) to the judicial exception, or are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)).

With respect to dependent claim 7, the claim elaborates that: a first MLA category corresponds to a first pre-determined time duration, the first pre-determined duration being strictly positive; and a second MLA category corresponds to a second pre-determined time duration, the second pre-determined duration being zero. These limitations amount to adding the words “apply it” (or an equivalent) to the judicial exception, or are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)).
With respect to dependent claim 8, the claim elaborates that each MLA of the MLA database is associated with an MLA category and a priority level, the MLA category being indicative of discarding instructions to be executed subsequent to an execution thereof; a first MLA category is associated with discarding instructions which, upon being executed, cause an MLA of the first MLA category to be maintained in the MLA-dedicated memory; a second MLA category is associated with discarding instructions which, upon being executed, cause an MLA of the second MLA category to be discarded from the MLA-dedicated memory once an execution thereof has ended; a third MLA category is associated with discarding instructions which, upon being executed, cause: a timer to be triggered once an execution of an MLA of the third MLA category has ended, the timer having a pre-determined value for each MLA of the third category, the timer being reset in response to the MLA being further executed and further triggered once the new execution has ended, and the MLA of the third MLA category to be discarded from the MLA-dedicated memory once the timer has reached the pre-determined value and in response to an MLA having a higher priority level being deployed in the MLA-dedicated memory; and a fourth MLA category is associated with discarding instructions which, upon being executed, cause an MLA of the fourth MLA category to be discarded from the MLA-dedicated memory in response to a determination that an MLA having a higher priority level is to be deployed in the MLA-dedicated memory. These limitations amount to adding the words “apply it” (or an equivalent) to the judicial exception, or are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)).
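For purposes of illustrating why the examiner treats these limitations as mere instructions to apply the exception, the category-dependent discarding behavior recited in claims 5-8 can be sketched as a few lines of ordinary conditional logic. This is an illustrative sketch only; the function and parameter names are hypothetical and do not appear in the claims, and the third category follows the conjunctive reading of claim 8 (timer expired and a higher-priority MLA to be deployed).

```python
# Hypothetical sketch of the four-category discarding policy of claims 5-8.
# All names are illustrative; nothing here is drawn from the specification.

def should_discard(category, timer_elapsed, timer_limit,
                   incoming_priority, own_priority):
    """Return True if an MLA should be discarded from the dedicated memory."""
    if category == 1:
        # First category: MLA is maintained in the MLA-dedicated memory.
        return False
    if category == 2:
        # Second category: MLA is discarded once its execution has ended.
        return True
    if category == 3:
        # Third category: discarded once the per-MLA timer has reached its
        # pre-determined value and a higher-priority MLA is to be deployed.
        return timer_elapsed >= timer_limit and incoming_priority > own_priority
    if category == 4:
        # Fourth category: discarded in response to a determination that a
        # higher-priority MLA is to be deployed.
        return incoming_priority > own_priority
    raise ValueError(f"unknown MLA category: {category}")
```

The sketch shows that each category reduces to a single comparison or constant, i.e., generic conditional logic executable on any general-purpose computer.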
With respect to dependent claim 9, the claim elaborates partitioning computer resources of the computing unit into a plurality of resource pools; and extracting, from the one or more execution queries, information about a number of resource pools required to execute the one or more MLAs. The “partitioning” step amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or is a mere instruction to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)). In addition, “extracting, from the one or more execution queries, information about a number of resource pools required” is treated as part of the abstract idea and is analogous to a mental process, as the concept can be performed in the human mind. Further, the claim as a whole is a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion).

With respect to dependent claim 10, the claim elaborates that: causing the computing unit to execute the one or more MLAs based on the one or more execution queries comprises determining an execution runtime of each of the one or more MLAs; receiving one or more execution queries to execute the one or more MLAs comprises determining, for each MLA, a desired execution time of the MLA; and MLA forecast data associated with a given MLA is based at least in part on the execution runtime of the given MLA and at least in part on the desired execution time. The step of “causing the computing unit to execute the one or more MLAs” amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or is a mere instruction to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)). In addition, the “receiving” step is insignificant pre-solution data gathering (see MPEP § 2106.05(g)).
Further, “determining, for each MLA, a desired execution time of the MLA; and MLA forecast data” are being treated as part of abstract idea and is analogous to Mental processes, such that concept can be performed in the human mind). With respect to the dependent claim 11, the claim elaborates that wherein generating the MLA forecast data comprises determining, for each MLA of the one or more MLAs, data indicative of an expected usage, by the computing unit, of the corresponding MLA (“determining” are being treated as part of abstract idea and is analogous to Mental processes, such that concept can be performed in the human mind). With respect to the dependent claim 12, the claim elaborates that wherein each MLA is associated with an MLA category, the MLA category being indicative of instructions to be executed by a controller to discard a given MLA. (these limitations are directed to Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). With respect to the dependent claim 13, the claim elaborates that wherein the instructions comprise a pre-determined time duration, the method further comprising: detecting an end of the execution of the given MLA: triggering, by the controller, a counter indicative of an amount of time that has passed since the end of the execution of the given MLA has been detected; and in response to the counter reaching the pre-determined time duration, discarding, by the controller, the given MLA from the MLA-dedicated memory (“detecting an end of the execution” are being treated as part of abstract idea and is analogous to Mental processes, such that concept can be performed in the human mind. 
In addition, “triggering” and “discarding” amount to adding the words “apply it” (or an equivalent) to the judicial exception, or are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)).

With respect to dependent claim 14, the claim elaborates that generating the MLA forecast data comprises: determining a number of MLA execution queries for the one or more MLAs; and generating the MLA forecast data based on the number of MLA execution queries for the one or more MLAs. The “determining” and “generating” steps are treated as part of the abstract idea and are analogous to mental processes, as the concepts can be performed in the human mind.

With respect to dependent claim 15, the claim elaborates that generating the MLA forecast data comprises: determining, for each of the one or more MLAs, whether the respective MLA depends on any other MLA; and generating the MLA forecast data based on whether the one or more MLAs depend on other MLAs. The “determining” step is treated as part of the abstract idea and is analogous to a mental process, as the concept can be performed in the human mind.

With respect to dependent claim 17, the claim elaborates that the instructions further cause the system to detect an end of the execution of an MLA of the one or more MLAs. These limitations amount to adding the words “apply it” (or an equivalent) to the judicial exception, or are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)).
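For purposes of illustrating why the forecast-generation steps of claims 14 and 15 are treated as mental processes, they can be sketched as simple counting and bookkeeping. This is an illustrative sketch only; all names are hypothetical, and the particular way dependencies are credited is an assumption, not a recitation of the claims or specification.

```python
from collections import Counter

# Hypothetical sketch of claims 14-15: count execution queries per MLA,
# then credit any MLA that another MLA depends on with that MLA's usage.

def generate_forecast(execution_queries, dependencies):
    """execution_queries: list of MLA names, one entry per query received.
    dependencies: mapping of MLA name -> list of MLA names it depends on.
    Returns expected-usage counts per MLA (the "forecast data")."""
    forecast = Counter(execution_queries)
    for mla, deps in dependencies.items():
        for dep in deps:
            # Executing `mla` also implies usage of each MLA it depends on.
            forecast[dep] += forecast[mla]
    return dict(forecast)
```

The sketch amounts to tallying observations and adding counts, an evaluation a person could carry out with pen and paper.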
With respect to dependent claim 18, the claim elaborates that the instructions further cause the system to, after detecting the end of the execution of the MLA, delete the MLA from the MLA-dedicated memory. These limitations amount to adding the words “apply it” (or an equivalent) to the judicial exception, or are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)). The claim does not provide any details on how these limitations are performed.

With respect to dependent claim 20, the claim elaborates that the first orchestrating model comprises an indication of when each MLA of the one or more MLAs is to be executed. These limitations amount to adding the words “apply it” (or an equivalent) to the judicial exception, or are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)).

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation is: “a computing unit” in claim 1. Because this claim limitation is being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it is being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

The claim limitation “a computing unit” in claim 1 invokes 35 U.S.C. 112(f). The specification at paragraph [0106] discloses, as the purported corresponding structure, that “computing unit 400 may be implemented as conventional computer server. In an example of an embodiment of the present technology, each of the computing units 400 may be implemented as a Dell™ PowerEdge™ Server running the Microsoft™ Windows Server™ operating system. Needless to say, each of the computing units 400 may be implemented in any other suitable hardware, software, and/or firmware, or a combination thereof.” However, such a computing unit, without detail about the means by which the claimed functions are accomplished, is not an adequate disclosure of corresponding structure (i.e., it is a general-purpose computer, which is not sufficient structure to serve as corresponding structure under 112(f).
That is, the general-purpose computer must be transformed into a specially programmed computer by way of an algorithm). MPEP § 2181(II)(B) specifically indicates that “For a computer-implemented 35 U.S.C. 112(f) claim limitation, the specification must disclose an algorithm for performing the claimed specific computer function, or else the claim is indefinite under 35 U.S.C. 112(b). See Net MoneyIN, Inc. v. Verisign, Inc., 545 F.3d 1359, 1367, 88 USPQ2d 1751, 1757 (Fed. Cir. 2008). See also In re Aoyama, 656 F.3d 1293, 1297, 99 USPQ2d 1936, 1939 (Fed. Cir. 2011) ("[W]hen the disclosed structure is a computer programmed to carry out an algorithm, ‘the disclosed structure is not the general purpose computer, but rather that special purpose computer programmed to perform the disclosed algorithm.’") (quoting WMS Gaming, Inc. v. Int’l Game Tech., 184 F.3d 1339, 1349, 51 USPQ2d 1385, 1391 (Fed. Cir. 1999))” and that “The corresponding structure is not simply a general purpose computer by itself but the special purpose computer as programmed to perform the disclosed algorithm. Aristocrat, 521 F.3d at 1333, 86 USPQ2d at 1239. Thus, the specification must sufficiently disclose an algorithm to transform a general purpose microprocessor to the special purpose computer.” Therefore, claims 1-15 are indefinite and are rejected under 35 U.S.C. 112(b).

Applicant may: (a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f); (b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181. Claim Rejections - 35 USC § 112 The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention. The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention. 
Claims 1-15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. Claim 1 contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention. As described above with respect to 112(f) (i.e., the computing unit), the disclosure does not provide adequate structure to perform the claimed functions. The specification does not demonstrate that applicant has made an invention that achieves the claimed function, because the invention is not described with sufficient detail such that one of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention. See MPEP § 2181(II)(B): “When a claim containing a computer-implemented 35 U.S.C. 112(f) claim limitation is found to be indefinite under 35 U.S.C. 112(b) for failure to disclose sufficient corresponding structure (e.g., the computer and the algorithm) in the specification that performs the entire claimed function, it will also lack written description under 35 U.S.C. 112(a).” Claims 2-15 depend from claim 1 and do not overcome its deficiencies; therefore, they are rejected for the same reasons as claim 1 above.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-15 are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.
As per claims 1-15: As described above with respect to 112(f), “a computing unit,” without detail about the means by which the claimed functions are accomplished, is not an adequate disclosure of corresponding structure. MPEP § 2181(II)(B) specifically indicates that “For a computer-implemented 35 U.S.C. 112(f) claim limitation, the specification must disclose an algorithm for performing the claimed specific computer function, or else the claim is indefinite under 35 U.S.C. 112(b). See Net MoneyIN, Inc. v. Verisign, Inc., 545 F.3d 1359, 1367, 88 USPQ2d 1751, 1757 (Fed. Cir. 2008). See also In re Aoyama, 656 F.3d 1293, 1297, 99 USPQ2d 1936, 1939 (Fed. Cir. 2011) ("[W]hen the disclosed structure is a computer programmed to carry out an algorithm, ‘the disclosed structure is not the general purpose computer, but rather that special purpose computer programmed to perform the disclosed algorithm.’") (quoting WMS Gaming, Inc. v. Int’l Game Tech., 184 F.3d 1339, 1349, 51 USPQ2d 1385, 1391 (Fed. Cir. 1999)).” Therefore, claims 1-15 are indefinite and are rejected under 35 U.S.C. 112(b).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 9, 11 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Eberlein et al. (US Pub. 2021/0182108 A1) in view of Sarkar et al. (US Patent 7,360,697 B1) and further in view of KORN et al. (US Pub.
2020/0356774 A1) and McFarlane (US Pub. 2021/0005296 A1). Eberlein was cited in the IDS filed on 07/12/2023.

As per claim 1, Eberlein teaches the invention substantially as claimed, including a method for generating an orchestrating model configured to orchestrate a memory allocation connected to a computing unit, the computing unit being configured to execute one or more processes, the computing unit being communicably connected to a process database storing the one or more processes, the method comprising (Eberlein, Fig. 6, 605 processor, 607 memory; 606 database; [0050] lines 13-26, Process P1-P3 is used to regularly retrain a prediction model used by Resource Consumption Prediction 308. The prediction model is updated with new workload statistics… Data from the Load statistics DB 306 is used to continuously update and train the prediction model associated with the Resource Consumption Prediction; [0052] lines 1-5, A workload profile is computed by the Resource Consumption Prediction 308 for the software processes using workload data, or a machine-learning prediction model (as orchestrating model) is trained to be able to assign an expected workload profile to a new software process; [0056] lines 1-8, Resource Consumption Prediction 308 returns a predicted load profile for the new software process, which includes parameter(s) for an in-memory database size or other criteria needed to execute the new software process (for example, a 100 GB in-memory database, 200 GB of RAM, and 3 CPU); [0058] lines 2-4, the software process needs to run on an in-memory database); receiving one or more execution queries to execute the one or more processes (Eberlein, Fig.
5, 502, Receive a request to schedule a new software process; [0020] lines 4-5, queries executed can require additional RAM and CPU); causing the computing unit to execute the one or more processes based on the one or more execution queries (Eberlein, [0020] lines 4-5, queries executed can require additional RAM and CPU; [0058] lines 1-8, The scheduler 310 starts the new software process in the managed landscape (for example, if the software process needs to run on an in-memory database, the software process is started with the configuration to connect to the in-memory database). The new software process executes on the Managed Landscape 304 and is monitored by the Load monitor 302); generating, at a first time, process forecast data based on the one or more execution queries or execution of the one or more processes (Eberlein, Fig. 5, 508, receive the workload resource prediction for the new software process; [0056] lines 1-10, The scheduler 310 requests that Resource Consumption Prediction 308 compute a "predicted workload" for the to-be-scheduled job. Resource Consumption Prediction 308 returns a predicted load profile for the new software process, which includes parameter(s) for an in-memory database size or other criteria needed to execute the new software process (for example, a 100 GB in-memory database, 200 GB of RAM, and 3 CPU)); generating, for each one of the one or more processes, performance indicators at a second time, the second time being later than the first time (Eberlein, [0028] lines 1-12, Workload characteristics are needed in order to perform the mentioned balance between workload distribution and software process orchestration. For example, in some implementations, a database of key performance indicators (KPIs) (as performance indicators) can be created.
The KPIs can be used to measure previous instances of a software process being scheduled and what workload requirements the software process consumes; [0029] lines 1-5, As previously mentioned, software process workload requirements can change between updates, releases, or modifications. The KPI database is updated regularly with actual performance statistics so that for each workload type, a clear understanding exists for demand associated with different workload types); and updating the orchestrating model based on the performance indicators (Eberlein, [0029] lines 1-5, As previously mentioned, software process workload requirements can change between updates, releases, or modifications. The KPI database is updated regularly with actual performance statistics so that for each workload type, a clear understanding exists for demand associated with different workload types; [0032] lines 7-8, workload profiles of resource allocation can be built and stored in the KPI database; [0050] lines 6-28, The Load monitor 302 also calculates an average, maximum, and standard deviation of the resource usage data and stores the calculated data into the Load statistics DB 306), and P3 322 (Resource Consumption Prediction 308 reads data from the Load statistics DB 306 and writes data—e.g., a workload profile—to the Load statistics DB 306), run continuously, asynchronously, and independently in the computing system 300. Process P1-P3 is used to regularly retrain a prediction model used by Resource Consumption Prediction 308. The prediction model is updated with new workload statistics. The Load monitor 302 monitors resource consumption of the Managed Landscape 304. Parameterized data received from the Managed Landscape 304 (for example, workload data associated with executing software processes and hardware data) is used by the Load monitor 302 to continuously update the Load statistics DB 306 (refer to FIG. 4, 402).
Data from the Load statistics DB 306 is used to continuously update and train the prediction model associated with the Resource Consumption Prediction 308. The prediction model is used to predict workload consumption for a new software process).

Eberlein fails to specifically teach that, when generating performance indicators, it is by comparing the MLA forecast data of each respective MLA to execution queries for the respective MLA or current execution of the respective MLA. However, Sarkar teaches that, when generating performance indicators, it is by comparing the process forecast data of each respective process to execution queries for the respective process or current execution of the respective process (Sarkar, Col 6, lines 24-26, define business metrics of KPI (Key Performance Indicators) that compare commitment/projection/forecasts vs. actual; please note: the forecast data of each respective process and the execution queries for the respective process were taught by Eberlein, see Eberlein, [0056] lines 1-10, The scheduler 310 requests that Resource Consumption Prediction 308 compute a "predicted workload" for the to-be-scheduled job. Resource Consumption Prediction 308 returns a predicted load profile for the new software process, which includes parameter(s) for an in-memory database size or other criteria needed to execute the new software process (for example, a 100 GB in-memory database, 200 GB of RAM, and 3 CPU)).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Eberlein with Sarkar because Sarkar's teaching of generating the KPI by comparing forecast data vs. actual data would have provided Eberlein's system with the advantage and capability to allow the system to easily identify the performance related to the processing request in order to later adjust the resource allocation, thereby improving system performance and efficiency.
Eberlein and Sarkar fail to specifically teach machine learning algorithm (MLA)-dedicated memory communicably connected to a computing unit, the MLAs being deployed in the MLA-dedicated memory. However, KORN teaches machine learning algorithm (MLA)-dedicated memory communicably connected to a computing unit, the MLAs being deployed in the MLA-dedicated memory (KORN, Fig. 1, 110 processor, 130 memory, 132; [0028] lines 20-28, memory 130 may be a singular memory or a collection of distributed memories such as having memories and/or modules included with the various system elements where, for example, the machine learning algorithm 132 and a database 134 may have their own dedicated memories that collectively with other distributed memories of the system 100 are referred to as the memory 130 of the system 100).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Eberlein and Sarkar with KORN because KORN's teaching that a machine learning algorithm may have its own dedicated memory would have provided Eberlein and Sarkar's system with the advantage and capability to allow the system to easily manage the memory resources for the machine learning algorithm, thereby improving system performance and efficiency.

Eberlein, Sarkar and KORN fail to specifically teach that the process is a machine learning algorithm and that the execution queries are to execute the one or more MLAs. However, McFarlane teaches that the process is a machine learning algorithm and that the execution queries are to execute the one or more MLAs (McFarlane, [0072] lines 1-6, machine learning module 120 may send a query request for acknowledging the type of analysis (e.g., appendicitis) for AI learning to execute machine learning algorithm).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Eberlein, Sarkar and KORN with McFarlane because McFarlane's teaching of sending a query request for acknowledging the type of analysis for AI learning to execute a machine learning algorithm would have provided Eberlein, Sarkar and KORN's system with the advantage and capability to allow the system to efficiently manage access of the data and process the data to facilitate the user's requirements (see McFarlane, [0004], "efficiently managing access of the data and processing the data to facilitate user's requirements").

As per claim 9, Eberlein, Sarkar, KORN and McFarlane teach the invention according to claim 1 above. Eberlein further teaches partitioning computer resources of the computing unit into a plurality of resource pools (Eberlein, Fig. 1A, 102, 104, 106, 108, CPU, RAM, IO, NET; each resource is partitioned, as different resource pools); and extracting, from the one or more execution queries, information about a number of resource pools required to execute the one or more MLAs (Eberlein, Fig. 1B; Fig. 5, 502; [0018] Workload sizing options are also available in cloud-computing environments; however, different application types typically have different requirements, and it is difficult to predict which sizing options to reserve, and dynamically requesting specific options is significantly more expensive than using reserved instances of a pre-selected option type. Therefore, general purpose machines are normally preferred that can be used for requirements of any application type. But using such generic hardware can be very inefficient, because when applications of the same type are run on a particular set of hardware, application density can be optimized until one parameter reaches a limit.
Often, buffers associated with each parameter will be implemented to ensure smooth application operation, even during a dynamic peak load. Even if a parameter reaches its limit, other parameters may still have spare capacity (for example, RAM may be fully utilized, but CPU is still idle to a certain extent), so non-consumed resources can still be considered to be available. In some cases and regardless of consumption status, some resources are paid for, so the lack of consumption is needlessly wasteful; [0055] requests particular data about the new software process from Process Repository 314 (for example, execution environment, minimum resource requirements, and tasks and associated parameters—refer to FIG. 4, 406)).

As per claim 11, Eberlein, Sarkar, KORN and McFarlane teach the invention according to claim 1 above. Eberlein teaches wherein generating the process forecast data comprises determining, for each process of the one or more processes, data indicative of an expected usage, by the computing unit, of the corresponding process (Eberlein, Fig. 5, 508, [0056] The scheduler 310 requests that Resource Consumption Prediction 308 compute a "predicted workload" for the to-be-scheduled job. Resource Consumption Prediction 308 returns a predicted load profile for the new software process, which includes parameter(s) for an in-memory database size or other criteria needed to execute the new software process (for example, a 100 GB in-memory database, 200 GB of RAM, and 3 CPU)). In addition, McFarlane teaches that the processes are machine learning algorithms (MLAs) (McFarlane, [0072] lines 1-6, machine learning module 120 may send a query request for acknowledging the type of analysis (e.g., appendicitis) for AI learning to execute machine learning algorithm).

As per claim 15, Eberlein, Sarkar, KORN and McFarlane teach the invention according to claim 1 above.
Eberlein teaches wherein generating the process forecast data comprises: determining, for each of the one or more processes, whether the respective process depends on any other process; and generating the process forecast data based on whether the one or more processes depend on other processes (Eberlein, [0054] when Scheduler 310 is requested to schedule a new software process, the scheduler 310 retrieves new software process identifying/description data from To be scheduled processes 312. The identifying data for the new software process includes dependencies/prerequisites for a software process to start. The dependencies/prerequisites are matched. For example, if software processes need to run in parallel, the processes are planned and a prediction is computed for all of the software processes). In addition, McFarlane teaches that the processes are machine learning algorithms (MLAs) (McFarlane, [0072] lines 1-6, machine learning module 120 may send a query request for acknowledging the type of analysis (e.g., appendicitis) for AI learning to execute machine learning algorithm).

Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Eberlein, Sarkar, KORN and McFarlane, as applied to claim 1 above, and further in view of GUPTA et al. (US Pub. 2021/0224197 A1).

As per claim 2, Eberlein, Sarkar, KORN and McFarlane teach the invention according to claim 1 above. Eberlein teaches causing the computing unit to execute a given process (Eberlein, [0020] lines 4-5, queries executed can require additional RAM and CPU; [0058] lines 1-8, The scheduler 310 starts the new software process in the managed landscape (for example, if the software process needs to run on an in-memory database, the software process is started with the configuration to connect to the in-memory database). The new software process executes on the Managed Landscape 304 and is monitored by the Load monitor 302).
In addition, McFarlane teaches that the process is an MLA (McFarlane, [0072] lines 1-6, machine learning module 120 may send a query request for acknowledging the type of analysis (e.g., appendicitis) for AI learning to execute machine learning algorithm). Eberlein, Sarkar, KORN and McFarlane fail to specifically teach, subsequent to causing the computing unit to execute, detecting an end of the execution of the given MLA. However, GUPTA teaches, subsequent to causing the computing unit to execute, detecting an end of the execution of the given MLA (GUPTA, [0058] lines 3-6, determining that the host 102 is done with one or more workload jobs).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Eberlein, Sarkar, KORN and McFarlane with GUPTA because GUPTA's teaching of detecting an end of the execution would have provided Eberlein, Sarkar, KORN and McFarlane's system with the advantage and capability to allow the system to easily determine whether the MLA has finished, thereby improving resource utilization and system performance (see GUPTA, [0058], "the storage manager 130 trims each section of the memory subsystem 120 allocated to the one or more workload job").

As per claim 3, Eberlein, Sarkar, KORN, McFarlane and GUPTA teach the invention according to claim 2 above. GUPTA further teaches, subsequent to detecting the end of the execution of the given MLA, discarding the given MLA from the MLA-dedicated memory (GUPTA, [0058] lines 5-10, determining that the host 102 is done with one or more wor