DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on XXXXXXXXXXXXXX has been entered.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims X are canceled. Claims X are amended. Claims X are new. Claims 1-20 are pending and have been examined. This action is in reply to the papers filed on 08/03/2023 (effective filing date 08/03/2023).

Information Disclosure Statement

No Information Disclosure Statement has been filed. The information disclosure statement(s) submitted on xxxxxxxx has/have been considered by the Examiner and made of record in the application file.

Amendment

The present Office Action is based upon the original patent application filed on xxx as modified by the amendment filed on xxx.

Reasons For Allowance

Prior-Art Rejection Withdrawn

Claims xxx are allowed. The closest prior art (see PTO-892, Notice of References Cited) does not teach the claimed subject matter. The closest prior-art references (xxx) teach the features as discussed in the Non-Final Rejection (xxxx); however, these cited references, alone or in combination, do not teach at least the following combination of features and/or elements:

determining, at a second time after associating the information corresponding to the first loyalty card with the logged location, that a second user computing device is located within a specified distance of the logged location using a second positioning system of the second user computing device; in response to determining that the second user computing device is located within the specified distance of the logged location of the first user computing device at the first time of detecting: retrieving information corresponding to a second loyalty card, the second loyalty card being associated with the merchant and the second user computing device; and displaying, by the second user computing device, data describing the second loyalty card.

Claim Rejections - 35 USC § 101 - Withdrawn

Per Applicant's amendments and arguments, and considering new guidance in the MPEP, the rejections are withdrawn. Specifically, in Applicant's Remarks (dated 03/14/2017, pgs. 8-11), Applicant traverses the 35 USC § 101 rejections, arguing that the amended claims recite new limitations that are not abstract, amount to significantly more, are directed to a practical application, etc. For example, Applicant argues.... In support of these arguments, Applicant cites the following cases (i.e., Alice Corp. v. CLS Bank Int'l, SRI Int'l, Inc. v. Cisco Systems, Inc., Ultramercial, Inc. v. Hulu, LLC, Berkheimer, Core Wireless, McRO, Enfish, Bascom, DDR, etc.).

Claim Rejections - 35 USC § 101

35 U.S.C. § 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter because the claimed invention is directed to an abstract idea without significantly more.
These claims recite a method and system for implementing multi-model inferencing frameworks and application programming interfaces. Claim 1 recites "[a] method comprising: receiving, via a user application programming interface (API), configuration parameters for execution of a plurality of machine learning models (MLMs), wherein the configuration parameters identify: storage locations of the plurality of MLMs, storage locations of input data into the plurality of MLMs, and one or more inference applications for performing inference processing of the input data using the plurality of MLMs; configuring, using the received configuration parameters, the one or more inference applications to process the input data; executing, on one or more processing devices, the plurality of MLMs using the one or more inference applications to generate a plurality of sets of output data, wherein each MLM of the plurality of MLMs generates at least one set of output data of the plurality of sets of output data; and rendering, via the user API, a combined representation of the plurality of sets of output data." (An illustrative, non-limiting sketch of this claimed workflow is provided, for reference, following the Step 2A analysis below.)

The claims are being rejected according to the 2019 Revised Patent Subject Matter Eligibility Guidance (Federal Register, Vol. 84, No. 5, pp. 50-57 (Jan. 7, 2019)).

Step 1: Does the Claim Fall within a Statutory Category?

Yes. Claims 1-10 recite a method and, therefore, are directed to the statutory class of a process. Claims 11-20 recite a system/processor and, therefore, are directed to the statutory class of a machine.

Step 2A, Prong One: Is a Judicial Exception Recited?

Yes. The following table identifies the specific limitations that recite an abstract idea. The column that identifies the additional elements will be relevant to the analysis in Step 2A, Prong Two, and Step 2B.

Claim 1: Identification of Abstract Idea and Additional Elements, Using Broadest Reasonable Interpretation

Limitation: 1. A method comprising:
Additional Elements: No additional elements are positively claimed.

Limitation: receiving, via a user application programming interface (API), configuration parameters for execution of a plurality of machine learning models (MLMs), wherein the configuration parameters identify: storage locations of the plurality of MLMs, storage locations of input data into the plurality of MLMs, and one or more inference applications for performing inference processing of the input data using the plurality of MLMs;
Abstract Idea: This limitation includes the step(s) of: receiving, via a user application programming interface (API), configuration parameters for execution of a plurality of machine learning models (MLMs), wherein the configuration parameters identify: storage locations of the plurality of MLMs, storage locations of input data into the plurality of MLMs, and one or more inference applications for performing inference processing of the input data using the plurality of MLMs.
But for the API and/or processing device, this limitation is directed to processing and/or communicating known information to implement multi-model inferencing frameworks and application programming interfaces, which may be categorized as any of the following: a certain method of organizing human activity, namely fundamental economic principles or practices (including hedging, insurance, mitigating risk) and/or commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations).
Additional Elements: receiving, via a user application programming interface (API), configuration parameters for execution of a plurality of machine learning models (MLMs)...

Limitation: configuring, using the received configuration parameters, the one or more inference applications to process the input data;
Abstract Idea: This limitation includes the step(s) of: configuring, using the received configuration parameters, the one or more inference applications to process the input data. But for the API and/or processing device, this limitation is directed to processing and/or communicating known information to implement multi-model inferencing frameworks and application programming interfaces, which may be categorized as any of the following: a certain method of organizing human activity, namely fundamental economic principles or practices (including hedging, insurance, mitigating risk) and/or commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations).
Additional Elements: No additional elements are positively claimed.

Limitation: executing, on one or more processing devices, the plurality of MLMs using the one or more inference applications to generate a plurality of sets of output data, wherein each MLM of the plurality of MLMs generates at least one set of output data of the plurality of sets of output data; and
Abstract Idea: This limitation includes the step(s) of: executing, on one or more processing devices, the plurality of MLMs using the one or more inference applications to generate a plurality of sets of output data, wherein each MLM of the plurality of MLMs generates at least one set of output data of the plurality of sets of output data. But for the API and/or processing device, this limitation is directed to processing and/or communicating known information to implement multi-model inferencing frameworks and application programming interfaces, which may be categorized as any of the following: a certain method of organizing human activity, namely fundamental economic principles or practices (including hedging, insurance, mitigating risk) and/or commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations).
Additional Elements: executing, on one or more processing devices, the plurality of MLMs...

Limitation: rendering, via the user API, a combined representation of the plurality of sets of output data.
Abstract Idea: This limitation includes the step(s) of: rendering, via the user API, a combined representation of the plurality of sets of output data.
But for the API and/or processing device, this limitation is directed to processing and/or communicating known information to implement multi-model inferencing frameworks and application programming interfaces, which may be categorized as any of the following: a certain method of organizing human activity, namely fundamental economic principles or practices (including hedging, insurance, mitigating risk) and/or commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations).
Additional Elements: rendering, via the user API, a combined representation...

As shown above, under Step 2A, Prong One, the claims recite a judicial exception (an abstract idea). The claims are directed to the abstract idea of implementing multi-model inferencing frameworks and application programming interfaces, which, pursuant to MPEP 2106.04, is aptly categorized as a method of organizing human activity. Therefore, under Step 2A, Prong One, the claims recite a judicial exception.

Next, the aforementioned claims recite additional functional elements that are associated with the judicial exception, including an API for communicating information. The Examiner understands these limitations to be insignificant extra-solution activity. See Accenture, 728 F.3d 1336, 108 U.S.P.Q.2d 1173 (Fed. Cir. 2013), citing Diamond v. Diehr, 450 U.S. 175, 191-192 (1981) ("[I]nsignificant post-solution activity will not transform an unpatentable principle into a patentable process.").

The aforementioned claims also recite additional technical elements, including a "processor" or "processing device" to execute the method and system and an API for communicating data. These limitations are recited at a high level of generality and appear to be nothing more than generic computer components. Claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 134 S. Ct. at 2358, 110 USPQ2d at 1983. See also 134 S. Ct. at 2359, 110 USPQ2d at 1984.

Step 2A, Prong Two: Is the Abstract Idea Integrated into a Practical Application?

No. The judicial exception is not integrated into a practical application. The additional elements listed above that relate to computing components are recited at a high level of generality (i.e., as generic components performing generic computer functions such as communicating, receiving, processing, analyzing, and outputting/displaying data) such that they amount to no more than mere instructions to apply the exception using generic computing components. Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea. Additionally, the claims do not purport to improve the functioning of the computer itself. There is no technological problem that the claimed invention solves; rather, the computer system is invoked merely as a tool. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, these claims are directed to an abstract idea.
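Examiner's note (illustration only): the claimed workflow charted above (receiving configuration parameters through a user API, configuring one or more inference applications, executing each MLM over the identified input data, and rendering a combined representation of the resulting output sets) can be paraphrased, under the broadest reasonable interpretation, by the following schematic sketch. All function names, parameter names, and storage paths in the sketch are the Examiner's hypothetical illustrations; they are not taken from Applicant's specification or from any cited reference.

```python
# Hypothetical sketch of the workflow recited in claim 1 (illustrative names only).
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class InferenceConfig:
    """Configuration parameters received via the user API."""
    model_paths: List[str]     # storage locations of the plurality of MLMs
    input_path: str            # storage location of the input data
    inference_apps: List[str]  # inference applications to be configured


# Hypothetical registry of inference applications; each one maps a model
# location and a batch of input data to one set of output data.
INFERENCE_APPS: Dict[str, Callable[[str, List[str]], List[str]]] = {
    "classifier": lambda model, data: [f"{model}:class({x})" for x in data],
    "detector": lambda model, data: [f"{model}:boxes({x})" for x in data],
}


def receive_configuration(api_request: Dict) -> InferenceConfig:
    # "receiving, via a user API, configuration parameters ..."
    return InferenceConfig(
        model_paths=api_request["model_paths"],
        input_path=api_request["input_path"],
        inference_apps=api_request["inference_apps"],
    )


def load_inputs(path: str) -> List[str]:
    # Placeholder loader; a real system would read from the identified storage location.
    return ["sample-0", "sample-1"]


def execute_models(cfg: InferenceConfig) -> List[List[str]]:
    # "executing ... the plurality of MLMs using the one or more inference
    # applications to generate a plurality of sets of output data"
    input_data = load_inputs(cfg.input_path)
    output_sets = []
    for model_path in cfg.model_paths:        # each MLM yields at least one output set
        for app_name in cfg.inference_apps:   # the configured inference application(s)
            output_sets.append(INFERENCE_APPS[app_name](model_path, input_data))
    return output_sets


def render_combined(output_sets: List[List[str]]) -> Dict:
    # "rendering, via the user API, a combined representation of the plurality
    # of sets of output data" -- here simply merged into one response object.
    return {"combined_outputs": output_sets}


if __name__ == "__main__":
    request = {
        "model_paths": ["models/mlm-a", "models/mlm-b"],
        "input_path": "datasets/batch-01",
        "inference_apps": ["classifier", "detector"],
    }
    print(render_combined(execute_models(receive_configuration(request))))
```

As the sketch shows, each recited step reduces to receiving values, reading configuration entries, looping over models, and returning a merged result, i.e., generic computer functions of the kind addressed in the analysis above.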
Furthermore, looking at the elements individually and in combination, under Step 2A, Prong Two, the claims as a whole do not integrate the judicial exception into a practical application because they fail to: improve the functioning of a computer or a technical field; apply the judicial exception in the treatment or prophylaxis of a disease; apply the judicial exception with a particular machine; effect a transformation or reduction of a particular article to a different state or thing; or apply the judicial exception beyond generally linking the use of the judicial exception to a particular technological environment. Rather, the claims merely use a computer as a tool to perform the abstract idea(s), and/or add insignificant extra-solution activity to the judicial exception, and/or generally link the use of the judicial exception to a particular technological environment.

Step 2B: Does the Claim Provide an Inventive Concept?

Next, under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. Furthermore, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. Simply put, as noted above, there is no indication that the combination of elements improves the functioning of a computer (or any other technology), and their collective functions merely provide conventional computer implementation. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements relating to computing components amount to no more than applying the exception using generic computing components. Mere instructions to apply an exception using a generic computing component cannot provide an inventive concept. Furthermore, the broadest reasonable interpretation of the claimed computer components (i.e., additional elements) includes any generic computing components that are capable of being programmed to communicate, receive, send, process, analyze, output, or display data. Furthermore, Applicant's Specification (PGPub. 2025/0045604, [0140]) refers to a general computer system but does not include any technically specific computer algorithm or code.

Additionally, pursuant to the requirement under Berkheimer, the following citations are provided to demonstrate that the additional elements, identified as extra-solution activity, amount to activities that are well-understood, routine, and conventional. See MPEP 2106.05(d).

- Capturing an image (code) with an RFID reader: Ritter, US Patent No. 7,734,507 (col. 3, lines 56-67); "RFID: Riding on the Chip" by Pat Russo, Frozen Food Age (New York), Dec. 2003, vol. 52, issue 5, p. S22.
- Receiving or transmitting data over a network: Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014).
- Storing and retrieving information in memory: Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.
- Outputting/presenting data to a user: Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015); MPEP 2106.05(g)(3).
- Using a machine learning model to determine user segment characteristics for an ad campaign: https://whites.agency/blog/how-to-use-machine-learning-for-customer-segmentation/.
Thus, taken alone and in combination, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea), and the claims are ineligible under 35 USC 101.

Independent system claim 11 and processor claim 20 also contain the identified abstract idea, with the additional elements of a processor and a storage medium, which are generic computer components, and thus are not significantly more for the same reasons and rationale stated above.

Dependent claims 2-10 and 12-19 further describe the abstract idea. The additional elements of the dependent claims fail to integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea. Thus, as the dependent claims remain directed to a judicial exception, and as the additional elements of the claims do not amount to significantly more, the dependent claims are not patent eligible.

As such, the claims are not patent eligible. The Office finds no improvements to another technology or field, no improvements to the function of the computer itself, and no meaningful limitations beyond generally linking the use of an abstract idea to a particular technological environment. Therefore, based on the two-part Alice Corp. analysis, there are no limitations in any of the claims that transform the exception (i.e., the abstract idea) into a patent-eligible application.

Claim Rejections - Not an Ordered Combination

None of the limitations, considered as an ordered combination, provide eligibility because, taken as a whole, the claims simply instruct the practitioner to implement the abstract idea with routine, conventional activity.

Claim Rejections - Preemption

Allowing the claims as presently written would preempt others from implementing multi-model inferencing frameworks and application programming interfaces. Furthermore, the claim language only recites the abstract idea of performing this method; there are no concrete steps articulating a particular way in which this idea is implemented or describing how it is performed.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the Examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Mallya et al. (US 2021/0142177) in view of Kuo et al. (US 2019/0156246).

18/229,929 - Claim 1. Mallya et al. 2021/0142177 teaches: A method comprising: receiving, via a user application programming interface (API) (Mallya et al. 2021/0142177 [0079 - requests may be received through a user interface] In at least one embodiment, at a subsequent point in time, a request may be received from client device 602 (or another such device) for content (e.g., path determinations) or data that is at least partially determined or impacted by a trained neural network. This request can include, for example, input data to be processed using a neural network to obtain one or more inferences or other output values, classifications, or predictions. In at least one embodiment, input data can be received to interface layer 608 and directed to inference module 618, although a different system or service can be used as well. In at least one embodiment, inference module 618 can obtain an appropriate trained network, such as a trained deep neural network (DNN) as discussed herein, from model repository 616 if not already stored locally to inference module 618. Inference module 618 can provide data as input to a trained network, which can then generate one or more inferences as output. This may include, for example, a classification of an instance of input data. In at least one embodiment, inferences can then be transmitted to client device 602 for display or other communication to a user. In at least one embodiment, context data for a user may also be stored to a user context data repository 622, which may include data about a user which may be useful as input to a network in generating inferences, or determining data to return to a user after obtaining instances. In at least one embodiment, relevant data, which may include at least some of input or inference data, may also be stored to a local database 620 for processing future requests. In at least one embodiment, a user can use account or other information to access resources or functionality of a provider environment. In at least one embodiment, if permitted and available, user data may also be collected and used to further train models, in order to provide more accurate inferences for future requests.
In at least one embodiment, requests may be received through a user interface to a machine learning application 626 executing on client device 602 , and results displayed through a same interface. A client device can include resources such as a processor 628 and memory 630 for generating a request and processing results or a response, as well as at least one data storage element 632 for storing data for machine learning application 626 . [0128 - graphical user interfaces] FIG. 12A is a block diagram illustrating an exemplary computer system, which may be a system with interconnected devices and components, a system-on-a-chip (SOC) or some combination thereof 1200 formed with a processor that may include execution units to execute an instruction, according to at least one embodiment. In at least one embodiment, computer system 1200 may include, without limitation, a component, such as a processor 1202 to employ execution units including logic to perform algorithms for process data, in accordance with present disclosure, such as in embodiment described herein. In at least one embodiment, computer system 1200 may include processors, such as PENTIUM® Processor family, Xeon™, Itanium®, XScale™ and/or StrongARM™, Intel® Core™, or Intel® Nervana™ microprocessors available from Intel Corporation of Santa Clara, Calif., although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes and like) may also be used. In at least one embodiment, computer system 1200 may execute a version of WINDOWS' operating system available from Microsoft Corporation of Redmond, Wash., although other operating systems (UNIX and Linux for example), embedded software, and/or graphical user interfaces, may also be used. [0135 - user input and keyboard interfaces] In at least one embodiment, computer system 1200 may use system I/O 1222 that is a proprietary hub interface bus to couple MCH 1216 to I/O controller hub (“ICH”) 1230 . In at least one embodiment, ICH 1230 may provide direct connections to some I/O devices via a local I/O bus. In at least one embodiment, local I/O bus may include, without limitation, a high-speed I/O bus for connecting peripherals to memory 1220 , chipset, and processor 1202 . Examples may include, without limitation, an audio controller 1229 , a firmware hub (“flash BIOS”) 1228 , a wireless transceiver 1226 , a data storage 1224 , a legacy I/O controller 1223 containing user input and keyboard interfaces 1225 , a serial expansion port 1227 , such as Universal Serial Bus (“USB”), and a network controller 1234 . data storage 1224 may comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device. [0393 - a request may be received by a set of API…] In at least one embodiment, shared storage may be mounted to AI services 4718 within system 4700 . In at least one embodiment, shared storage may operate as a cache (or other storage device type) and may be used to process inference requests from applications. In at least one embodiment, when an inference request is submitted, a request may be received by a set of API instances of deployment system 4606 , and one or more instances may be selected (e.g., for best fit, for load balancing, etc.) to process a request. 
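Examiner's note (illustration only): the request-handling flow quoted above from Mallya (an inference request arrives through an API instance, the referenced machine learning model is located in a model registry and loaded into a cache if not already present, and an inference server then executes the model and returns the result; see [0079] and [0393]) may be sketched roughly as follows. All names and code below are the Examiner's hypothetical illustration and are not taken from the Mallya reference.

```python
# Hypothetical sketch of a registry/cache-backed inference request handler
# (illustrative names only; cf. model repository 616 / model registry 4624).
from typing import Any, Dict

MODEL_REGISTRY: Dict[str, str] = {"path-dnn": "trained-weights-for-path-dnn"}
MODEL_CACHE: Dict[str, str] = {}  # cf. shared storage operating as a cache


def load_model(model_id: str) -> str:
    """Return the model from the cache, pulling it from the registry on a miss."""
    if model_id not in MODEL_CACHE:
        MODEL_CACHE[model_id] = MODEL_REGISTRY[model_id]
    return MODEL_CACHE[model_id]


def handle_inference_request(request: Dict[str, Any]) -> Dict[str, Any]:
    """Receive a request via an API instance, run the model, return the inference."""
    model = load_model(request["model_id"])
    # Stand-in for executing the trained network on the supplied input data.
    inference = f"inference({model}, {request['input_data']})"
    return {"client": request["client"], "result": inference}


if __name__ == "__main__":
    print(handle_inference_request(
        {"client": "client-602", "model_id": "path-dnn", "input_data": "frame-0"}
    ))
```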
In at least one embodiment, to process a request, a request may be entered into a database, a machine learning model may be located from model registry 4624 if not already in a cache, a validation step may ensure appropriate machine learning model is loaded into a cache (e.g., shared storage), and/or a copy of a model may be saved to a cache. In at least one embodiment, a scheduler (e.g., of pipeline manager 4712 ) may be used to launch an application that is referenced in a request if an application is not already running or if there are not enough instances of an application. In at least one embodiment, if an inference server is not already launched to execute a model, an inference server may be launched. Any number of inference servers may be launched per model. In at least one embodiment, in a pull model, in which inference servers are clustered, models may be cached whenever load balancing is advantageous. In at least one embodiment, inference servers may be statically loaded in corresponding, distributed servers. ) , configuration parameters ( Mallya et al. 2021/0142177 [0100 - hyperparameter configurations] In at least one embodiment, instances of a dataset can be embedded into a lower dimensional space of a certain size during pre-processing. In at least one embodiment, a size of this space is a parameter to be tuned. In at least one embodiment, an architecture of a CNN contains many tunable parameters. A parameter for filter sizes can represent an interpretation of information that corresponds to a size of an instance that will be analyzed. In computational linguistics, this is known as an n-gram size. An example CNN uses three different filter sizes, which represent potentially different n-gram sizes. A number of filters per filter size can correspond to a depth of a filter. Each filter attempts to learn something different from a structure of an instance, such as a sentence structure for textual data. In a convolutional layer, an activation function can be a rectified linear unit and a pooling type set as max pooling. Results can then be concatenated into a single dimensional vector, and a last layer is fully connected onto a two-dimensional output. This corresponds to a binary classification to which an optimization function can be applied. One such function is an implementation of a Root Mean Square (RMS) propagation method of gradient descent, where example hyperparameters can include learning rate, batch size, maximum gradient normal, and epochs. With neural networks, regularization can be an extremely important consideration. In at least one embodiment input data may be relatively sparse. A main hyperparameter in such a situation can be a dropout at a penultimate layer, which represents a proportion of nodes that will not “fire” at each training cycle. An example training process can suggest different hyperparameter configurations based on feedback for a performance of previous configurations. This model can be trained with a proposed configuration, evaluated on a designated validation set, and performance reporting. This process can be repeated to, for example, trade off exploration (learning more about different configurations) and exploitation (leveraging previous knowledge to achieve better results). [0101 - configuration parameters] As training CNNs can be parallelized and GPU-enabled computing resources can be utilized, multiple optimization strategies can be attempted for different scenarios. 
A complex scenario allows tuning model architecture and preprocessing and stochastic gradient descent parameters. This expands a model configuration space. In a basic scenario, only preprocessing and stochastic gradient descent parameters are tuned. There can be a greater number of configuration parameters in a complex scenario than in a basic scenario. Tuning in a joint space can be performed using a linear or exponential number of steps, iteration through an optimization loop for models. A cost for such a tuning process can be significantly less than for tuning processes such as random search and grid search, without any significant performance loss.) for execution of a plurality of machine learning models (MLMs) (Mallya et al. 2021/0142177 [0376 - execute machine learning models] In at least one embodiment, where a service 4620 includes an AI service (e.g., an inference service), one or more machine learning models associated with an application for anomaly detection (e.g., tumors, growth abnormalities, scarring, etc.) may be executed by calling upon (e.g., as an API call) an inference service (e.g., an inference server) to execute machine learning model(s), or processing thereof, as part of application execution. In at least one embodiment, where another application includes one or more machine learning models for segmentation tasks, an application may call upon an inference service to execute machine learning models for performing one or more of processing operations associated with segmentation tasks. In at least one embodiment, software 4618 implementing advanced processing and inferencing pipeline that includes segmentation application and anomaly detection application may be streamlined because each application may call upon a same inference service to perform one or more inferencing tasks. [0376; 0392; 0398 - execute machine learning models]), wherein the configuration parameters identify (Mallya et al. 2021/0142177 [0101 - configuration parameters] As training CNNs can be parallelized and GPU-enabled computing resources can be utilized, multiple optimization strategies can be attempted for different scenarios. A complex scenario allows tuning model architecture and preprocessing and stochastic gradient descent parameters. This expands a model configuration space. In a basic scenario, only preprocessing and stochastic gradient descent parameters are tuned. There can be a greater number of configuration parameters in a complex scenario than in a basic scenario. Tuning in a joint space can be performed using a linear or exponential number of steps, iteration through an optimization loop for models. A cost for such a tuning process can be significantly less than for tuning processes such as random search and grid search, without any significant performance loss.): storage locations of the plurality of MLMs (Mallya et al. 2021/0142177 [0079 - data storage element 632 for storing data for machine learning application] In at least one embodiment, at a subsequent point in time, a request may be received from client device 602 (or another such device) for content (e.g., path determinations) or data that is at least partially determined or impacted by a trained neural network. This request can include, for example, input data to be processed using a neural network to obtain one or more inferences or other output values, classifications, or predictions.
In at least one embodiment, input data can be received to interface layer 608 and directed to inference module 618 , although a different system or service can be used as well. In at least one embodiment, inference module 618 can obtain an appropriate trained network, such as a trained deep neural network (DNN) as discussed herein, from model repository 616 if not already stored locally to inference module 618 . Inference module 618 can provide data as input to a trained network, which can then generate one or more inferences as output. This may include, for example, a classification of an instance of input data. In at least one embodiment, inferences can then be transmitted to client device 602 for display or other communication to a user. In at least one embodiment, context data for a user may also be stored to a user context data repository 622 , which may include data about a user which may be useful as input to a network in generating inferences, or determining data to return to a user after obtaining instances. In at least one embodiment, relevant data, which may include at least some of input or inference data, may also be stored to a local database 620 for processing future requests. In at least one embodiment, a user can use account or other information to access resources or functionality of a provider environment. In at least one embodiment, if permitted and available, user data may also be collected and used to further train models, in order to provide more accurate inferences for future requests. In at least one embodiment, requests may be received through a user interface to a machine learning application 626 executing on client device 602 , and results displayed through a same interface. A client device can include resources such as a processor 628 and memory 630 for generating a request and processing results or a response, as well as at least one data storage element 632 for storing data for machine learning application 626 . [0177 - memory location] In one embodiment, each WD 1984 is specific to a particular graphics acceleration module 1946 and/or graphics processing engines 1731 - 1732 , N (shown in FIG. 17). It contains all information required by a graphics processing engine 1731 - 1732 , N (shown in FIG. 17) to do work or it can be a pointer to a memory location where an application has set up a command queue of work to be completed. ) , storage locations of input data into the plurality of MLMs ( Mallya et al. 2021/0142177 [0368 - machine learning models may have been trained on imaging data from one location, two locations, or any number of locations] In at least one embodiment, training pipeline 4704 (FIG. 47) may include a scenario where facility 4602 needs a machine learning model for use in performing one or more processing tasks for one or more applications in deployment system 4606 , but facility 4602 may not currently have such a machine learning model (or may not have a model that is optimized, efficient, or effective for such purposes). In at least one embodiment, an existing machine learning model may be selected from a model registry 4624 . In at least one embodiment, model registry 4624 may include machine learning models trained to perform a variety of different inference tasks on imaging data. In at least one embodiment, machine learning models in model registry 4624 may have been trained on imaging data from different facilities than facility 4602 (e.g., facilities remotely located). 
In at least one embodiment, machine learning models may have been trained on imaging data from one location, two locations, or any number of locations. In at least one embodiment, when being trained on imaging data from a specific location, training may take place at that location, or at least in a manner that protects confidentiality of imaging data or restricts imaging data from being transferred off-premises (e.g., to comply with HIPAA regulations, privacy regulations, etc.). In at least one embodiment, once a model is trained—or partially trained—at one location, a machine learning model may be added to model registry 4624 . In at least one embodiment, a machine learning model may then be retrained, or updated, at any number of other facilities, and a retrained or updated model may be made available in model registry 4624 . In at least one embodiment, a machine learning model may then be selected from model registry 4624 —and referred to as output model 4616 —and may be used in deployment system 4606 to perform one or more processing tasks for one or more applications of a deployment system. [0391 - data in same location of a memory may be used for any number of processing tasks] In at least one embodiment, services 4620 leveraged by and shared by applications or containers in deployment system 4606 may include compute services 4716 , AI services 4718 , visualization services 4720 , and/or other service types. In at least one embodiment, applications may call (e.g., execute) one or more of services 4620 to perform processing operations for an application. In at least one embodiment, compute services 4716 may be leveraged by applications to perform super-computing or other high-performance computing (HPC) tasks. In at least one embodiment, compute service(s) 4716 may be leveraged to perform parallel processing (e.g., using a parallel computing platform 4730 ) for processing data through one or more of applications and/or one or more tasks of a single application, substantially simultaneously. In at least one embodiment, parallel computing platform 4730 (e.g., NVIDIA's CUDA) may enable general purpose computing on GPUs (GPGPU) (e.g., GPUs 4722 ). In at least one embodiment, a software layer of parallel computing platform 4730 may provide access to virtual instruction sets and parallel computational elements of GPUs, for execution of compute kernels. In at least one embodiment, parallel computing platform 4730 may include memory and, in some embodiments, a memory may be shared between and among multiple containers, and/or between and among different processing tasks within a single container. In at least one embodiment, inter-process communication (IPC) calls may be generated for multiple containers and/or for multiple processes within a container to use same data from a shared segment of memory of parallel computing platform 4730 (e.g., where multiple different stages of an application or multiple applications are processing same information). In at least one embodiment, rather than making a copy of data and moving data to different locations in memory (e.g., a read/write operation), same data in same location of a memory may be used for any number of processing tasks (e.g., at a same time, at different times, etc.). In at least one embodiment, as data is used to generate new data as a result of processing, this information of a new location of data may be stored and shared between various applications. 
In at least one embodiment, location of data and a location of updated or modified data may be part of a definition of how a payload is understood within containers. ) , and one or more inference applications ( Mallya et al. 2021/0142177 [0396 - inference applications] In at least one embodiment, transfer of requests between services 4620 and inference applications may be hidden behind a software development kit (SDK), and robust transport may be provide through a queue. In at least one embodiment, a request will be placed in a queue via an API for an individual application/tenant ID combination and an SDK will pull a request from a queue and give a request to an application. In at least one embodiment, a name of a queue may be provided in an environment from where an SDK will pick it up. In at least one embodiment, asynchronous communication through a queue may be useful as it may allow any instance of an application to pick up work as it becomes available. Results may be transferred back through a queue, to ensure no data is lost. In at least one embodiment, queues may also provide an ability to segment work, as highest priority work may go to a queue with most instances of an application connected to it, while lowest priority work may go to a queue with a single instance connected to it that processes tasks in an order received. In at least one embodiment, an application may run on a GPU-accelerated instance generated in cloud 4726 , and an inference service may perform inferencing on a GPU. ) for performing inference processing ( Mallya et al. 2021/0142177 [0114 - inference processing] In at least one embodiment, activation storage 1020 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., Flash memory), or other storage. In at least one embodiment, activation storage 1020 may be completely or partially within or external to one or more processors or other logical circuits. In at least one embodiment, choice of whether activation storage 1020 is internal or external to a processor, for example, or comprised of DRAM, SRAM, Flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors. In at least one embodiment, inference and/or training logic 1015 illustrated in FIG. 9 may be used in conjunction with an application-specific integrated circuit (“ASIC”), such as Tensorflow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., “Lake Crest”) processor from Intel Corp. In at least one embodiment, inference and/or training logic 1015 illustrated in FIG. 9 may be used in conjunction with central processing unit (“CPU”) hardware, graphics processing unit (“GPU”) hardware or other hardware, such as field programmable gate arrays (“FPGAs”). [0115 - inference processing ] FIG. 10 illustrates inference and/or training logic 1015 , according to at least one or more embodiments. In at least one embodiment, inference and/or training logic 1015 may include, without limitation, hardware logic in which computational resources are dedicated or otherwise exclusively used in conjunction with weight values or other information corresponding to one or more layers of neurons within a neural network. In at least one embodiment, inference and/or training logic 1015 illustrated in FIG. 
10 may be used in conjunction with an application-specific integrated circuit (ASIC), such as Tensorflow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., “Lake Crest”) processor from Intel Corp. In at least one embodiment, inference and/or training logic 1015 illustrated in FIG. 10 may be used in conjunction with central processing unit (CPU) hardware, graphics processing unit (GPU) hardware or other hardware, such as field programmable gate arrays (FPGAs). In at least one embodiment, inference and/or training logic 1015 includes, without limitation, code and/or data storage 1001 and code and/or data storage 1005 , which may be used to store code (e.g., graph code), weight values and/or other information, including bias values, gradient information, momentum values, and/or other parameter or hyperparameter information. In at least one embodiment illustrated in FIG. 10, each of code and/or data storage 1001 and code and/or data storage 1005 is associated with a dedicated computational resource, such as computational hardware 1002 and computational hardware 1006 , respectively. In at least one embodiment, each of computational hardware 1002 and computational hardware 1006 comprises one or more ALUs that perform mathematical functions, such as linear algebraic functions, only on information stored in code and/or data storage 1001 and code and/or data storage 1005 , respectively, result of which is stored in activation storage 1020 . ) of the input data using the plurality of MLMs ( Mallya et al. 2021/0142177 [Fig. 9; 0012 - inference and/or training logic] FIG. 9 illustrates inference and/or training logic, according to at least one embodiment; [0013 - FIG. 10 illustrates inference and/or training logic] FIG. 10 illustrates inference and/or training logic, according to at least one embodiment; [0073 - Once a DNN is trained, this DNN can be deployed and used to identify and classify objects or patterns in a process known as inference. Examples of inference (a process through which a DNN extracts useful information from a given input)] A deep neural network (DNN) model includes multiple layers of many connected perceptrons (e.g., nodes) that can be trained with enormous amounts of input data to quickly solve complex problems with high accuracy. In one example, a first layer of a DNN model breaks down an input image of an automobile into various sections and looks for basic patterns such as lines and angles. Second layer assembles lines to look for higher-level patterns such as wheels, windshields, and mirrors. A next layer identifies a type of vehicle, and a final few layers generate a label for an input image, identifying a model of a specific automobile brand. Once a DNN is trained, this DNN can be deployed and used to identify and classify objects or patterns in a process known as inference. Examples of inference (a process through which a DNN extracts useful information from a given input) include identifying handwritten numbers on checks deposited into KIM machines, identifying images of friends in photos, delivering movie recommendations, identifying and classifying different types of automobiles, pedestrians, and road hazards in driverless cars, or translating human speech in near real-time. [0079 - input data to be processed using a neural network to obtain one or more inferences or other output values, classifications, or predictions. 
In at least one embodiment, input data can be received to interface layer 608 and directed to inference module] In at least one embodiment, at a subsequent point in time, a request may be received from client device 602 (or another such device) for content (e.g., path determinations) or data that is at least partially determined or impacted by a trained neural network. This request can include, for example, input data to be processed using a neural network to obtain one or more inferences or other output values, classifications, or predictions. In at least one embodiment, input data can be received to interface layer 608 and directed to inference module 618 , although a different system or service can be used as well. In at least one embodiment, inference module 618 can obtain an appropriate trained network, such as a trained deep neural network (DNN) as discussed herein, from model repository 616 if not already stored locally to inference module 618 . Inference module 618 can provide data as input to a trained network, which can then generate one or more inferences as output. This may include, for example, a classification of an instance of input data. In at least one embodiment, inferences can then be transmitted to client device 602 for display or other communication to a user. In at least one embodiment, context data for a user may also be stored to a user context data repository 622 , which may include data about a user which may be useful as input to a network in generating inferences, or determining data to return to a user after obtaining instances. In at least one embodiment, relevant data, which may include at least some of input or inference data, may also be stored to a local database 620 for processing future requests. In at least one embodiment, a user can use account or other information to access resources or functionality of a provider environment. In at least one embodiment, if permitted and available, user data may also be collected and used to further train models, in order to provide more accurate inferences for future requests. In at least one embodiment, requests may be received through a user interface to a machine learning application 626 executing on client device 602 , and results displayed through a same interface. A client device can include resources such as a processor 628 and memory 630 for generating a req