DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed June 26, 2025 have been fully considered but they are not persuasive.
Regarding the rejections under 35 U.S.C. 101, Applicant notes that the two limitations “determining, from the request…” and “determining, based on the tenant identifier…” have been amended out of the claims; thus, according to Applicant, the removal of these limitations makes the claims patent-eligible under Step 2A, prong 1. However, the claims as amended continue to recite an abstract idea: “executing, based on the type of the machine learning application, a flow of operations to produce a result, wherein the flow of operations includes a combination of elements and an order in which the combination of elements are to be performed, wherein the flow of operations is tenant agnostic in that a first and second of the combination of elements respectively represent a first and second type of machine learning model, wherein executing the first and second of the combination of elements responsive to the request respectively includes:” The limitation directed to “executing…a flow of operations” was present in the claims previously examined. The refinement of the Step 2A, Prong 1 analysis in this action merely clarifies the reasoning underlying the same rejection and does not introduce a new ground of rejection. Accordingly, this Office Action is made final pursuant to MPEP § 706.07(a).
Applicant asserts that “the limitation… is not ‘incidental to the primary process or product that are merely a nominal or tangential addition to the claim.’ Rather, this claim limitation refers to previously introduced claim limitations and provided antecedent basis for later claim limitations.” Applicant further contends that “the additional limitations are significant (i.e., impose meaningful limits on the claim)… and not just necessary data gathering and outputting” (Remarks, pg. 4).
However, Applicant’s statements are conclusory. Merely asserting that the execution of two machine learning models is “not incidental” and “significant” does not demonstrate how this step applies to or improves a particular technology in a meaningful way. The limitation still amounts to the generic execution of models on data using conventional computer processing, which is a routine and well-understood function. Applicant does not identify any technological improvement to machine learning, computer operation, or data handling beyond implementation of the abstract idea itself.
Applicant asserts that the limitation “executing, based on a type of the machine learning application, a flow of operations… wherein the flow includes a combination of elements and an order in which the combination… are to be performed” describes “a particular solution… where the claim as a whole requires a specific way of integrating machine learning models into a multi-tenant architecture.” Applicant further contends that the claim specifies relationships such as a “tenant agnostic flow” and “tenant specific first/second operations” (Remarks, pg. 9).
This reasoning is unpersuasive. The claim’s purported “specific way” is still an abstraction directed to organizing and sequencing conventional data processing steps within a generic architecture. There is no identified technological improvement to the functioning of the computer, the machine learning models themselves, or the data processing environment. The argument recasts abstract functional language as “specific,” but the recited specificity merely describes logical flow, not a concrete technical mechanism. Under MPEP § 2106.05(f), such recitations are procedural detail, not meaningful technical constraint.
Applicant argues that the limitation “wherein the flow of operations is tenant agnostic in that a first and second of the combination of elements respectively represent a first and second type of machine learning model” is not a “field of use limitation.” Applicant further contends that the claim’s integration of tenant agnostic and tenant specific flows into a multi-tenant architecture shows specificity and technical application.
This argument is conclusory. The recited “tenant agnostic” and “tenant specific” flows merely define the business or organizational context in which the abstract idea (processing requests using machine learning models) is applied. The claim still operates through generic computer execution and data handling, with no improvement to the functioning of the computer or the underlying machine learning process. Referring to a “multi-tenant architecture” only limits the environment of use, which is precisely what MPEP § 2106.05(h) identifies as a non-qualifying field-of-use restriction.
With regard to the Lee rejection, Applicant argues: “The above quotes from Lee do not describe a ‘multi-tenant machine learning serving infrastructure.’ Also, it is unclear to Applicant what in the above quote from Lee is alleged to be the claimed ‘tenant application,’ ‘request,’ and ‘machine learning application’” (Remarks, pg. 14).
However, the examiner’s mapping is supported by the cited portions of Lee. Paragraph [0079] explicitly discloses “a machine learning service (MLS) designed to support large numbers of users and a wide variety of algorithms,” which reasonably corresponds to the claimed “multi-tenant machine learning serving infrastructure.” Paragraph [0082] further describes “a request of a machine learning application” in context, where users or “tenants” submit input data, recipes, and model execution requests through online endpoints. These teachings correspond to the claimed “tenant application,” “request,” and “machine learning application.”
Applicant’s argument relies on a demand for explicit naming rather than functional equivalence. Lee clearly describes a system supporting multiple tenants submitting machine learning requests through a shared infrastructure, matching the claimed relationships even if the terminology differs.
Applicant next argues that “Lee’s alias is not a ‘tenant identifier’” and that “Lee’s ‘alias’ identifies a version of a model, not a tenant.” Applicant cites Lee ¶0086, which describes immutable names and pointers to model versions, to argue that “one of ordinary skill in the art would not read ‘alias users’ as tenants” (Remarks, pg. 15-16).
This argument is unpersuasive. While Lee indeed describes an alias as identifying a model version, Lee also teaches associating model aliases with specific users, accounts, or organizations operating within the multi-tenant system. Lee ¶0079 and ¶0082 discuss a “machine learning service designed to support large numbers of users” and “online access points for models published for use.” A person of ordinary skill would interpret the association between a user/model alias and its access scope as functionally identifying a tenant within the system, even if it is not explicitly labeled a “tenant identifier.”
Applicant’s reasoning focuses on literal wording rather than on functional correspondence. Lee’s specification indicates that each user or tenant may access and deploy models under their own aliases, thereby performing the same role as the claimed tenant identifier: distinguishing which entity or tenant a model or request belongs to. The examiner’s interpretation is therefore reasonable.
Applicant argues: “There is no indication that the invoking of the different APIs or jobs are part of the same ‘flow of operations’… Best Applicant can tell, Lee is describing separate submitted requests to separate APIs… There is also no indication that the alleged ‘flow of operations’ is ‘tenant agnostic’… Different parts of Lee are being combined in a manner not described by the cited sections” (Remarks, pg. 18-20). These points are unpersuasive. Lee ¶¶0191–0198 and the related discussion depict coordination of multiple machine learning tasks and training iterations (e.g., J1, J2) across a distributed, multi-tenant service. Even if separate API calls occur, the reference explicitly presents them as coordinated components of a managed workflow executed by the service infrastructure, and therefore as collectively representing a flow of operations. Applicant’s distinction that each API call is “separate” does not negate that Lee describes the system orchestrating them as part of an overall pipeline.
Applicant’s contention that the examiner improperly “combined different parts of Lee” is also unpersuasive; referencing related descriptions within a single embodiment (e.g., training iterations and system management of data splits) constitutes a permissible holistic reading of the reference. Furthermore, Lee’s depiction of a system supporting multiple users and training configurations embodies a “tenant agnostic” design in which operations depend on model type and configuration rather than on a specific tenant.
Applicant argues: “There is no indication that the execution of the M1 model and the M2 model from Lee are run with data related to ‘the request’… Lee is discussing asynchronously scheduled jobs… but these are not expressed as depending on each other or part of the same flow of operations based on the ‘type of machine learning application’” (Remarks, pg. 20-21). This argument is unpersuasive. Lee ¶¶0191–0198 describes a distributed machine learning system in which multiple jobs (e.g., J1, J2) are generated, queued, and executed as part of a managed training and evaluation process under a shared infrastructure (the MLS). Although each job may be scheduled independently, Lee teaches that the jobs correspond to iterations of model training and evaluation associated with different models or configurations, operations that collectively define a flow of machine learning processes governed by the type of task or model involved.
Applicant’s insistence that the reference must explicitly tie both jobs to the same “request” misapplies the standard for rejection. Nothing in the claim requires the jobs to be synchronously linked; it merely requires executing multiple models responsive to the request. Lee’s disclosure that the system schedules and executes multiple training jobs in response to model configurations inherently meets this limitation. A skilled artisan would readily understand that such job execution is initiated by service-level requests to the MLS, satisfying the claimed relationship.
Applicant’s argument rests on an overly narrow reading that isolates examples and ignores Lee’s broader system-level disclosure of orchestrated training flows. The reference reasonably teaches execution of multiple models responsive to service requests and corresponding to different types of machine learning applications. Finally, Applicant’s terminology (“tenant agnostic flow,” “tenant specific models”) merely restates this known division of responsibilities and does not identify any technical or operational distinction.
Applicant asserts: “The claim ‘requires a specific way of integrating machine learning models into a multi-tenant architecture’ that includes ‘tenant agnostic flow’ and ‘tenant specific models’… This specific way is not taught by Lee” (Remarks, pg. 22-23). This argument lacks evidentiary support. Lee ¶¶0079–0086 and ¶¶0191–0198 clearly disclose a machine learning service supporting model training, evaluation, and execution for multiple tenants, even if the term “tenant” is not explicitly used.
Applicant’s repeated emphasis on a “specific way” is conclusory; no concrete algorithmic, architectural, or procedural differences are articulated beyond what Lee already performs. Describing routine orchestration of models per tenant under a shared multi-tenant service does not confer novelty.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Regarding claim 1 and analogous claims 8 and 15:
Step 1: Is the claim directed to one of the four statutory categories?
Yes. Claim 1 is directed to a method, claim 8 to an article of manufacture, and claim 15 to an apparatus.
Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes. The limitation “executing, based on the type of the machine learning application, a flow of operations to produce a result, wherein the flow of operations includes a combination of elements and an order in which the combination of elements are to be performed, wherein the flow of operations is tenant agnostic in that a first and second of the combination of elements respectively represent a first and second type of machine learning model, wherein executing the first and second of the combination of elements responsive to the request” is directed to a mental process of judgment implemented on a generic computer under MPEP 2106.04(a)(2)(III).
Step 2A, prong 2: Do the additional elements integrate into a practical application?
No. The limitations: “running, for the first and second type of machine learning model, a first and second machine learning models with data related to the request, wherein the first and second machine learning models are tenant specific machine learning models that were selected responsive to the request and based on the type of the machine learning application and a tenant identifier that was determined from the request and that identifies one of the plurality of tenants, wherein the first machine learning model was generated based on a first training data set associated with the tenant identifier and the second machine learning model was generated based on a second training data set associated with the tenant identifier; and returning the result in response to the request” are directed to insignificant extra-solution activities of mere data gathering and outputting under MPEP 2106.05(g).
Further, the limitations “responsive to the request, the multi-tenant machine learning serving infrastructure performing the following,” and “executing, based on the type of the machine learning application, a flow of operations to produce a result, wherein the flow of operations includes a combination of elements and an order in which the combination of elements are to be performed,” are directed to mere instructions to apply under MPEP 2106.05(f) (Intellectual Ventures v. Erie Indem).
Further, the limitations “wherein the first and second machine learning model are respectively of the first and second type of machine learning model,” and “wherein the flow of operations is tenant agnostic in that a first and second of the combination of elements respectively represent a first and second type of machine learning model,” are directed to field of use under MPEP 2106.05(h). None of these limitations, taken either alone or in combination, amount to a practical application.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The limitations: “receiving, at a multi-tenant machine learning serving infrastructure that serves a plurality of tenants, from a tenant application a request of a machine learning application;” and “wherein executing the first and second of the combination of elements responsive to the request includes running the first and second machine learning models with data related to the request;” and “and returning the scoring result in response to the request” are directed to well-understood, routine, and conventional activity of “receiving or transmitting data over a network” under MPEP 2106.05(d) (Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information)).
Further, the limitation: “executing, based on the type of the machine learning application, a flow of operations to produce a result, wherein the flow of operations includes a combination of elements and an order in which the combination of elements are to be performed,” is directed to mere instructions to apply under MPEP 2106.05(f) (Intellectual Ventures v. Erie Indem).
Further, the limitation: “wherein the flow of operations is tenant agnostic in that a first and second of the combination of elements respectively represent a first and second type of machine learning model,” is directed to field of use under MPEP 2106.05(h). None of these limitations, taken either alone or in combination, amount to significantly more than the abstract idea.
Claim 8 recites, “if executed by a set of one or more processors of a machine learning serving infrastructure that serves a plurality of tenants, are configurable to cause said set of one or more processors to perform operations comprising,” which is directed to mere instructions to apply an exception under MPEP 2106.05(f) (Intellectual Ventures v. Erie Indem).
Claim 15 recites, “and a non-transitory machine-readable storage medium that provides instructions that, if executed by the set of one or more processors, are configurable to cause the apparatus to perform operations comprising,” which is directed to mere instructions to apply an exception under MPEP 2106.05(f) (Intellectual Ventures v. Erie Indem).
Regarding claim 2 and analogous claims 9 and 16:
Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim is dependent on claim 1.
Step 2A, prong 2: Do the additional elements integrate into a practical application?
No. The limitations: “based on the type of the machine learning application, the flow of operations includes: transmitting a first request to a first scoring service to run the first machine learning model with data related to the request to obtain a first scoring result; receiving the first scoring result from the first scoring service; and transmitting a second request to a second scoring service to run the second machine learning model with at least the first scoring result to obtain the result” are directed to insignificant extra-solution activities of mere data gathering and outputting under MPEP 2106.05(g). None of these limitations, taken either alone or in combination, amount to a practical application.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The limitations: “based on the type of the machine learning application, the flow of operations includes: transmitting a first request to a first scoring service to run the first machine learning model with data related to the request to obtain a first scoring result; receiving the first scoring result from the first scoring service; and transmitting a second request to a second scoring service to run the second machine learning model with at least the first scoring result to obtain the result” are directed to well-understood, routine, and conventional activity of “receiving or transmitting data over a network” under MPEP 2106.05(d) (Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information)). None of these limitations, taken either alone or in combination, amount to significantly more than the abstract idea.
Regarding claim 3 and analogous claims 10 and 17:
Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim is dependent on claim 1.
Step 2A, prong 2: Do the additional elements integrate into a practical application?
No. The limitations: “wherein the running includes: transmitting a first request to a first scoring service to run the first machine learning model with first data related to the request to obtain a first scoring result; transmitting a second request to a second scoring service to run the second machine learning model with second data related to the request to obtain a second scoring result; receiving the first and the second scoring results from the first and second scoring services respectively; and combining the first and the second scoring results to obtain the result” are directed to insignificant extra-solution activity of mere data gathering and outputting under MPEP 2106.05(g). None of these limitations, taken either alone or in combination, amount to a practical application.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The limitations: “wherein the running includes: transmitting a first request to a first scoring service to run the first machine learning model with first data related to the request to obtain a first scoring result; transmitting a second request to a second scoring service to run the second machine learning model with second data related to the request to obtain a second scoring result; receiving the first and the second scoring results from the first and second scoring services respectively; and combining the first and the second scoring results to obtain the result” are directed to well-understood, routine, and conventional activity of “receiving or transmitting data over a network” under MPEP 2106.05(d). None of these limitations, taken either alone or in combination, amount to significantly more than the abstract idea.
Regarding claim 4 and analogous claims 11 and 18:
Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim is dependent on claim 1.
Step 2A, prong 2: Do the additional elements integrate into a practical application?
No. The limitation: “wherein the combining the first and the second scoring results includes aggregating the first and second scoring results” is directed to insignificant extra-solution activity of mere data gathering under MPEP 2106.05(g). None of these limitations, taken either alone or in combination, amount to a practical application.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The limitation: “wherein the combining the first and the second scoring results includes aggregating the first and second scoring results” is directed to well-understood, routine, and conventional activity of “receiving or transmitting data over a network” under MPEP 2106.05(d) (Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information)). None of these limitations, taken either alone or in combination, amount to significantly more than the abstract idea.
Regarding claim 5 and analogous claims 12 and 19:
Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim is dependent on claim 1. Further, the limitation: “wherein prior to running the first machine learning model performing responsive to determining that the first machine learning model is not deployed;” is directed to a mental process of inference under MPEP 2106.04(a)(2)(III).
Step 2A, prong 2: Do the additional elements integrate into a practical application?
No. The limitation: “retrieving the first machine learning model from a machine learning model datastore based on an identifier of the first machine learning model” is directed to insignificant extra-solution activity of mere data gathering under MPEP 2106.05(g). None of these limitations, taken either alone or in combination, amount to a practical application.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The limitation: “retrieving the first machine learning model from a machine learning model datastore based on an identifier of the first machine learning model” is directed to well-understood, routine, and conventional activity of “receiving or transmitting data over a network” under MPEP 2106.05(d) (Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information)). None of these limitations, taken either alone or in combination, amount to significantly more than the abstract idea.
Regarding claim 6 and analogous claims 13 and 20:
Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim is dependent on claim 1.
Step 2A, prong 2: Do the additional elements integrate into a practical application?
No. The limitation: “wherein the tenant application is a customer relationship management (CRM) application and the data related to the request includes one or more fields of a record that is identified in the request” is directed to field of use under MPEP 2106.05(h). None of these limitations, taken either alone or in combination, amount to a practical application.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The limitation: “wherein the tenant application is a customer relationship management (CRM) application and the data related to the request includes one or more fields of a record that is identified in the request” is directed to field of use under MPEP 2106.05(h). None of these limitations, taken either alone or in combination, amount to significantly more than the abstract idea.
Regarding claim 7 and analogous claims 14 and 21:
Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim is dependent on claim 1.
Step 2A, prong 2: Do the additional elements integrate into a practical application?
No. The limitation: “wherein the result includes predicted information for the record according to the first and second machine learning models” is directed to field of use under MPEP 2106.05(h). None of these limitations, taken either alone or in combination, amount to a practical application.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The limitation: “wherein the result includes predicted information for the record according to the first and second machine learning models” is directed to field of use under MPEP 2106.05(h). None of these limitations, taken either alone or in combination, amount to significantly more than the abstract idea.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 8-13, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Jadon.
Regarding claim 1 and analogous claims 8 and 15:
Lee and Jadon teach:
1. A method comprising: receiving, at a multi-tenant machine learning serving infrastructure that serves a plurality of tenants, from a tenant application a request of a machine learning application;
(Lee, ¶0079)
“Various embodiments of methods and apparatus for a customizable, easy-to-use machine learning service (MLS) designed to support large numbers of users and a wide variety of algorithms and problem sizes are described [i.e. A method comprising: receiving, at a multi-tenant machine learning serving infrastructure that serves a plurality of tenants,].”
(Lee, ¶0082)
“Supported entity types in one embodiment may include, among others, data sources (e.g., descriptors of locations or objects from which input records for machine learning can be obtained), sets of statistics generated by analyzing the input data, recipes (e.g., descriptors of feature processing transformations to be applied to input data for training models), processing plans (e.g., templates for executing various machine learning tasks), models (which may also be referred to as predictors), parameter sets to be used for recipes and/or models, model execution results such as predictions or evaluations, online access points for models that are to be used on streaming or real-time data, and/or aliases (e.g., pointers to model versions that have been “published” for use as described below) [i.e. from a tenant application a request of a machine learning application;].”
2. responsive to the request, the multi-tenant machine learning serving infrastructure performing the following: executing, based on the type of the machine learning application, a flow of operations to produce a result, wherein the flow of operations includes a combination of elements and an order in which the combination of elements are to be performed,
(Lee, ¶0106, Fig. 5)
“FIG. 5 illustrates an example of asynchronous scheduling of jobs at a machine learning service, according to at least some embodiments [i.e. executing, based on the type of the machine learning application, a flow of operations to produce a result,]. In the depicted example, a client has invoked four MLS APIs, API1 through API4, and four corresponding job objects J1 through J4 are created and placed in job queue 142 [i.e. wherein the flow of operations includes a combination of elements and an order in which the combination of elements are to be performed,].”
4. wherein the flow of operations is tenant agnostic in that a first and second of the combination of elements respectively represent a first and second type of machine learning model,
(Lee, ¶0198)
“At time t1, a training job J1 of a training-and-evaluation iteration TEI1 for a model M1 is begun…At time t2, a training job J2 may be scheduled at a server set SS2, for a training-and-evaluation iteration TEI2 for a different model M2.”
5. wherein executing the first and second of the combination of elements responsive to the request respectively includes running the first and second machine learning models with data related to the request;
(Lee, ¶0198)
“At time t1, a training job J1 of a training-and-evaluation iteration TEI1 for a model M1 is begun. Job J1 is scheduled at a set of servers SS1 of the MLS, and may include the selection of a training set, e.g., either at the chunk-level, at the observation record level, or at both levels. A pseudo-random number source PRNS 3002 (such as a function or method that returns a sequence of PRNs, or a list of pre-generated PRNs) may be used to generate the training set for Job J1. At time t2, a training job J2 may be scheduled at a server set SS2, for a training-and-evaluation iteration TEI2 for a different model M2 [i.e. wherein executing the first and second of the combination of elements responsive to the request respectively includes running the first and second machine learning models with data related to the request;].”
6. and returning the result in response to the request.
(Lee, ¶0123)
“For local mode, the MLS may package up an executable local version 843 of the model (where the details of the type of executable that is to be provided, such as the type of byte code or the hardware architecture on which the model is to be run, may have been specified in the execution request 812) and transmit the local model to the client [i.e. and returning the result in response to the request].”
Lee does not explicitly teach:
1. running, for the first and second type of machine learning model, a first and second machine learning models with data related to the request, wherein the first and second machine learning model are respectively of the first and second type of machine learning model,
2. wherein the first and second machine learning models are tenant specific machine learning models that were selected responsive to the request and based on the type of the machine learning application and a tenant identifier that was determined from the request and that identifies one of the plurality of tenants,
3. wherein the first machine learning model was generated based on a first training data set associated with the tenant identifier and the second machine learning model was generated based on a second training data set associated with the tenant identifier;
Jadon teaches:
1. running, for the first and second type of machine learning model, a first and second machine learning models with data related to the request, wherein the first and second machine learning model are respectively of the first and second type of machine learning model,
(Jadon, col. 7: 32-36)
“In accordance with one or more techniques of this disclosure, in response to a request for a prediction, ML system 138 may train each respective ML model in a predetermined plurality of ML models to generate, a respective training-phase prediction in a plurality of training-phase predictions [i.e. running, for the first and second type of machine learning model, a first and second machine learning models with data related to the request, wherein the first and second machine learning model are respectively of the first and second type of machine learning model,].”
2. wherein the first and second machine learning models are tenant specific machine learning models that were selected responsive to the request and based on the type of the machine learning application and a tenant identifier that was determined from the request and that identifies one of the plurality of tenants,
(Jadon, col. 18: 66-67, col. 19: 1-3)
“Policy controller 140 may cause a user interface 262 containing data based on the prediction to be presented at user interface device 129. For example, API 146 may receive the prediction from ML system 138 and output the prediction to user interface module 254 [i.e. wherein the first and second machine learning models are tenant specific machine learning models that were selected responsive to the request].”
(Jadon, col. 18: 51-55)
“Similarly, when ML system 138 processes a deep learning model request, ML system 138 may return a prediction generated by a deep learning ML model selected by ML system 138 from a plurality of predetermined types of deep learning ML models [i.e. and based on the type of the machine learning application].”
(Jadon, col. 19: 1-5)
“For example, user interface module 254 may generate a JavaScript Object Notation (JSON) object that contains data sufficient to create at least part of user interface 262. User interface module 254 causes communication unit 245 to output a signal over network 205 or another network. User interface device 129 detects the signal and processes the signal to generate user interface 262 [i.e. and a tenant identifier that was determined from the request and that identifies one of the plurality of tenants].”
3. wherein the first machine learning model was generated based on a first training data set associated with the tenant identifier and the second machine learning model was generated based on a second training data set associated with the tenant identifier;
(Jadon, col. 9: 33-37)
“Based on the request, ML system 138 may train each respective ML model in a plurality of ML models to generate (e.g., based on data stored in data store 145 or provided training data), a respective training-phase prediction in a plurality of training-phase predictions.”
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to modify Lee with Jadon. The motivation is to improve the multi-user system of Lee by incorporating Jadon’s user-specific training and selection logic to ensure that each request is processed by a model tuned to the requesting tenant’s data, thereby making it “easier for an administrator to select an appropriate ML model to generate a prediction” (Jadon, col. 3:66-67; col. 4:1).
Regarding claim 2 and analogous claims 8 and 15:
Lee and Jadon teach:
1. wherein the running includes: transmitting a first request to a first scoring service to run the first machine learning model with data related to the request to obtain a first result;
(Lee, ¶0106, Fig. 5)
“In the depicted example, a client has invoked four MLS APIs, API1 through API4, and four corresponding job objects J1 through J4 are created and placed in job queue 142 [i.e. wherein the running includes:].”
(Lee, ¶0107, Fig. 5)
“Full dependency is indicated in FIG. 5 by the parameter “dependsOnComplete” shown in the job objects—e.g., J2 is dependent on J1 completing execution, and J4 depends on J2 completing successfully [i.e. : transmitting a first request to a first scoring service to run the first machine learning model with data related to the request to obtain a first result;].”
2. receiving the first scoring result from the first scoring service;
(Lee, ¶0108)
“For example, in one implementation, in response to API1, the client may be provided with a job identifier for J1 [i.e. the first scoring service;], and that job identifier may be included as a parameter in API2 to indicate that the results of API1 are required to perform the operations corresponding to API2 [i.e. receiving the first scoring result].”
Examiner notes that, under BRI, the claimed “scoring” reads on completion of the task, consistent with the dictionary definition “to keep a record or account of by or as if notches on a tally.”
3. and transmitting a second request to a second scoring service to run the second machine learning model with at least the first scoring result to obtain the result.
(Lee, ¶0109)
“In the depicted embodiment, when J1 completes, (a) the client is notified and (b) J2 is scheduled for execution. As indicated by J2's dependsOnComplete parameter value, J2 depends on J1' s completion, and J2's execution could therefore not have been begun until t3, even if J2's processing plan were ready and J2's resource set had been available prior to t3 [i.e. and transmitting a second request to a second scoring service to run the second machine learning model with at least the first scoring result to obtain the result].”
Examiner notes that, under BRI, the claimed “scoring” reads on completion of the task, consistent with the dictionary definition “to keep a record or account of by or as if notches on a tally.”
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to modify Lee with Jadon. The motivation is the same as for claim 1.
Regarding claim 3 and analogous claims 9 and 15:
Lee and Jadon teach:
1. transmitting a first request to a first scoring service to run the first machine learning model with first data related to the request to obtain a first scoring result;
(Lee, ¶0106, Fig. 5)
“In the depicted example, a client has invoked four MLS APIs, API1 through API4, and four corresponding job objects J1 through J4 are created and placed in job queue 142 [i.e. transmitting a first request to a first scoring service to run the first machine learning model with first data related to the request to obtain a first scoring result;].”
Examiner notes that, under BRI, the claimed “scoring” reads on completion of the task, consistent with the dictionary definition “to keep a record or account of by or as if notches on a tally.”
2. transmitting a second request to a second scoring service to run the second machine learning model with second data related to the request to obtain a second scoring result;
(Lee, ¶0106, Fig. 5)
“In the depicted example, a client has invoked four MLS APIs, API1 through API4, and four corresponding job objects J1 through J4 are created and placed in job queue 142.”
Examiner notes that, under BRI, the claimed “scoring” reads on completion of the task, consistent with the dictionary definition “to keep a record or account of by or as if notches on a tally.”
3. receiving the first and the second scoring results from the first and second scoring services respectively;
(Lee, ¶0107)
“Full dependency is indicated in FIG. 5 by the parameter “dependsOnComplete” shown in the job objects—e.g., J2 is dependent on J1 completing execution, and J4 depends on J2 completing successfully. In the other type of dependency, the execution of one job Jp may be started as soon as some specified phase of another job Jq is completed [i.e. receiving the first and the second scoring results from the first and second scoring services respectively;].”
Examiner notes that, under BRI, the claimed “scoring” reads on completion of the task, consistent with the dictionary definition “to keep a record or account of by or as if notches on a tally.”
4. and combining the first and the second scoring results to obtain the scoring result.
(Lee, ¶0108)
“For example, in one implementation, in response to API1, the client may be provided with a job identifier for J1, and that job identifier may be included as a parameter in API2 to indicate that the results of API1 are required to perform the operations corresponding to API2 [i.e. and combining the first and the second scoring results to obtain the scoring result].”
Examiner notes that, under BRI, the claimed “scoring” reads on completion of the task, consistent with the dictionary definition “to keep a record or account of by or as if notches on a tally.”
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to modify Lee with Jadon. The motivation is the same as for claim 1.
Regarding claim 4 and analogous claims 11 and 18:
Lee and Jadon teach:
1. wherein the combining the first and the second scoring results includes aggregating the first and second scoring results.
(Lee, ¶0108)
“For example, in one implementation, in response to API1, the client may be provided with a job identifier for J1, and that job identifier may be included as a parameter in API2 to indicate that the results of API1 are required to perform the operations corresponding to API2 [i.e. wherein the combining the first and the second scoring results includes aggregating the first and second scoring results].”
Examiner notes that, under BRI, the claimed “scoring” reads on completion of the task, consistent with the dictionary definition “to keep a record or account of by or as if notches on a tally.”
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to modify Lee with Jadon. The motivation is the same as for claim 1.
Regarding claim 5 and analogous claims 12 and 19:
Lee and Jadon teach:
1. wherein prior to running the first machine learning model with first data related to the request performing: determining that the first machine learning model is not deployed;
(Lee, ¶0116)
“In some implementations a distinction may be drawn between aliases that are currently in production mode and those that are in internal-use or test mode, and the MLS may ensure that the underlying model is not deleted or un-mounted for an alias in production mode [i.e. wherein prior to running the first machine learning model with first data related to the request performing: determining that the first machine learning model is not deployed;].”
2. and responsive to determining that the first machine learning model is not deployed, retrieving the first machine learning model from a machine learning model datastore based on an identifier of the first machine learning model.
(Lee, ¶0116)
“After model developers 676 improve the accuracy and/or performance characteristics of a newer version of a model 630 relative to an older version for which an alias 640 has been created, they may switch the pointer of the alias [i.e. based on an identifier of the first machine learning model] so that it now points to the improved version [i.e. and responsive to determining that the first machine learning model is not deployed, retrieving the first machine learning model from a machine learning model datastore].”
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to modify Lee with Jadon. The motivation is the same as for claim 1.
Claims 6-7, 13-14, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over US Pre-Grant Publication 2020/0050968 (Lee et al.; Lee) in view of US Patent 11,501,190 (Jadon et al.; Jadon), and further in view of US Pre-Grant Publication 2019/0236191 (Petschulat et al.; Petschulat).
Regarding claim 6 and analogous claims 13 and 20:
Neither Lee nor Jadon explicitly teaches:
1. wherein the tenant application is a customer relationship management (CRM) application and the data related to the request includes one or more fields of a record that is identified in the request.
Petschulat teaches:
1. wherein the tenant application is a customer relationship management (CRM) application and the data related to the request includes one or more fields of a record that is identified in the request.
(Petschulat, ¶0014)
“As an example, one tenant might be a company that employs a sales force where each salesperson uses a client device 110 to manage their sales process [i.e. wherein the tenant application]. Thus, a user might maintain contact data, leads data, customer follow-up data, performance data, goals and progress data, etc., all applicable to that user's personal sales process [i.e. a record that is identified in the request].”
(Petschulat, ¶0015)
“In one embodiment, online system 100 implements a web-based customer relationship management (CRM) system [i.e. is a customer relationship management (CRM) application].”
(Petschulat, ¶0021)
“The request processing module 130 stores the receive data in one or more data tables. A data table includes one or more data categories that are logically arranged as columns or fields. Each row or record of a data table includes an instance of data for each category defined by the fields [i.e. and the data related to the request includes one or more fields].”
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to modify Lee and Jadon with Petschulat. The motivation is to apply the well-known machine learning concepts of Lee to a CRM application, as “Online systems often store large amount of data for enterprises. An online system may store data for a single enterprise or for multiple enterprises. For example, a multi-tenant system stores data for multiple tenants, each tenant potentially representing an enterprise. The data stored by an online system for an enterprise is typically high-dimensional data… The high-dimensionality of the data poses a unique challenge for online systems in managing and preparing data for users” (Petschulat, ¶0002-¶0003).
Regarding claim 7 and analogous claims 14 and 21:
Lee teaches:
1. [wherein the scoring result includes predicted information for the record] according to the first and second machine learning models.
(Lee, ¶0086)
“In some embodiments, some machine learning models may be created and trained, e.g., by a group of model developers or data scientists using the MLS APIs, and then published for use by another community of users.”
Lee and Jadon do not explicitly teach:
1. wherein the scoring result includes predicted information for the record [according to the first and second machine learning models].
Petschulat teaches:
1. wherein the scoring result includes predicted information for the record according to the first and second machine learning models.
(Petschulat, ¶0041)
“For example, a first feature represents that dimension A is accessed more frequently by a user than dimension B, a second feature represents that dimension B is accessed by scripts of higher priority than those accessing dimension A [i.e. wherein the scoring result], and the second feature is historically is more important to determining a column's relevance than the first feature, then the second feature is associated with a higher weight than the first feature [i.e. includes predicted information for the record].”
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to modify Lee and Jadon with Petschulat. The motivation is the same as for claim 6.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL JUSTIN BREENE whose telephone number is (571) 272-6320. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J Huntley, can be reached at 303-297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center.