Prosecution Insights
Last updated: April 19, 2026
Application No. 18/216,042

ARTIFICIAL-INTELLIGENCE MODELING UTILITY SYSTEM

Non-Final OA: §101, §103
Filed: Jun 29, 2023
Examiner: RODEN, DONALD THOMAS
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)
Grant Probability: 0% (At Risk)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 0%

Examiner Intelligence

Grants only 0% of cases.
Career Allow Rate: 0% (0 granted / 2 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift across resolved cases with interview)
Avg Prosecution: 3y 3m typical timeline; 25 applications currently pending
Total Applications: 27 across all art units (career history)

Statute-Specific Performance

§101: 36.5% (-3.5% vs TC avg)
§103: 44.1% (+4.1% vs TC avg)
§102: 6.5% (-33.5% vs TC avg)
§112: 7.7% (-32.3% vs TC avg)

Deltas are measured against the Tech Center average estimate. Based on career data from 2 resolved cases.
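One reading of the table's arithmetic (an assumption about how the dashboard computes its deltas, not stated on the page): each "vs TC avg" figure is the examiner's per-statute rate minus the Tech Center average estimate, so the implied baselines can be recovered by rearranging.

```python
# Examiner per-statute allowance rates and "vs TC avg" deltas from the table.
rates = {"101": 36.5, "103": 44.1, "102": 6.5, "112": 7.7}
deltas = {"101": -3.5, "103": 4.1, "102": -33.5, "112": -32.3}

# If delta = rate - tc_avg, then tc_avg = rate - delta.
implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
```

Under this reading, every statute implies the same 40.0% baseline, consistent with a single Tech Center average estimate being used across the chart.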

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is made non-final. This action is in response to the claims filed on June 29, 2023. Claims 1-20 are pending in the case and have been examined; claims 1-20 are rejected.

Claim Objections

Claim 1 is objected to because of the following informalities: the claim recites “an experience module” but later recites only “experience” without “module”. To maintain consistency, the claim should recite “experience module”. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

To determine if a claim is directed to patent-ineligible subject matter, the Court has guided the Office to apply the Alice/Mayo test, which requires:

Step 1: Determining if the claim falls within a statutory category.

Step 2A: Determining if the claim is directed to a patent-ineligible judicial exception consisting of a law of nature, a natural phenomenon, or an abstract idea. Step 2A is a two-prong inquiry (MPEP 2106.04(II)(A)). Under the first prong, examiners evaluate whether a law of nature, natural phenomenon, or abstract idea is set forth or described in the claim. Abstract ideas include mathematical concepts, certain methods of organizing human activity, and mental processes (MPEP 2106.04(a)(2)). The second prong is an inquiry into whether the claim integrates a judicial exception into a practical application (MPEP 2106.04(d)).

Step 2B: If the claim is directed to a judicial exception, determining if the claim recites limitations or elements that amount to significantly more than the judicial exception. (See MPEP 2106.)
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-10 are directed to a method (a process), claims 11-15 are directed to a system comprising one or more processors (a machine), and claims 16-20 are directed to a non-transitory machine-readable storage medium (a manufacture). Therefore, claims 1-20 are each directed to a process, machine, manufacture, or composition of matter.

Regarding claim 1

Step 2A, Prong 1: Claim 1 recites the following mental processes, each of which, under the broadest reasonable interpretation, covers performance of the limitation in the mind (including observation, evaluation, judgment, and opinion) or with the aid of pencil and paper, but for the recitation of generic computer components (e.g., “modeling manager”, “experience module”, “machine-learning models”, and “user interface”) [see MPEP 2106.04(a)(2)(III)]:

“configuring … for the experiment based on the parameter values entered on the first UI” (e.g., a human can enter values into a spreadsheet)

“selecting, … one of the configured ML models for providing a response to the request” (e.g., a human can select a completed process based on desired results)

Accordingly, at Step 2A, prong one, the claim recites an abstract idea.

Step 2A, Prong 2: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of “modeling manager”, “experience module”, “machine-learning models”, and “user interface”, which are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components (see MPEP 2106.05(f)). The Examiner notes that this language is used throughout the claim limitations, and each claim reciting the same language is rejected on the same basis.
Regarding the “receiving, by a modeling manager, a schema from an experience module, the experience module implementing one or more features of an online service, the schema being a data structure that defines variables for an experiment, the modeling manager managing a plurality of machine-learning (ML) models” limitation: this additional element is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e., pre-solution activity of inputting data for use in the claimed process (see MPEP 2106.05(g)). The Examiner notes that “the experience module implementing one or more features of an online service”, “the schema being a data structure that defines variables for an experiment”, and “the modeling manager managing a plurality of machine-learning (ML) models” merely define where the data is derived from, define the data structure being received for the process, and state that there are multiple machine-learning models; these are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components (see MPEP 2106.05(f)).

Regarding the “providing, by the modeling manager, a first user interface (UI) based on the schema for entering parameter values for the experiment” limitation: it is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components (see MPEP 2106.05(f)). The Examiner notes that this is used to collect data for the machine-learning process and could be interpreted as data gathering (MPEP 2106.05(g)).

Regarding the “initializing the experiment” limitation: it is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components (see MPEP 2106.05(f)).
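For orientation, the “schema” quoted above is characterized as a data structure that defines variables for an experiment. A minimal, hypothetical sketch of that concept follows; the class, field names, and validation helper are assumptions for illustration only, not taken from the application or the claims.

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentSchema:
    """Hypothetical data structure defining the variables for an experiment."""
    experiment_name: str
    # Variable name -> expected type; such a mapping could drive a UI
    # for entering parameter values.
    variables: dict = field(default_factory=dict)

    def validate(self, parameter_values: dict) -> bool:
        """Check that entered parameter values match the schema's variables."""
        return all(
            name in parameter_values
            and isinstance(parameter_values[name], expected_type)
            for name, expected_type in self.variables.items()
        )
```

A module could hand such an object to a manager component, which would then know which parameter values to collect and how to check them.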
Regarding the “during the experiment, receiving, by the modeling manager, a request from the experience module for data associated with the experiment” and “getting the response from the selected ML model based on input provided to the ML model based on the request” limitations: these additional elements are recited at a high level of generality and amount to extra-solution activity of requesting data during a process and outputting data, i.e., post-solution activity of data gathering for use in the claimed process (see MPEP 2106.05(g)).

Regarding the “getting the response from the selected ML model based on input provided to the ML model based on the request” limitation: this additional element is recited at a high level of generality and amounts to extra-solution activity of transmitting machine-learning results, i.e., post-solution activity of data outputting (see MPEP 2106.05(g)).

Regarding the “sending, by the modeling manager, the response to the experience” and “providing a second UI for presenting results of the experiment” limitations: they are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components (see MPEP 2106.05(f)).

Accordingly, at Step 2A, prong two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements of “modeling manager”, “experience module”, “machine-learning models”, and “user interface” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)).
Regarding the “receiving, by a modeling manager, a schema from an experience module, the experience module implementing one or more features of an online service, the schema being a data structure that defines variables for an experiment, the modeling manager managing a plurality of machine-learning (ML) models” limitation: this additional element is recited at a high level of generality and amounts to extra-solution (pre-solution) activity of inputting data.

Regarding the “during the experiment, receiving, by the modeling manager, a request from the experience module for data associated with the experiment” and “getting the response from the selected ML model based on input provided to the ML model based on the request” limitations: these additional elements are recited at a high level of generality and amount to extra-solution (post-solution) activity of data gathering.

Regarding the “getting the response from the selected ML model based on input provided to the ML model based on the request” limitation: this additional element is recited at a high level of generality and amounts to extra-solution (post-solution) activity of data outputting.

The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II): “receiving or transmitting data over a network”, “electronic record keeping”, and “storing and retrieving information in memory”).

Regarding the “providing, by the modeling manager, a first user interface (UI) based on the schema for entering parameter values for the experiment” limitation: it is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components (see MPEP 2106.05(f)).
Regarding the “initializing the experiment” limitation: it is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components (see MPEP 2106.05(f)). Regarding the “sending, by the modeling manager, the response to the experience” and “providing a second UI for presenting results of the experiment” limitations: they are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components (see MPEP 2106.05(f)).

Accordingly, at Step 2B, the additional elements individually or in combination do not amount to significantly more than the judicial exception.

Regarding claim 2

Step 2A, Prong 1: Claim 2 does not recite an abstract idea, but is directed to the abstract idea identified in its parent claim(s). Accordingly, at Step 2A, prong one, the claim recites an abstract idea.

Step 2A, Prong 2: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element of “wherein the modeling manager comprises a common configuration and common infrastructures for managing the plurality of ML models”, which is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components (see MPEP 2106.05(f)). Accordingly, at Step 2A, prong two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, the additional element of “wherein the modeling manager comprises a common configuration and common infrastructures for managing the plurality of ML models” is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Accordingly, at Step 2B, the additional element individually or in combination does not amount to significantly more than the judicial exception.

Regarding claim 3

Step 2A, Prong 1: Claim 3 does not recite an abstract idea, but is directed to the abstract idea identified in its parent claim(s). Accordingly, at Step 2A, prong one, the claim recites an abstract idea.

Step 2A, Prong 2: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element of “assigning a percentage of requests served by each of the models during the experiment”, which is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Accordingly, at Step 2A, prong two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element of “assigning a percentage of requests served by each of the models during the experiment” is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). In particular, it merely describes how the data is labeled for use in the claimed process.
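The “assigning a percentage of requests served by each of the models” limitation describes weighted traffic allocation of the kind common in A/B testing. A minimal sketch of that concept follows; the function and names are hypothetical, not drawn from the claims or the cited references.

```python
import random

def pick_model(allocations, rng=random.random):
    """Route one request given [(model_name, percentage), ...] summing to 100.

    Draws a point in [0, 100) and walks the cumulative percentage
    boundaries until the point falls inside a model's slice.
    """
    r = rng() * 100
    cumulative = 0.0
    for model, pct in allocations:
        cumulative += pct
        if r < cumulative:
            return model
    return allocations[-1][0]  # guard against floating-point rounding
```

For example, with `[("A", 90.0), ("B", 10.0)]`, roughly 90% of requests would be served by model A and 10% by model B over the course of the experiment.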
Accordingly, at Step 2B, the additional element individually or in combination does not amount to significantly more than the judicial exception.

Regarding claim 4

Step 2A, Prong 1: Claim 4 does not recite an abstract idea, but is directed to the abstract idea identified in its parent claim(s). Accordingly, at Step 2A, prong one, the claim recites an abstract idea.

Step 2A, Prong 2: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of “wherein the experiment is for defining text for a notification to be sent to a user, wherein each of the configured ML models provides the text for the notification based on user identification (ID) and segment ID”, which are recited at a high level of generality such that they amount to no more than generally linking the use of the abstract idea to a particular technological environment or field of use using a generic computer component (see MPEP 2106.05(h)). In particular, this merely describes how the data is labeled for use in the claimed process. Accordingly, at Step 2A, prong two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements of “wherein the experiment is for defining text for a notification to be sent to a user, wherein each of the configured ML models provides the text for the notification based on user identification (ID) and segment ID” are recited at a high level of generality such that they amount to no more than generally linking the use of the abstract idea to a particular technological environment or field of use using a generic computer component (see MPEP 2106.05(h)). In particular, this merely describes how the data is labeled for use in the claimed process.
Accordingly, at Step 2B, the additional element individually or in combination does not amount to significantly more than the judicial exception.

Regarding claim 5

Step 2A, Prong 1: Claim 5 does not recite an abstract idea, but is directed to the abstract idea identified in its parent claim(s). Accordingly, at Step 2A, prong one, the claim recites an abstract idea.

Step 2A, Prong 2: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of “wherein the experiment is for providing multiple options for text on a webpage, wherein the schema defines a control value and one or more variants as the multiple options for the text on the webpage”, which are recited at a high level of generality such that they amount to no more than generally linking the use of the abstract idea to a particular technological environment or field of use using a generic computer component (see MPEP 2106.05(h)). In particular, this merely describes how the data is labeled for use in the claimed process. Accordingly, at Step 2A, prong two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements of “wherein the experiment is for providing multiple options for text on a webpage, wherein the schema defines a control value and one or more variants as the multiple options for the text on the webpage” are recited at a high level of generality such that they amount to no more than generally linking the use of the abstract idea to a particular technological environment or field of use using a generic computer component (see MPEP 2106.05(h)). In particular, this merely describes how the data is labeled for use in the claimed process.
Accordingly, at Step 2B, the additional element individually or in combination does not amount to significantly more than the judicial exception.

Regarding claim 6

Step 2A, Prong 1: Claim 6 does not recite an abstract idea, but is directed to the abstract idea identified in its parent claim(s). Accordingly, at Step 2A, prong one, the claim recites an abstract idea.

Step 2A, Prong 2: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element of “wherein the first UI is provided as a browser extension that provides a toolbar presented with a user feed webpage”, which is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Accordingly, at Step 2A, prong two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element of “wherein the first UI is provided as a browser extension that provides a toolbar presented with a user feed webpage” is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Accordingly, at Step 2B, the additional element individually or in combination does not amount to significantly more than the judicial exception.

Regarding claim 7

Step 2A, Prong 1: Claim 7 does not recite an abstract idea, but is directed to the abstract idea identified in its parent claim(s). Accordingly, at Step 2A, prong one, the claim recites an abstract idea.

Step 2A, Prong 2: The judicial exception is not integrated into a practical application.
In particular, the claim recites the additional element of “notifying an experiment tracking system of a configuration for the experiment, wherein the second UI is provided by the experiment tracking system”, which is recited at a high level of generality such that it amounts to extra-solution activity of reporting/logging/sending data to a system and presenting results, i.e., post-solution activity of data outputting for use in the claimed process (see MPEP 2106.05(g)). Accordingly, at Step 2A, prong two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, regarding the “notifying an experiment tracking system of a configuration for the experiment, wherein the second UI is provided by the experiment tracking system” limitation, the additional element is recited at a high level of generality and amounts to extra-solution (post-solution) activity of data outputting. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II): “receiving or transmitting data over a network”, “electronic record keeping”, and “storing and retrieving information in memory”). Accordingly, at Step 2B, the additional element individually or in combination does not amount to significantly more than the judicial exception.

Regarding claim 8

Step 2A, Prong 1: Claim 8 does not recite an abstract idea, but is directed to the abstract idea identified in its parent claim(s). Accordingly, at Step 2A, prong one, the claim recites an abstract idea.

Step 2A, Prong 2: The judicial exception is not integrated into a practical application.
In particular, the claim recites the additional element of “wherein the request comprises a user identifier (ID) of a user associated with a communication being sent to the user”, which is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Accordingly, at Step 2A, prong two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element of “wherein the request comprises a user identifier (ID) of a user associated with a communication being sent to the user” is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Accordingly, at Step 2B, the additional element individually or in combination does not amount to significantly more than the judicial exception.

Regarding claim 9

Step 2A, Prong 1: Claim 9 does not recite an abstract idea, but is directed to the abstract idea identified in its parent claim(s). Accordingly, at Step 2A, prong one, the claim recites an abstract idea.

Step 2A, Prong 2: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element of “wherein the request comprises a segment (ID) for a segment of users”, which is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Accordingly, at Step 2A, prong two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.
Step 2B: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element of “wherein the request comprises a segment (ID) for a segment of users” is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Accordingly, at Step 2B, the additional element individually or in combination does not amount to significantly more than the judicial exception.

Regarding claim 10

Step 2A, Prong 1: Claim 10 does not recite an abstract idea, but is directed to the abstract idea identified in its parent claim(s). Accordingly, at Step 2A, prong one, the claim recites an abstract idea.

Step 2A, Prong 2: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of “wherein the modeling manager manages training of the plurality of ML models, wherein the plurality of ML models is available to a plurality of experience modules”, which are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Accordingly, at Step 2A, prong two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, the additional element of “wherein the modeling manager manages training of the plurality of ML models, wherein the plurality of ML models is available to a plurality of experience modules” is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Accordingly, at Step 2B, the additional element individually or in combination does not amount to significantly more than the judicial exception.

Regarding claims 11-15 and 16-20

Claims 11-15 and 16-20 recite a system comprising one or more processors and a non-transitory machine-readable storage medium, respectively. The addition of generic computer components executing instructions is insufficient to render the claimed subject matter eligible, for the same reasons as described above. Specifically:

Claim 11 corresponds to claim 1, with the added recitation of generic computer components to execute instructions to perform the same abstract method steps of claim 1.
Claim 12 corresponds to claim 2, with the added recitation of generic computer components to execute instructions to perform the same abstract method steps of claim 2.
Claim 13 corresponds to claim 3, with the added recitation of generic computer components to execute instructions to perform the same abstract method steps of claim 3.
Claim 14 corresponds to claim 4, with the added recitation of generic computer components to execute instructions to perform the same abstract method steps of claim 4.
Claim 15 corresponds to claim 5, with the added recitation of generic computer components to execute instructions to perform the same abstract method steps of claim 5.
Claim 16 corresponds to claim 1, with the added recitation of generic computer components to execute instructions to perform the same abstract method steps of claim 1.
Claim 17 corresponds to claim 2, with the added recitation of generic computer components to execute instructions to perform the same abstract method steps of claim 2.
Claim 18 corresponds to claim 3, with the added recitation of generic computer components to execute instructions to perform the same abstract method steps of claim 3.
Claim 19 corresponds to claim 4, with the added recitation of generic computer components to execute instructions to perform the same abstract method steps of claim 4.
Claim 20 corresponds to claim 5, with the added recitation of generic computer components to execute instructions to perform the same abstract method steps of claim 5.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 and 7-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bowers et al. (US 10417577 B2, referred to as Bowers) in view of Kohavi et al. (“Online Controlled Experiments at Large Scale”, referred to as Kohavi).

Regarding claim 1, Bowers teaches a computer-implemented method comprising: receiving, by a modeling manager, a schema from an experience module, the experience module implementing one or more features of an online service, the schema being a data structure that defines variables for an experiment, the modeling manager managing a plurality of machine-learning (ML) models (Col. 2, lines 65-67, cont.
Col. 3, lines 1-37: Describes an input schema and an output schema for a workflow/experiment. Col. 4, lines 4-35: Describes an application service system providing application services via an API/web server/mobile service server, processing client requests in real time. Col. 4, lines 63-67, cont. Col. 5, lines 1-14: Describes an experiment management engine that defines experiments/workflows and parameters. These correspond to receiving/using a schema for an experiment in an online-service context, as the application service system (experience module) provides online application services via API/web/mobile servers and handles real-time client requests (live traffic). The machine learning system supports experiments defined as workflows, where workflows are configured to process input datasets consistent with an input schema and generate outputs consistent with an output schema. The schema is a data structure defining the variables/fields of experiment inputs (and outputs). The workflows operate on “input data” and “output data from the machine learning models”, which manages/uses the ML model(s) within the experiment framework.)

Although Bowers teaches receiving, by a modeling manager, a schema…, the experience module implementing one or more features of an online service, the schema being a data structure that defines variables for an experiment, the modeling manager managing a plurality of machine-learning (ML) models, Kohavi teaches receiving... from an experience module (Page 1173, Section 4.1: Describes an online experimentation system for an online service in which experiments are driven by configuration settings: “All systems in Bing are driven from configuration and an experiment is implemented as a change to the default configuration”, and further describes that “A configuration API and tool enables experimenters to easily create the setting defining an experiment”.
The setting defining an experiment corresponds to a schema/data structure defining variables for an experiment. The online service components provide user-facing features to implement multiple features online, and the experiment management system that consumes the configuration setting receives the schema for execution/management of the experiment.)

It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to have combined Bowers' machine-learning experiment management framework with Kohavi's configuration-driven online experimentation. Doing so would have enabled the system to create a scalable, modular definition of experiment variables for online-service features and to support efficient creation and execution of experiments using standardized configuration interfaces.

Bowers in view of Kohavi further teaches providing, by the modeling manager, a first user interface (UI) based on the schema for entering parameter values for the experiment (Col. 10, lines 64-67, cont. Col. 11, lines 1-39 and FIG. 2: Describes a “definition interface” where the system presents a website to a user to enter parameters for an experiment, which corresponds to giving a user a user interface for the current schema.); configuring one or more ML models from the plurality of ML models for the experiment based on the parameter values entered on the first UI (Bowers Col. 3, lines 19-37: Describes a workflow that includes data processing operators which perform training and evaluation of models. Col. 10, lines 49-63: Describes that the system manages a repository where it stores “one or more previously executed or created or currently running experiments”. Col. 12, lines 35-67, cont. Col. 13, lines 1-4: Describes that the execution scheduler executes workflow runs defined by parameters.
These correspond to parameter values being entered from a UI, which are used to configure workflow execution, which includes machine learning models.); initializing the experiment (Bowers Col. 12, lines 35-67 cont. Col. 13, lines 1-4: Describes scheduling the workflow run, and starting the execution engine to run those workflows.); during the experiment, receiving, by the modeling manager, a request from the experience module for data associated with the experiment (Kohavi Page 1173 Section 4.1: Describes “As a request is received from a browser, Bing’s frontend servers assign each request to multiple flights” and “Each layer in the system logs information, including the request’s flight assignments, to system logs that are then processed and used for offline analysis”. These show that the online service receives requests, the assignment system routes those requests, and the experiment system then processes those requests. It further details that “All systems in Bing are driven from configuration and an experiment is implemented as a change to the default configuration”, which shows runtime interaction between the service and the experiment system.); selecting, by the modeling manager, one of the configured ML models for providing a response to the request (Bowers Col. 7, lines 15-49 and Col. 12, lines 47-67 cont. Col. 13, lines 1-28: Describes that workflows include data processing operators implementing machine learning functionality and that the workflow execution engine executes workflow runs. The scheduler determines which configured workflow run is executed, and workflows include machine learning operators. This corresponds to selecting one of the configured machine learning models for execution in response to a request.); getting the response from the selected ML model based on input provided to the ML model based on the request (Bowers Col. 2, lines 65-67 cont. Col.
3, lines 1-37: Describes that workflows process one or more outputs and that machine learning models generate output data. This corresponds to getting a response from a selected machine learning model based on input provided to that model.); sending, by the modeling manager, the response to the experience (Kohavi Page 1173 Section 4.1: Describes an online service architecture in which frontend servers receive browser requests and route those requests through an experimentation system. Because the experimentation system modifies system behavior based on experiment configuration, the processed result is returned to the frontend for delivery to the user.); and providing a second UI for presenting results of the experiment (Bowers Col. 3, lines 59-67 cont. Col. 4, lines 1-3 and FIG. 7B-F, Col. 15, lines 8-58: Describes generating and presenting experiment results via automatically generated visualizations displayed through a user interface. These visualizations are presented through a user interface distinct from the parameter definition interface. This corresponds to providing a second UI for presenting results of the experiment.).

Regarding claim 2, Bowers in view of Kohavi teaches, the method as recited in claim 1. Bowers further teaches, wherein the modeling manager comprises a common configuration and common infrastructures for managing the plurality of ML models (Col. 1, lines 48-67 cont. Col. 2, lines 1-18: Shows centralized infrastructure used to manage all experiments/workflows. It states that workflows are for creating/using machine learning models.; Col. 15, lines 23-40: Shows that the workflows operate in a shared infrastructure.; Col. 16, lines 19-33: Describes that workflows are for creating/using machine learning models. The experiment management engine manages experiments and workflows within a machine learning system. The workflows are execution pipelines used to create, modify, evaluate, validate, and/or utilize one or more machine learning models.
Experiments and workflows are managed via a common user interface and/or application programming interface executed on dedicated computer tiers. The experiment management interface queries a workflow repository and experiment repository shared across workflow runs.).

Regarding claim 3, Bowers in view of Kohavi teaches, the method as recited in claim 1. Kohavi further teaches, wherein configuring the one or more models comprises: assigning a percentage of requests served by each of the models during the experiment (Pages 1168-1169, Sections 1 and 1.1, and Page 1173, Section 4.1: Describes configuring an experiment by allocating traffic among variants. In controlled experiments, “users are randomly split between the variants”, and Kohavi provides “an experiment utilizing 20% of eligible users (10% control, 10% treatment)”, which assigns percentages to the respective variants. Kohavi further describes request-level routing, where “As a request is received from a browser, Bing’s frontend servers assign each request to multiple flights”, where a flight is a variant to which a user/request is exposed. The variants include backend components such as “relevance rankers” (model-type components) that are experimented with, configuring models/variants by assigning a percentage of requests/traffic served by each model during the experiment.).

Regarding claim 4, Bowers in view of Kohavi teaches, the method as recited in claim 1. Bowers in view of Kohavi, further teaches, wherein the experiment is for defining text for a notification to be sent to a user (Kohavi Page 1169, Section 1.1: Describes running online controlled experiments where users are split between variants that provide different user-facing content, including textual changes such as “improving search result captions” and different ad layouts between control and treatment. This corresponds to an experiment used to define/modify text presented to users.
Kohavi does not expressly teach a notification; rather, it teaches defining/altering user-facing text content generally.), wherein each of the configured ML models provides the text for the notification (Bowers, Col. 2, lines 48-67 cont. Col. 13, lines 1-18: Describes that workflows within an experiment “utilize one or more machine learning models”, including “post-processing of output data from the machine learning models”, and that experiments/workflows process input datasets into outputs. This corresponds to configured machine learning models producing outputs during an experiment. Where the experiment output is text content, each configured machine learning model provides the text output for the user-facing content) based on user identification (ID) and segment ID (Kohavi Page 1173, Section 4.1, and Pages 1174-1175, Section 5.1: Describes that experiment assignment is performed consistently using a hash of an “anonymous user id” and further describes analyzing impacts on “specific user segments”. This corresponds to using user identification in assignment and using segmentation in the experimentation context, where segmentation is used for analysis/handling of cohorts.). It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to have combined Bowers' experiment management engine and machine learning infrastructure with Kohavi's online controlled experimentation. Doing so would enable the system's workflows/models to generate experiment outputs that are served to users under controlled traffic assignments, improving the ability to run, manage, and analyze large-scale experiments in an online service.

Regarding claim 5, Bowers in view of Kohavi teaches, the method as recited in claim 1.
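For illustration, the traffic-assignment mechanism that the claim 3 and claim 4 mappings attribute to Kohavi (a pseudo-random hash of an anonymous user id used to split requests among variants by fixed percentages, e.g., 10% control and 10% treatment) can be sketched as follows. This is a hypothetical sketch only; the function and parameter names are invented here and do not appear in Bowers, Kohavi, or the claims.

```python
import hashlib

def assign_variant(user_id, allocation):
    """Deterministically map an anonymous user id to an experiment variant.

    `allocation` maps each variant name to its share of traffic
    (fractions summing to at most 1.0); users whose hash falls outside
    the allocated range receive the default experience (None).
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    cumulative = 0.0
    for variant, share in allocation.items():
        cumulative += share
        if bucket < cumulative:
            return variant
    return None  # user is not enrolled in the experiment

# Example: an experiment using 20% of eligible users
# (10% control, 10% treatment), per the passage quoted above.
allocation = {"control": 0.10, "treatment": 0.10}
variant = assign_variant("anonymous-user-42", allocation)
```

Because the assignment depends only on a hash of the user id, the same user is routed to the same variant on every request, which is the consistency property the mapping above draws from Kohavi's description.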
Bowers in view of Kohavi, further teaches, wherein the experiment is for providing multiple options for text on a webpage (Kohavi, Page 1169 Section 1.1: Describes running online controlled experiments in which different variants present different user-visible content on webpages, including textual modifications such as improving search result captions. Users are randomly split between variants that provide different webpage layouts or textual content.), wherein the schema defines a control value and one or more variants as the multiple options for the text on the webpage (Kohavi, Pages 1168-1169, Sections 1 and 1.1: Describes controlled experiments that include a control and one or more treatment variants.; Bowers, Col. 3, lines 19-37: Describes defining experiment parameters via structured schemas associated with workflows, where input datasets are processed consistent with an input schema.). It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to have combined Bowers' schema-based framework with Kohavi's webpage experiments. Doing so would enable the system to define a control value and one or more variants as multiple options for webpage text.

Regarding claim 7, Bowers in view of Kohavi teaches, the method as recited in claim 1. Bowers in view of Kohavi, further teaches, wherein initializing the experiment comprises: notifying an experiment tracking system of a configuration for the experiment (Bowers, Col. 4, lines 63-67 cont. Col. 5, lines 1-40 and Col. 6, lines 28-50: Describes an experiment management engine that manages experiments and workflows within a machine learning system. A definition interface enables a user to define parameters associated with a new workflow run, and experiment information is stored in repositories accessible by the experiment management interface.
When a workflow run is created and initiated, the experiment configuration/definition is communicated to and managed by the experiment management engine.), wherein the second UI is provided by the experiment tracking system (Kohavi, Page 1173, Section 4.1: Describes an experiment management system (Control Tower) and an offline analysis pipeline that generates scorecards summarizing experiment results. The scorecards constitute a results presentation interface (a second UI) provided by the experiment tracking/management/analysis system.). It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to have combined Bowers' experiment management engine with Kohavi's experiment tracking and results reporting. Doing so would enable the system to initialize an experiment by registering/notifying an experiment tracking system of the experiment's configuration and to provide a results interface via the experiment tracking system to present experiment outcomes.

Regarding claim 8, Bowers in view of Kohavi teaches, the method as recited in claim 1. Bowers in view of Kohavi, further teaches, wherein the request comprises a user identifier (ID) of a user associated with a communication being sent to the user (Kohavi Page 1169, Section 1.1, and Page 1173, Section 4.1: Describes that in controlled online experiments, a pseudo-random hash of an “anonymous user id” is used to ensure consistent assignment of users to experimental variants. Requests are received from browsers in the online system, and a request processed during an experiment includes a user identifier used for assignment to experimental variants. Online controlled experiments involve processing browser requests and delivering user-facing content based on experimental assignment.
Because the system processes requests associated with a particular user identifier and delivers corresponding content to that user, the request comprises a user ID associated with a communication (the delivered content) sent to the user.).

Regarding claim 9, Bowers in view of Kohavi teaches, the method as recited in claim 1. Bowers in view of Kohavi, further teaches, wherein the request comprises a segment (ID) for a segment of users (Kohavi Page 1173, Section 4.1, and Pages 1174-1175, Section 5.1: Describes conducting online controlled experiments that analyze and evaluate impacts on specific user segments, disclosing segmentation of users within the experimentation framework. The system processes requests associated with users and assigns users to experimental variants. Because segmentation is used in the experimentation system, this corresponds to requests processed within the experiment including information identifying the user's segment, enabling segment-based assignment and analysis.).

Regarding claim 10, Bowers in view of Kohavi teaches, the method as recited in claim 1. Bowers in view of Kohavi, further teaches, wherein the modeling manager manages training of the plurality of ML models (Bowers, Col. 2, lines 65-67 cont. Col. 3, lines 1-18, and Col. 6, lines 28-58: Describes that workflows within the machine learning system create, modify, evaluate, validate, and/or utilize one or more machine learning models, and that the experiment management engine manages such workflows. Because creating and modifying machine learning models includes training operations, this corresponds to the modeling manager managing training of a plurality of machine learning models.), wherein the plurality of ML models is available to a plurality of experience modules (Bowers, Col. 5, lines 37-67 cont. Col.
6, lines 1-27: Describes an application service system including multiple application services (application services 102A and 102B) that interact with a centralized machine learning system. The machine learning system includes workflows that create and utilize multiple machine learning models, and because the machine learning system operates within and supports multiple application services, the plurality of machine learning models is available to a plurality of experience modules (application services).).

Regarding claims 11-15, which recite substantially the same limitations as claims 1-5 and further recite a system comprising: a memory comprising instructions; and one or more computer processors (Bowers Col. 13, lines 52-67 cont. Col. 16, lines 1-15: Describes that the system can comprise generic computer hardware, including processors, and software to execute the instructions of their methods.) to perform the method steps of claims 1-5, respectively: these claims are rejected for the same reasons as described above.

Regarding claims 16-20, which recite substantially the same limitations as claims 1-5 and further recite a non-transitory machine-readable storage medium including instructions (Bowers Col. 13, lines 52-67 cont. Col. 16, lines 1-15: Describes that the system can comprise generic computer hardware, including “volatile memory may be considered “non-transitory” in the sense that it is not a transitory signal”, and software to execute the instructions of their methods.) to perform the method steps of claims 1-5, respectively: these claims are rejected for the same reasons as described above.

Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bowers et al. (US 10417577 B2, referred to as Bowers), in view of Kohavi et al. (“Online Controlled Experiments at Large Scale”, referred to as Kohavi), in view of Mozilla Developer Network (“Browser actions”, referred to as MDN).
Regarding claim 6, Bowers in view of Kohavi teaches, the method as recited in claim 1. Bowers in view of Kohavi, further teaches, wherein the first UI is provided as a browser extension that provides a toolbar presented with a user feed webpage (Bowers, Col. 2, lines 48-67 cont. Col. 3, lines 1-18: Describes managing experiments and workflows via a user interface executed within an online system environment.; Kohavi, Page 1173, Section 4.1: Describes online controlled experimentation systems operating within web-based frontend architectures. These teach providing the first user interface with a toolbar presented with a user feed webpage.). Although Bowers in view of Kohavi teaches a first UI is provided ... that provides a toolbar presented with a user feed webpage, they do not teach providing the UI as a browser extension. MDN teaches, as a browser extension (Describes “A browser action is a button you can add to the browser toolbar. Users can click the button to interact with your extension”. The browser action is a toolbar button presented in the browser UI while webpages are displayed, so the toolbar is presented with a user feed webpage when the user navigates to such a webpage in the browser.). It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to have combined Bowers' configuration interface with MDN's browser extension toolbar. Doing so would have enabled the system to improve accessibility and usability by allowing the user to invoke the UI while viewing a target webpage without modifying the underlying webpage or requiring a separate application window.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
See attached PTO-892 for additional art including:
US 11710076 B2: experiment creation input
US 20220414548 A1: runtime serving
US 10522002 B1: machine learning algorithms

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONALD T RODEN whose telephone number is (571) 272-6441. The examiner can normally be reached Mon-Thur 8:00-5:00 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Omar Fernandez Rivas, can be reached at (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/D.T.R./ Examiner, Art Unit 2128
/OMAR F FERNANDEZ RIVAS/ Supervisory Patent Examiner, Art Unit 2128
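The "schema" at the center of the claim 1 and claim 5 mappings in the office action above is characterized as a data structure that defines an experiment's variables, with claim 5 adding a control value and one or more variants as the multiple options for webpage text. A minimal sketch of such a structure follows; it is purely illustrative, and every name in it (ExperimentSchema, text_options, the field names) is invented here rather than drawn from the claims, Bowers, or Kohavi.

```python
from dataclasses import dataclass

@dataclass
class ExperimentSchema:
    """Hypothetical data structure defining an experiment's variables;
    all field names are illustrative."""
    experiment_name: str
    variable: str   # e.g. the webpage text element under test
    control: str    # default value served outside the treatment
    variants: list  # alternative values under test

# Example instance for a webpage-text experiment of the kind
# described in the claim 5 mapping.
schema = ExperimentSchema(
    experiment_name="caption-text-test",
    variable="result_caption",
    control="Top results",
    variants=["Best matches", "Results for you"],
)

def text_options(s):
    # The control value plus each variant together form the
    # "multiple options for the text on the webpage".
    return [s.control] + list(s.variants)
```

A modeling manager of the kind claimed could render its first UI from such a structure (one input field per variable) and route live traffic among the control and variant values, which is how the mappings above combine Bowers' schema-driven workflows with Kohavi's configuration-driven experiments.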

Prosecution Timeline

Jun 29, 2023
Application Filed
Mar 06, 2026
Non-Final Rejection — §101, §103 (current)


Prosecution Projections

1-2
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
