Prosecution Insights
Last updated: April 19, 2026
Application No. 18/253,138

DECISION FLOWCHART-BASED ENVIRONMENTAL MODELING METHOD AND APPARATUS, AND ELECTRONIC DEVICE

Status: Non-Final OA (§101, §103)
Filed: May 16, 2023
Examiner: MINOR, AYANNA YVETTE
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Polixir Technologies Limited
OA Round: 3 (Non-Final)
Grant Probability: 18% (At Risk)
Predicted OA Rounds: 3-4
Estimated Time to Grant: 3y 6m
Grant Probability with Interview: 43%

Examiner Intelligence

Career Allow Rate: 18% (33 granted / 179 resolved; -33.6% vs TC avg)
Interview Lift: +24.7% among resolved cases with interview
Avg Prosecution: 3y 6m typical timeline; 47 applications currently pending
Career History: 226 total applications across all art units

Statute-Specific Performance

§101: 37.9% (-2.1% vs TC avg)
§103: 33.6% (-6.4% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)

Tech Center averages are estimates; figures based on career data from 179 resolved cases.
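The headline figures above can be recomputed from the underlying counts. The sketch below is illustrative only: the 33 granted / 179 resolved counts come from the report, but the with/without-interview subgroup counts are hypothetical placeholders (the dashboard reports only the resulting rates), so the computed lift will not exactly match the +24.7% shown.

```python
# Recompute the headline examiner statistics from raw counts.
granted, resolved = 33, 179                    # from the report above
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # 18.4%, shown as 18%

# Interview lift = allowance rate with an interview minus the rate
# without one. These subgroup counts are hypothetical stand-ins.
with_iv_granted, with_iv_resolved = 13, 30     # ~43% with interview
no_iv_granted, no_iv_resolved = 20, 149        # ~13% without

rate_with = with_iv_granted / with_iv_resolved
rate_without = no_iv_granted / no_iv_resolved
print(f"Interview lift: {rate_with - rate_without:+.1%}")
```

With these placeholder subgroup counts the lift comes out near +30% rather than the dashboard's +24.7%, because the true per-subgroup counts are not shown.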

Office Action

§101 §103
DETAILED ACTION

Acknowledgement
This non-final office action is in response to the request for continued examination (RCE) filed on 02/03/2026.

Status of Claims
Claims 3, 14, and 22 have been canceled. Claims 1, 5-6, 11-12, 16-17, and 24-25 have been amended. Claims 26-28 have been added. Claims 1-2, 4-7, 11-13, 15-18, 21, and 23-28 are now pending.

Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/03/2026 has been entered.

Response to Arguments
Applicant's arguments filed on 02/03/2026 regarding the 35 U.S.C. 101 and 103 rejections of the claims have been fully considered. The Applicant argues the following. (1) As per the 101 rejection, the Applicant argues, in summary, that (i) the claims do not merely recite a "mental process" because they encompass technical features beyond the alleged abstract idea; various steps are executed by the electronic device, and the human mind is not equipped to perform these claim limitations; and (ii) the claimed concept has a practical application in conjunction with a particular machine or manufacture. The method defined in the present application relates to the field of computer technology and provides an "unconventional" technical solution to the technical problems in the related art. The actual target business environment can be replaced with the target virtual environment model for reinforcement learning, and the target decision model is applied to the target business scenario for decision making.
Most importantly, during the whole process, normal use by the user is not interfered with, the target business scenario can be described more accurately, and the decision making of the target decision model can be more efficient and accurate, thereby greatly reducing the cost of trial and error in the actual target business environment, improving the reinforcement learning effect, and satisfying the personalized needs of the user.

The Examiner respectfully disagrees. The Examiner submits that claim 1 as amended is directed to the abstract groupings of Mental Processes and Certain Methods of Organizing Human Activity because the claims describe a process of training and optimizing a model to simulate a business decision/scenario via construction of flowcharts and graphs, which can be practically performed by a human mentally with pen and paper. Mental Processes include claims directed to collecting information, analyzing it, and displaying certain results of the collection and analysis, even if they are claimed as being performed on a computer. The claims also describe using the models to perform business decisions, recommend and display an order of items according to a user search request, and determine an order allocation with the shortest pickup time for pickup persons. These actions are considered advice and/or workflow instructions for a person to follow, thus managing personal behavior, and therefore fall under Certain Methods of Organizing Human Activity, which encompasses managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions. As per MPEP 2106.04(a), a claim recites a judicial exception when the judicial exception is “set forth” or “described” in the claim.
The Examiner maintains the position that the additional elements recited in the claims and listed in Steps 2A(2) and 2B do not integrate the abstract idea into a practical application, because the additional elements do not improve the functioning of a computer or improve upon another technology or technical field. The additional elements reflect computing components and instructions used to implement and perform the abstract idea on a computer. The improvements argued by the Applicant are in modelling, decision making, and costs, which are not technological improvements. The Applicant has not identified a specific technological improvement beyond the use of certain technology (e.g., a computer, reinforcement learning, etc.) to perform an abstract process. Per MPEP 2106.05(a), if it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. An indication that the claimed invention provides an improvement can include a discussion in the specification that identifies a technical problem and explains the details of an unconventional technical solution expressed in the claim, or identifies technical improvements realized by the claim over the prior art. The Applicant’s specification lacks evidence of a technological improvement or an unconventional technical solution. Therefore, the 35 U.S.C. 101 rejection is maintained.

(2) As per the 103 rejections, the Applicant argues, in summary, that Vu alone or in any proper combination with Jin fails to disclose, suggest, or render obvious all the limitations of amended claim 1. The Examiner finds the Applicant’s arguments persuasive.
Therefore, the previous 103 rejections have been withdrawn. However, upon further search and consideration, a new ground of 103 rejection is made. See details below.

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 4-7, 11-13, 15-18, 21, and 23-28 are rejected under 35 U.S.C. 101 because the claimed invention, “Decision Flowchart-Based Environmental Modeling Method and Apparatus and Electronic Device”, is directed to abstract ideas, specifically Mental Processes and Certain Methods of Organizing Human Activity, without significantly more. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, individually or in combination, provide mere instructions to implement the abstract idea on a computer.

Step 1: Claims 1-2, 4-7, 11-13, 15-18, 21, and 23-28 are directed to a statutory category, namely a process (claims 1-2, 4-7, and 26), a machine (claims 11, 13, 15-18, and 27), and a manufacture (claims 12, 21, 23-25, and 28).
Step 2A (1): Independent claims 1, 11, and 12 are directed to an abstract idea of Mental Processes, based on the following claim limitations: “acquiring a target business feature in a target business scenario to be modeled and feature information of the target business feature, wherein the target business scenario comprises an item search scenario and a pickup and order allocation scenario; constructing, based on the target business feature, a target decision flowchart corresponding to the target business scenario, wherein business nodes in the target decision flowchart comprise at least one environment state node, at least one decision agent node, and at least one static variable node for representing a fixed business feature in the target business scenario, wherein the at least one environment state node comprises a current environment state child node, an environment state transition child node, and a next environment state child node, each of the at least one environment agent node supports input of a data flow and output of a data flow, and each of the at least one static variable node only supports output of a data flow and does not support input of a data flow; constructing a target computation graph based on a business feature bound to each of the business nodes in the target decision flowchart and data flow information among the plurality of business nodes in the target decision flowchart; performing environmental modeling based on the target computation graph and the feature information of the target business features to determine a target virtual environment model corresponding to the target business scenario, wherein the target virtual environment model is configured to simulate an operation of a real environment in the target business scenario;…; and performing a business decision by utilizing the target decision model under the target business scenario, wherein in response to the target business scenario being the item search scenario, performing the business decision by utilizing the target decision model under the target business scenario comprises: determining, based on output of the target decision model, information about recommended items and a display order of the recommended items according to a search request input by a user; and in response to the target business scenario being the pickup and order allocation scenario, performing the business decision by utilizing the target decision model under the target business scenario comprises: determining, based on output of the target decision model, an order allocation manner with a shortest pickup time for pickup persons; wherein before constructing the target computation graph based on the business feature bound to each of the business nodes in the target decision flowchart and the data flow information among the plurality of business nodes in the target decision flowchart, the method further comprises: obtaining inputted business configuration information about each of the business nodes in the target decision flowchart for configuring a node data type, a data value range, and information about an inserted function, wherein the inserted function comprises a function constructed based on expert experiences”.

These claim limitations describe a process of training and optimizing a model to simulate a business decision/scenario via construction of flowcharts and graphs, which can be practically performed by a human mentally with pen and paper. The claims also describe using the models to perform business decisions, recommend and display an order of items according to a user search request, and determine an order allocation with the shortest pickup time for pickup persons. These actions are considered advice and/or workflow instructions for a person to follow, thus managing their personal behavior. Dependent claims 2, 4-7, 13, 15-18, 21, and 23-28 further describe the modelling process, components (e.g., nodes, data), features, and construction of the models, flowcharts, and graphs. Training a model that involves fitting a mathematical model to a particular dataset by adjusting coefficients or weights in the model to provide a specific output can practically be performed mentally by a human with pen and paper. The claims do not recite a specific type of model that would exclude performance by a human. These limitations, under the broadest reasonable interpretation, fall within the abstract groupings of Mental Processes, which include concepts performed in the human mind such as observations, evaluations, judgments, and opinions, and Certain Methods of Organizing Human Activity, which encompasses managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions. Mental Processes include claims directed to collecting information, analyzing it, and displaying certain results of the collection and analysis, even if they are claimed as being performed on a computer. Certain Methods of Organizing Human Activity can encompass the activity of a single person (e.g., a person following a set of instructions), activity that involves multiple people (e.g., a commercial interaction), and certain activity between a person and a computer (e.g., a method of anonymous loan shopping). Therefore, claims 1-2, 4-7, 11-13, 15-18, 21, and 23-28 are directed to an abstract idea and are not patent eligible.

Step 2A (2): This judicial exception is not integrated into a practical application.
In particular, claims 1, 5, 11, 12, 16, and 24 recite additional elements of “replacing the target business scenario with the target virtual environment model to perform reinforcement learning on a preset decision model in the target business scenario, and using the preset decision model after the reinforcement learning as a target decision model (claims 1, 11, and 12); a visualization interface (claims 5, 16, and 24); an electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, wherein the computer program is executed by the at least one processor to enable the at least one processor to perform (claim 11); and a non-transitory computer-readable storage medium, comprising a computer program which, when executed by a processor, enables the processor to perform (claim 12)”. These additional elements do not integrate the abstract idea into a practical application because the claims do not recite (a) an improvement to another technology or technical field, (b) an improvement to the functioning of the computer itself, (c) implementing the abstract idea with or by use of a particular machine, (d) effecting a particular transformation or reduction of an article, or (e) applying the judicial exception in some other meaningful way beyond generally linking the use of an abstract idea to a particular technological environment. These additional elements, evaluated individually and in combination, are viewed as computing and display devices that are used to perform the abstract process of training and optimizing a model to simulate a business decision/scenario via construction of flowcharts and graphs.
Limitations that recite mere instructions to implement an abstract idea on a computer, or that merely use a computer as a tool to perform an abstract idea, are not indicative of integration into a practical application (see MPEP 2106.05(f)). Therefore, claims 1-2, 4-7, 11-13, 15-18, 21, and 23-28 do not include an individual additional element, or combination of additional elements, that integrates the judicial exception into a practical application, and thus are not patent eligible.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Claims 1, 5, 11, 12, 16, and 24 recite additional elements of “replacing the target business scenario with the target virtual environment model to perform reinforcement learning on a preset decision model in the target business scenario, and using the preset decision model after the reinforcement learning as a target decision model (claims 1, 11, and 12); a visualization interface (claims 5, 16, and 24); an electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, wherein the computer program is executed by the at least one processor to enable the at least one processor to perform (claim 11); and a non-transitory computer-readable storage medium, comprising a computer program which, when executed by a processor, enables the processor to perform (claim 12)”. These additional elements, evaluated individually and in combination, are viewed as mere instructions to apply or implement the abstract idea on a computer. The use of reinforcement learning/machine learning to train models is considered instructions to apply or implement a model on a computer. Applying an abstract idea on a computer does not integrate a judicial exception into a practical application or provide an inventive concept (see MPEP 2106.05(f)).
Therefore, claims 1-2, 4-7, 11-13, 15-18, 21, and 23-28 do not include an individual additional element, or combination of additional elements, that is sufficient to amount to significantly more than the judicial exception, and thus are not patent eligible.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 4-7, 11-13, 15-18, 21, and 23-28 are rejected under 35 U.S.C. 103 as being unpatentable over Vu et al. (US 2022/0358388 A1) in view of Jin et al. (US 2022/0292586 A1), in further view of Tong et al. (US 2023/0131681 A1), and in further view of Achin et al. (US 2023/0083891 A1).

As per claims 1, 11, and 12 (Currently Amended), Vu teaches a decision flowchart-based environmental modeling method, comprising (Vu e.g. Methods and systems for generating an environment include training transformer models from tabular data and relationship information about the training data (Abstract and [0004]).
An environment that is generated from data and high-level domain knowledge may be used to simulate different decision-making policies [0023]. FIG. 3 is a block/flow diagram of a method for generating and using an environment, using a decision optimization transformer graph [0012].); Vu teaches an electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, wherein the computer program is executed by the at least one processor to enable the at least one processor to perform the following (Vu e.g. A system for generating an environment includes a hardware processor and a memory that stores a computer program product. When the computer program product is executed by the hardware processor, it causes the hardware processor to...[0005]. In Fig. 7, an environment generation system 700 is shown. The system 700 includes a hardware processor 702 and a memory 706. The system 700 may further include functional components which may each be implemented as software that is stored in the memory 706 and that is executed by the hardware processor 702 to perform its respective function [0074].): Vu teaches a non-transitory computer-readable storage medium, comprising a computer program which, when executed by a processor, enables the processor to perform the following (Vu e.g. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention ([0060]-[0061]).) Vu teaches acquiring a target business feature in a target business scenario to be modeled and feature information of the target business feature… (Vu e.g. FIG. 3 shows a method for generating and using an environment [0043]. Block 302 obtains training data that reflects scenarios that may be taken into account in the generated environment. In some cases, the training data may be generated by a user. In other cases, the training data may be drawn from real-world measurements [0043]. This training data may include a wide variety of different scenarios, each providing high-level information about the data, including relationships between the state of the scenario and actions that may be performed [0043]. Fig. 4 is an example of obtaining training data in block 302. Block 402 generates tables of training data. These tables include at least one variable that represents a state of a system and at least one variable that represents an action that is taken in a system. These variables may include, for example, observed and unobserved state variables, action variables, and cost or reward variables. The table shows states and actions taken at different times within the system [0047]. The data may be generated by hand, or may be recorded according to the operation of an agent in a real-world environment. For example, the inventory data of FIG. 1 may track a real example of inventory management that captures the dynamics of such an environment [0048].), Vu teaches constructing, based on the target business feature, a target decision flowchart corresponding to the target business scenario, wherein business nodes in the target decision flowchart comprise at least one environment state node, at least one decision agent node, at least one environment agent node,…, wherein the at least one environment state node comprises a current environment state child node, an environment state transition child node, and a next environment state child node, each of the at least one environment agent node supports input of a data flow and output of a data flow;… (Vu e.g.
Environments may instead be represented as a set of rules that dictate how various agent actions affect the state of the agent and of the environment itself [0025]. In this example, an agent 102 moves within a space that is occupied by various obstacles 106. The agent 102 has a goal of reaching an exit 104. With each action that the agent 102 performs, such as moving within the environment 100, a reward may be determined [0026]. The agent 102 may obtain varying types of information about the environment [0027]. High-level information may be provided by a user, for example specifying rules that govern interactions between the agent and the state of the environment [0031]. The high-level information that describes general properties of the environment may be pre-determined by a user. For example, the high-level information may include a definition that indicates particular variables as being physical or belief states, actions, responses, and rewards or costs [0033]. The action variables may be decided by an agent at the start of each time step. The reward or cost variables may represent values that are incurred at the end of each time step [0033]. FIG. 2 is a directed, acyclic graph 200 shown to represent an exemplary decision optimization transformer model [0039]. Each node 202 in the graph 200 may have a response (e.g., a target of the respective transformer pipeline), covariates (e.g., predictors of the respective transformer pipeline), a reference time start, and a reference time window [0041]. FIG. 3 shows a method for generating and using an environment [0043]. Block 304 constructs a decision-optimization transformer graph using the training data [0044]. This training data may include a wide variety of different scenarios, each providing high level information about the data, including relationships between the state of the scenario and actions that may be performed [0043]. Block 308 then uses the generated environment. 
In one example, the generated environment may be used as an input to a reinforcement learning system, where a model may be trained to guide an agent through the generated environment [0045]. The high-level information may also indicate how far back in time each variable is considered, and may capture state-transition dynamics [0034]. The decision-optimization transformer may include multiple machine learning transformers (described herein as transformer pipelines), which may be connected in various arrangements to output an automatically generated environment [0038]. The high-level information is used to generate the general structure of a model that represents the relationships between the variables of the tabular data [0052]. In FIG. 5, additional detail is provided for an example of constructing the decision-optimization transformer graph in block 304 [0052]. Block 504 builds a directed, acyclic graph from the transformer pipelines. The structure of the graph may be determined, at least in part, by interdependencies between transformer pipelines [0054].) Vu teaches constructing a target computation graph based on a business feature bound to each of the business nodes in the target decision flowchart and data flow information among a plurality of business nodes in the target decision flowchart; (Vu e.g. Block 306 uses the decision-optimization transformer graph to generate an environment, for example by traversing the graph to select a subset of transformer pipelines in the graph. Block 306 may determine one or more topological orderings within the directed acyclic graph. Environments may be constructed by traversing the graph according to topological orderings, node by node [0044]. Each node 202 represents a different transformer pipeline. Each of these transformer pipelines may be trained using a respective set of training data and respective high-level information [0039].
An exemplary transformer pipeline that may be learned from the above example is: i_{t+1} = F(i_t, d_t, o_t, o_{t-1}), where i_t is the inventory at time t, d_t is the demand at time t, o_t is an order amount at time t, and F is a function that represents the learned model to predict a next inventory i_{t+1} [0035]. A machine learning transformer may be understood as a function that takes first data as an input and outputs second data [0037]. In some cases, an environment may be generated by traversing to a leaf node, with the generated environment being the combination of each of the traversed nodes 202 [0040]. The graph is then used by an environment generator 716, which traverses the graph to build a set of environments [0076]. The environment determines a reward or result of the action, which the reinforcement learning system uses to adjust parameters of the model [0021]. An environment that is generated from data and high-level domain knowledge may be used to simulate different decision-making policies [0023].) Vu teaches performing environmental modeling based on the target computation graph and the feature information of the target business features to determine a target virtual environment model corresponding to the target business scenario (Vu e.g. An environment that is generated from data and high-level domain knowledge may be used to simulate different decision-making policies [0023]. Such policies may include specific sets of rules or mappings that decisionmakers may follow to determine how to act in any given state of the system [0023]. The generated environment may also have a modular structure, since the environment may be created by combining multiple machine learning pipelines and orchestrating their calculations using a directed acyclic graph [0024]. Environments may instead be represented as a set of rules that dictate how various agent actions affect the state of the agent and of the environment itself [0025].
The generated environment may be used to test the performance of different policies [0045]. These environments are used by a downstream task 718, for example to be used in training a reinforcement learning model or in testing the efficacy of a decision policy in various circumstances [0076].), wherein the target virtual environment model is configured to simulate an operation of a real environment in the target business scenario; (Vu e.g. An environment that is generated from data and high-level domain knowledge may be used to simulate different decision-making policies [0023]. FIG. 3 shows a method for generating and using an environment [0043]. Block 302 obtains training data that reflects scenarios that may be taken into account in the generated environment. In some cases, the training data may be generated by a user. In other cases, the training data may be drawn from real-world measurements [0043]. The data may be generated by hand, or may be recorded according to the operation of an agent in a real-world environment. For example, the inventory data of FIG. 1 may track a real example of inventory management that captures the dynamics of such an environment [0048].) Vu teaches replacing the target business scenario with the target virtual environment model to perform reinforcement learning on a preset decision model in the target business scenario, and using the preset decision model after the reinforcement learning as a target decision model; and performing a business decision by utilizing the target decision model under the target business scenario…(Vu e.g. The present invention relates to the automated generation of environment information that may be used for various purposes, such as reinforcement learning [0001].
Certain machine learning techniques, such as reinforcement learning, may use a predetermined environment, where an agent's actions within the predetermined environment, and the results of those actions, are used to train a model for future behavior [0002]. A reinforcement learning system may use training data that includes a set of objects or values that have predetermined relationships between them [0002]. During training, an agent may interact with the environment. Feedback from the environment, for example in the form of a reward value that is generated for each agent action, may be used to alter a policy that the agent uses to decide on its next action. Thus, by allowing the agent to explore the environment, a decision-making policy may be automatically generated [0002]. For example, reinforcement learning systems may train a machine learning model or artificial intelligence model, such as an agent, using actions that are performed within the context of the environment [0021]. The environment determines a reward or result of the action, which the reinforcement learning system uses to adjust parameters of the model [0021]. When the trained model is subsequently used, it will navigate through a new environment in a manner that is guided by the actions and rewards that took place during training [0021]. For example, a reinforcement learning environment may be created automatically from a high-level application specification and tabular data for an inventory control system. In such an application, agents interact with the generated environment and learn to decide the right amounts to order to refill an inventory level, given a forecast of demand in the near future [0022]. An environment that is generated from data and high-level domain knowledge may be used to simulate different decision-making policies [0023]. FIG. 1 is an example of an environment 100 that may be used in reinforcement learning [0025].) 
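The reinforcement-learning loop the cited Vu passages describe (an agent acts in an environment, the environment returns a reward, and the agent's policy parameters are adjusted) can be sketched generically. The toy inventory environment, reward shape, and tabular Q-learning update below are all illustrative assumptions; this is not code from the application, the claims, or the Vu reference.

```python
import random

random.seed(0)  # deterministic for illustration

class InventoryEnv:
    """Toy environment: state = inventory level, action = order amount."""
    CAPACITY = 10

    def reset(self):
        self.inventory = 5
        return self.inventory

    def step(self, order):
        demand = random.randint(0, 4)
        sold = min(self.inventory + order, demand)
        self.inventory = min(self.inventory + order - sold, self.CAPACITY)
        # Reward: revenue from sales minus a holding cost on leftover stock.
        reward = 2 * sold - 0.5 * self.inventory
        return self.inventory, reward

# Tabular Q-learning: Q[state][action] estimates long-run reward.
ACTIONS = range(5)
Q = {s: {a: 0.0 for a in ACTIONS} for s in range(InventoryEnv.CAPACITY + 1)}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

env = InventoryEnv()
state = env.reset()
for _ in range(10_000):
    if random.random() < epsilon:                 # explore
        action = random.choice(list(ACTIONS))
    else:                                         # exploit current policy
        action = max(Q[state], key=Q[state].get)
    next_state, reward = env.step(action)
    # Standard Q-learning update toward reward + discounted future value.
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    state = next_state

policy = {s: max(Q[s], key=Q[s].get) for s in Q}
print(policy)  # learned order amount per inventory level
```

This mirrors the pattern in the cited paragraphs: the environment (here hand-written, in Vu generated from data) supplies rewards that drive policy updates, and the trained policy is then used for decision making.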
Vu does not explicitly teach, however, Jin teaches the following: Jin teaches wherein the target business scenario comprises an item search scenario and a pickup and order allocation scenario; and …wherein in response to the target business scenario being the item search scenario, performing the business decision by utilizing the target decision model under the target business scenario comprises: determining, based on output of the target decision model, information about recommended items and a display order of the recommended items according to a search request input by a user; (Jin e.g. The present disclosure generally relates to computerized systems and methods for computer-determined item correlation and prioritization [0001]. Order fulfillment centers frequently rely on complex computerized algorithms to identify optimal routing of pickers in storage areas or throughout a geographic region. These algorithms attempt to reduce the amount of time required to collect goods in preparation of packing and shipping [0003]. Referring to FIG. 1A, a schematic block diagram 100 illustrating an exemplary embodiment of a system comprising computerized systems for communications enabling shipping, transportation, and logistics operations is shown [0026]. In embodiments where system 100 enables the presentation of systems to enable users to place an order for an item, external front end system 103 may be implemented as a web server that receives search requests, presents item pages, and solicits payment information [0028]. A user device (e.g., using mobile device 102A or computer 102B) may navigate to external front end system 103 and request a search by entering information into a search box. External front end system 103 may request information from one or more systems in system 100. For example, external front end system 103 may request information from fulfillment optimization (FO) System 113 that satisfies the search request [0030]. 
External front end system 103 may prepare an SRP (e.g., FIG. 1B) based on the information. The SRP may include information that satisfies the search request. For example, this may include pictures of products that satisfy the search request. The SRP may also include respective prices for each product, or information relating to enhanced delivery options for each product, PDD, weight, size, offers discounts, or the like [0031]. FIG. 1B depicts a sample Search Result Page (SRP) that includes one or more search results satisfying a search request along with interactive user interface elements, consistent with the disclosed embodiments [0011]. A user device may then select a product from the SRP, e.g., by clicking or tapping a user interface, or using another input device, to select a product represented on the SRP [0032]. The user device may formulate a request for information on the selected product and send it to external front end system 103. In response, external front end system 103 may request information related to the selected product [0032]. The information could also include recommendations for similar products (based on, for example, big data and/or machine learning analysis of customers who bought this product and at least one other product), answers to frequently asked questions, reviews from customers, manufacturer information, pictures, or the like [0032]. External front end system 103 may prepare an SDP (Single Detail Page) (e.g., FIG. 1C) based on the received product information. The SDP may further include a list of sellers that offer the product. The list may be ordered based on the price each seller offers such that the seller that offers to sell the product at the lowest price may be listed at the top. The list may also be ordered based on the seller ranking such that the highest ranked seller may be listed at the top. 
The seller ranking may be formulated based on multiple factors, including, for example, the seller's past track record of meeting a promised PDD [0033].) and in response to the target business scenario being the pickup and order allocation scenario, performing the business decision by utilizing the target decision model under the target business scenario comprises: determining, based on output of the target decision model, an order allocation manner with a shortest pickup time for pickup persons; (Jin e.g. FIG. 3 illustrates an exemplary embodiment of a method for item correlation in a data structure. FIG. 3 shows steps of a process 300. FO system 113 may perform process 300 to produce data structures for use in managing picking operations [0070]. In step 302, FO system 113 begins with receiving an indication of an order comprising at least one item [0070]. At step 308, FO system 113 begins to iteratively, for items in the order, correlate items and pickers in a data structure [0074]. In step 308, FO system 113 identifies a picker closest to the item, the picker having a current job priority [0075]. FO system 113 may employ algorithms to determine the shortest route and expected travel time between two points while traveling around any obstacles such as shelves, pillars, walls, or doors as reflected in a stored map of a warehouse [0075]. In some embodiments, FO system 113 may determine the shortest path along highways and surface streets, as well as distances for parking, walking, or other modes of transportation [0075]. At step 310, FO system 113 correlates the closest picker and the item in a data structure. User devices of pickers may display items according to respective item ordinalities, indicating the order in which a picker should locate items [0077]. Ordinality may also be based on a picking sequence optimized to reduce overall picking time. 
For example, in some embodiments, FO system 113 may determine a shortest path connecting each of the items correlated to a picker, and assign ordinality to the items based on the determination. FO system 113 may employ other algorithms as well, such as making a random selection of items, and iteratively eliminating path crossover points of a path until no more crossovers remain [0078].) The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify Vu's method and system for generating an environment that reflects the real-world scenarios to include a business scenario that comprises an item search and a pickup and order allocation scenario as taught by Jin in order to efficiently analyze various combinations of packaging, picker routings, and picker assignments to arrive at optimized picker tasking without delaying assignment times, thereby increasing picking-operation efficiency, decreasing picking time, reducing overall business costs, and improving customer satisfaction (Jin e.g. [0005]). Neither Vu nor Jin explicitly teaches the following; however, Tong teaches business nodes in the target decision flowchart comprise at least one static variable node for representing a fixed business feature in the target business scenario,…and each of the at least one static variable node only supports output of a data flow and does not support input of a data flow (Tong e.g. A computer-implemented method includes analyzing, by a processing unit, a relational database to discover a plurality of static relationships between a plurality of data fields captured in two or more tables (Abstract). The processing unit can build one or more relation graphs based on the static relationships and the entity relationships to link a plurality of nodes with one or more edges that define at least one relationship between the nodes (Abstract). In the example of FIG. 
2, relation graph 204 includes multiple input nodes 206a, 206b, 206c with input relationships to a key node 208. The key node 208 has input relationships to output nodes 210a, 210b. The relation graph 204 is an example of a multiple input and multiple output class graph defining how various data types relate to each other [0034]. Further, optimization suggestions can be generated when extracting an entity model that summarizes interactions between nodes and sub-graphs of the relation graphs. Relationships can be summarized as a semantic model that defines how stored data values map to real-world parameters of an entity or business using the data [0025]. FIG. 1 depicts a system 100 according to one or more embodiments of the present invention. The system 100 includes a relation graph builder 102, a relation result selector 110, and a data structure converter 120. The relation graph builder 102 can include a static relationship collection module 104, an entity relationship analysis module 106, and an association rule mining module 108 [0026]. FIG. 3 depicts a block diagram 300 of a portion of the relation graph builder 102 of FIG. 1 according to one or more embodiments of the present invention. The static relationship collection module 104 of the relation graph builder 102 can collect and analyze the static relationships from source code 134, any available SQL sources, and database design information from the relational database 130 [0037]. The static relationships can include nodes from a table definition, relationships from foreign key information, join SQL relationships from view and trigger information, and/or source code 134. The static relationship collection module 104 can include a table builder 302, a foreign key link builder 304, and SQL analysis 306 [0037]. The Examiner submits that nodes 206a-c in the relation graph in Fig. 2 can represent static nodes that only support output of a data flow.) 
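The node typing the Examiner reads onto Tong's relation graph can be sketched as a simple directed-graph constraint. The class and method names below are hypothetical (they come from neither Tong nor the claims); the sketch only illustrates a "static variable" node that may act as the source of a data flow but never as its target.

```python
# Hypothetical sketch: static variable nodes (fixed business features) are
# output-only; any attempt to route a data flow INTO one is rejected.
class FlowGraph:
    def __init__(self):
        self.static_nodes = set()   # output-only nodes
        self.edges = []             # (source, destination) data-flow edges

    def add_static(self, name):
        self.static_nodes.add(name)

    def connect(self, src, dst):
        # Enforce the claimed constraint: no data flow may enter a static node.
        if dst in self.static_nodes:
            raise ValueError(f"static node {dst!r} does not support input of a data flow")
        self.edges.append((src, dst))

g = FlowGraph()
g.add_static("input_node_206a")                  # loosely analogous to Tong's input nodes
g.connect("input_node_206a", "key_node_208")     # output of a data flow: allowed
try:
    g.connect("key_node_208", "input_node_206a") # input into a static node: rejected
    rejected = None
except ValueError as err:
    rejected = str(err)
```

Note that Tong's Fig. 2 itself only shows input relationships; the output-only restriction is the Examiner's reading, which the sketch makes explicit.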
The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify Vu and Jin's method and system for generating an environment that reflects the real-world scenarios to include static variable nodes for representing a fixed business feature in a relationship graph as taught by Tong in order to test cases/triggers and discover intrinsic or conditional entity relationships amongst data (Tong e.g. [0024]). While Vu teaches constructing the target computation graph based on the business feature bound to each of the business nodes in the target decision flowchart and the data flow information among the plurality of business nodes in the target decision flowchart, as shown above, none of Vu, Jin, and Tong explicitly teaches the following; however, Achin teaches wherein the method further comprises, before constructing…obtaining inputted business configuration information about each of the business nodes in the target decision flowchart for configuring a node data type, a data value range, and information about an inserted function, wherein the inserted function comprises a function constructed based on expert experiences (Achin e.g. The system may provide, for rendering by a user device, a graphical user interface comprising nodes corresponding to workflow code. The system may display a GUI that allows the end user to generate a workflow. The workflow may include different nodes where each node represents one analytical protocol, such as machine learning model or other analytical protocols (e.g., statistical analysis or any if-then statement or logic). Each node defines what data should be analyzed and how the data should be analyzed, such that the analyzed data (eventually) leads to a decision (Fig. 1 and [0033]). 
The GUI 216 indicates that the node 204 is configured to score the data set with a deployed lending club model (machine learning model) and the new scores are then going to be populated in a new column called "default predictions." (Fig. 2D and [0040]). Data examples can include financial or demographic [0030], historical customer data [0035], datasets that include variables of various data types (e.g., numerical, ordinal, categorical, interpreted (e.g., date, time, text), etc.), datasets that include variables with various statistical properties (e.g., statistical properties relating to the variable's missing values, cardinality, distribution, etc.), etc. [0078], and data types of the data set's variables (e.g., numerical, ordinal, categorical, or interpreted (e.g., date, time, text, etc.); the ranges of the dataset's numerical variables; the number of classes for the dataset's ordinal and categorical variables; etc. [0085]. Figs. 2F, 2G, 2I, and 3D are example interfaces that allow configuration of nodes and to specify data types, values, ranges, etc. The system may monitor a placement of each node within the workflow (e.g., whether a node is placed before another node), data assigned to each node (e.g., data indicated by the end user to be used/analyzed within the workflow), and models indicated by each node (e.g., deployment nodes that indicate how data can be analyzed) [0046]. The end user may select a deployment node that indicates what data should be analyzed (and how it should be analyzed) by one or more nodes/models of the workflow [0037].) The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify Vu in view of Jin, and Tong’s method and system for generating an environment to include a visualization interface that allows a user to modify and/or design (e.g. add and configure nodes, identify data types, etc.) 
a workflow graph as taught by Achin in order to allow users to analyze data and generate a decision via interaction with input elements of a graphical user interface (Achin e.g. Abstract). As per claims 2 (Original), 13 (Previously Presented), and 21 (Previously Presented), Vu in view of Jin, Tong, and Achin teach the method according to claim 1, the electronic device according to claim 11, and the non-transitory computer-readable storage medium according to claim 12, Vu teaches wherein the current environment state child node supports output of a data flow, the environment state transition child node supports input of a data flow and outputs a data flow to the next environment state child node, and each of the at least one decision agent node supports input of a data flow and output of a data flow. (Vu e.g. A decision optimization transformer model may be generated that includes multiple transformer pipelines, each of which can be used to generate a different component of an environment [0020]. A machine learning transformer may be understood as a function that takes first data as an input and outputs second data [0037]. The generated environment may also have a modular structure, since the environment may be created by combining multiple machine learning pipelines and orchestrating their calculations using a directed acyclic graph [0024]. Environments may instead be represented as a set of rules that dictate how various agent actions affect the state of the agent and of the environment itself [0025]. FIG. 1 is an example of an environment 100 that may be used in reinforcement learning [0025]. The environment 100 may be generated as a combination of multiple transformer pipelines, where each pipeline model may represent a different kind of environment information and potential interactions [0028]. For example, Table 1 includes tabular data for an inventory control environment [0029]. 
This information includes different states of the inventory control environment at different times, with a physical state of the environment being represented as the inventory, a belief state of the environment being represented as a demand forecast, and the agent's actions being represented as the "order amount." [0030]. High-level information may be provided by a user, for example specifying rules that govern interactions between the agent and the state of the environment [0031]. The functional relationship between these quantities may be learned from the input tabular data, using any of a variety of machine learning models. Exemplary forms of model may include linear regression models, logistic regression models, decision tree learning, support vector machines, random forest models, or any other appropriate machine learning model form [0032].) As per claims 4 (Original), 15 (Previously Presented), and 23 (Previously Presented), Vu in view of Jin, Tong, and Achin teach the method according to claim 1, the electronic device according to claim 11, and the non-transitory computer-readable storage medium according to claim 12, Vu teaches wherein a plurality of target business features are provided, and constructing, based on the target business feature, the target decision flowchart corresponding to the target business scenario comprises: performing a feature analysis on the plurality of target business features to determine a dependency relation among the plurality of target business features; and creating, based on the dependency relation, the plurality of business nodes and determining the data flow information among the plurality of business nodes to construct the target decision flowchart corresponding to the target business scenario. (Vu e.g. Table 1 includes tabular data for an inventory control environment [0029]. 
The functional relationship between these quantities may be learned from the input tabular data, using any of a variety of machine learning models [0032]. Block 504 builds a directed, acyclic graph from the transformer pipelines. The structure of the graph may be determined, at least in part, by interdependencies between transformer pipelines [0054]. A training data transformer 707 may operate on the tabular data 710 to flatten time sensitive columns, as described above, to capture time dependencies in the tabular data 710 in individual columns [0075].) As per claims 5 (Currently Amended), 16 (Currently Amended), and 24 (Currently Amended), Vu in view of Jin, Tong, and Achin teach the method according to claim 1, the electronic device according to claim 11, and the non-transitory computer-readable storage medium according to claim 12, wherein constructing, based on the target business feature, the target decision flowchart corresponding to the target business scenario comprises: None of Vu, Jin, and Tong explicitly teaches this limitation; however, Achin teaches acquiring, based on a node addition operation triggered by a user on a visualization interface, a plurality of empty nodes added by the user; (Achin e.g. Methods and systems to generate and revise a workflow that utilizes machine learning model nodes and other analytical nodes to analyze data and generate a decision via allowing a user to interact with input elements of a graphical user interface (Abstract). The server can include receiving, via the graphical user interface, an indication to add a node having at least one analytical protocol comprising at least one logical rule [0007]. Fig. 
1.) None of Vu, Jin, and Tong explicitly teaches this limitation; however, Achin teaches determining, based on a node information configuration operation triggered by the user for each empty node of the plurality of empty nodes, business configuration information corresponding to the each empty node, wherein the business configuration information further comprises node name information and a business feature bound to a node; (Achin e.g. The system may display a GUI that allows the end user to generate a workflow [0033]. The workflow may include different nodes where each node represents one analytical protocol, such as machine learning model or other analytical protocols (e.g., statistical analysis or any if-then statement or logic). Each node defines what data should be analyzed and how the data should be analyzed, such that the analyzed data (eventually) leads to a decision [0033]. The system allows end users to visually create and/or edit nodes and their corresponding logic using the visual input elements discussed herein [0042]. For instance, as depicted in FIG. 2E, when the end user hovers over the rule set node 218, the graphical elements 220 appear that allow the end user to delete or edit the rule set node 218 [0042]. Fig. 2F shows node name (e.g. loan-grade) and output feature value (e.g. A) associated with node 218.) None of Vu, Jin, and Tong explicitly teaches this limitation; however, Achin teaches configuring a corresponding empty node based on the business configuration information to obtain a corresponding business node; and (Achin e.g. The workflow may include different nodes where each node represents one analytical protocol, such as machine learning model or other analytical protocols (e.g., statistical analysis or any if-then statement or logic). Each node defines what data should be analyzed and how the data should be analyzed, such that the analyzed data (eventually) leads to a decision [0033]. 
The end user may then input or identify data to be analyzed by different nodes within the workflow [0027]. Using the input elements, the end user may visually connect different nodes within the workflow. For instance, the end user may use various input elements discussed herein to generate (e.g., place) nodes within a workflow and connect the node with other nodes [0037]. The system allows end users to visually create and/or edit nodes and their corresponding logic using the visual input elements discussed herein [0042].) None of Vu, Jin, and Tong explicitly teaches this limitation; however, Achin teaches acquiring, based on a connection operation triggered by the user for a plurality of business nodes, data flow information among the plurality of business nodes to construct the target decision flowchart corresponding to the target business scenario. (Achin e.g. The system may then execute code corresponding to the workflow and analyze the data accordingly. The system may then output the results in accordance with the end user's preferences and instructions [0027]. The workflow 200B also indicates an order that dictates how data can be analyzed. For instance, as depicted, the data retrieved from the input node 202 is analyzed by the nodes 204 and 206. At node 208, the analyzed data is then split into different grades (nodes 210a-d) and eventually into the decision node 212 [0036]. The end user may select a deployment node that indicates what data should be analyzed (and how it should be analyzed) by one or more nodes/models of the workflow [0037]. Using various input elements discussed herein, the end user may connect different nodes, such that the data flows through different nodes and eventually flows into a decision node [0038]. FIG. 1, at step 130, the system may determine to add code corresponding to the revised nodes to the code of the workflow [0046]. 
At step 140, the system may revise code associated with the workflow by adding the analytical protocols to the workflow code before execution of the decision node [0047]. The system may generate machine readable code that corresponds to the workflow that has been visually created by the end user [0047].) The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify Vu in view of Jin and Tong's method and system for generating an environment to include a visualization interface that allows a user to modify and/or design (e.g. add and configure nodes) a workflow graph as taught by Achin in order to allow users to analyze data and generate a decision via interaction with input elements of a graphical user interface (Achin e.g. Abstract). As per claims 6 (Currently Amended), 17 (Currently Amended), and 25 (Currently Amended), Vu in view of Jin, Tong, and Achin teach the method according to claim 5 and the electronic device according to claim 16. None of Vu, Jin, and Tong explicitly teaches this limitation; however, Achin teaches wherein the node data type comprises a continuous type, a discrete type, and a default type, wherein the discrete type comprises a discrete ordered type and a discrete unordered type. (Achin e.g. Each node defines what data should be analyzed and how the data should be analyzed, such that the analyzed data (eventually) leads to a decision [0033]. The GUI 216 indicates that the node 204 is configured to score the data set with a deployed lending club model (machine learning model) and the new scores are then going to be populated in a new column called "default predictions." (Fig. 2D and [0040]). 
Data examples can include financial or demographic [0030], historical customer data [0035], datasets that include variables of various data types (e.g., numerical, ordinal, categorical, interpreted (e.g., date, time, text), etc.), datasets that include variables with various statistical properties (e.g., statistical properties relating to the variable's missing values, cardinality, distribution, etc.), etc. [0078], and data types of the data set's variables (e.g., numerical, ordinal, categorical, or interpreted (e.g., date, time, text, etc.); the ranges of the dataset's numerical variables; the number of classes for the dataset's ordinal and categorical variables; etc. [0085]. Figs. 2F, 2G, 2I, and 3D are example interfaces that allow configuration of nodes and to specify data types, values, ranges, etc.) The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify Vu in view of Jin and Tong’s method and system for generating an environment to include a visualization interface that allows a user to modify and/or design (e.g. add and configure nodes, identify data types, etc.) a workflow graph as taught by Achin in order to allow users to analyze data and generate a decision via interaction with input elements of a graphical user interface (Achin e.g. Abstract). As per claims 7 (Original) and 18 (Previously Presented), Vu in view of Jin, Tong, and Achin teach the method according to claim 1 and the electronic device according to claim 11, Vu teaches wherein constructing the target computation graph based on the business feature bound to each of the business nodes in the target decision flowchart and the data flow information among the plurality of business nodes in the target decision flowchart comprises: Vu teaches performing a format conversion on the target decision flowchart to determine target decision data in a structured data format; and (Vu e.g. 
The high-level information may also indicate how far back in time each variable is considered, and may capture state-transition dynamics [0034]. The high-level information may be represented in any appropriate manner, such as using extensible markup language (XML) or another appropriate data interchange format [0034]. Block 404 generates high-level environment information that describes the information stored in the tabular data. The high-level environment information may be generated by a user, for example in a definition file that uses any appropriate markup or notation format [0049]. Block 406 may optionally transform the training tables to flatten time dependencies. The transformation of block 406 may learn a number of time steps that are useful for understanding the context of a current value, and may add that number of previous values as additional columns of an input. In this manner, the time-sensitive information may be captured in a single input, for use in a variety of different transformer types [0051].) Vu teaches determining, based on the business feature bound to each of the business nodes and the data flow information among the plurality of business nodes in the target decision data, a plurality of computation nodes and a computation relation among the plurality of computation nodes to construct the target computation graph. (Vu e.g. Table 1 includes tabular data for an inventory control environment [0029]. The functional relationship between these quantities may be learned from the input tabular data, using any of a variety of machine learning models. Exemplary forms of model may include linear regression models, logistic regression models, decision tree learning, support vector machines, random forest models, or any other appropriate machine learning model form [0032]. The high-level information that describes general properties of the environment may be pre-determined by a user. 
For example, the high-level information may include a definition that indicates particular variables as being physical or belief states, actions, responses, and rewards or costs [0033]. The high-level information may also indicate how far back in time each variable is considered, and may capture state-transition dynamics [0034]. The high-level information is used to generate the general structure of a model that represents the relationships between the variables of the tabular data [0052].) As per claims 26 (New), 27 (New), and 28 (New), Vu in view of Jin, Tong, and Achin teach the decision flowchart-based environmental modeling method according to claim 1, wherein performing the environmental modeling based on the target computation graph and the feature information of the target business feature to determine the target virtual environment model corresponding to the target business scenario comprises: Vu teaches creating an initial virtual environment model based on the target computation graph; (Vu e.g. FIG. 3 is a block/flow diagram of a method for generating and using an environment, using a decision optimization transformer graph [0012]. An environment that is generated from data and high-level domain knowledge may be used to simulate different decision-making policies [0023]. The environment 100 may be generated as a combination of multiple transformer pipelines, where each pipeline model may represent a different kind of environment information and potential interactions [0028].) Vu teaches determining, based on the feature information of the target business feature, interaction sample data and an actual trajectory corresponding to the interaction sample data; (Vu e.g. The environment 100 may be generated as a combination of multiple transformer pipelines, where each pipeline model may represent a different kind of environment information and potential interactions [0028]. 
High-level information may be provided by a user, for example specifying rules that govern interactions between the agent and the state of the environment [0031]. Block 302 obtains training data that reflects scenarios that may be taken into account in the generated environment. In some cases, the training data may be generated by a user. In other cases, the training data may be drawn from real-world measurements [0043]. The data may be generated by hand, or may be recorded according to the operation of an agent in a real-world environment. For example, the inventory data of FIG. 1 may track a real example of inventory management that captures the dynamics of such an environment [0048].) Vu teaches inputting the interaction sample data into the initial virtual environment model and obtaining a simulation trajectory based on output of the initial virtual environment model; and (Vu e.g. An environment that is generated from data and high-level domain knowledge may be used to simulate different decision-making policies [0023]. Block 308 then uses the generated environment. In one example, the generated environment may be used as an input to a reinforcement learning system, where a model may be trained to guide an agent through the generated environment [0045]. In another example, the generated environment may be used to test the performance of different policies [0045]. These environments are used by a downstream task 718, for example to be used in training a reinforcement learning model or in testing the efficacy of a decision policy in various circumstances [0076].) 
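The mapped limitation above (rolling interaction sample data through the initial virtual environment model to obtain a simulation trajectory, which is then compared against the actual trajectory) can be sketched as follows. The function names, the toy environment model, and the data are all hypothetical illustrations; neither the claims nor Vu defines this API.

```python
# Illustrative sketch only: interaction sample data (a sequence of actions) is
# fed through a learned virtual environment model to produce a simulation
# trajectory, which is scored against the recorded real-environment trajectory.
def rollout(model, start_state, actions):
    """model(state, action) -> next_state; returns the simulated trajectory."""
    trajectory, state = [start_state], start_state
    for action in actions:
        state = model(state, action)
        trajectory.append(state)
    return trajectory

def trajectory_distance(simulated, actual):
    """Mean squared gap between trajectories; a smaller value means the
    virtual environment better approximates the real environment."""
    return sum((s, a) == () or (s - a) ** 2 for s, a in zip(simulated, actual)) / len(actual)

# Toy environment model: inventory rises by the order amount, drops by one unit of demand.
model = lambda state, order: max(0, state + order - 1)
actions = [1, 0, 2, 1]            # interaction sample data (order amounts)
actual = [3, 3, 2, 3, 3]          # recorded real-environment trajectory
simulated = rollout(model, 3, actions)
gap = trajectory_distance(simulated, actual)
```

In the training procedure the claims recite, a gap of this kind would drive parameter-weight updates until a convergence condition on the trajectory similarity is reached.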
Vu teaches determining a trajectory similarity based on the simulation trajectory and the actual trajectory, training the initial virtual environment model by adjusting parameter weights in the initial virtual environment model according to the trajectory similarity, until the trajectory similarity reaches a preset convergence condition, to obtain the target virtual environment model corresponding to the target business scenario, wherein the trajectory similarity is used for characterizing a difference between a virtual environment and the real environment, and the trajectory similarity is positively associated with a degree of the virtual environment approximating the real environment.

(Vu e.g. Certain machine learning techniques, such as reinforcement learning, may use a predetermined environment, where an agent's actions within the predetermined environment, and the results of those actions, are used to train a model for future behavior [0002]. During training, an agent may interact with the environment. Feedback from the environment, for example in the form of a reward value that is generated for each agent action, may be used to alter a policy that the agent uses to decide on its next action. Thus, by allowing the agent to explore the environment, a decision-making policy may be automatically generated [0002]. For example, reinforcement learning systems may train a machine learning model or artificial intelligence model, such as an agent, using actions that are performed within the context of the environment. The environment determines a reward or result of the action, which the reinforcement learning system uses to adjust parameters of the model. When the trained model is subsequently used, it will navigate through a new environment in a manner that is guided by the actions and rewards that took place during training [0021]. The overall performance of the policies may then be compared to rank and select preferred policies. Similarly, the environments may help with comparing, ranking, and selecting a preferred set of actions that a decision-maker may take, corresponding to any specific state that the decision-maker finds the system in [0023].

The transformers of the graph may be implemented as, for example, artificial neural networks (ANNs). ANNs are further trained using a set of training data, with learning that involves adjustments to weights that exist between the neurons [0077]. To train an ANN, training data can be divided into a training set and a testing set. The training data includes pairs of an input and a known output. During training, the inputs of the training set are fed into the ANN using feed-forward propagation. After each input, the output of the ANN is compared to the respective known output. Discrepancies between the output of the ANN and the known output that is associated with that particular input are used to generate an error value, which may be backpropagated through the ANN, after which the weight values of the ANN may be updated. This process continues until the pairs in the training set are exhausted [0081]. After the training has been completed, the ANN may be tested against the testing set, to ensure that the training has not resulted in overfitting. If the ANN can generalize to new inputs, beyond those which it was already trained on, then it is ready for use. If the ANN does not accurately reproduce the known outputs of the testing set, then additional training data may be needed, or hyperparameters of the ANN may need to be adjusted [0082].)

Conclusion

The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, includes FOR: Liao, H. (CN-112394922-B), "Decision Configuration Method, Business Decision Method and Decision Engine System," and NPL: B. Simsek, S. Albayrak and A. Korth, "Reinforcement Learning for Procurement Agents of Factory of the Future," Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No.04TH8753), Portland, OR, USA, 2004, pp. 1331-1337, Vol. 2, doi: 10.1109/CEC.2004.1331051.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ayanna Minor, whose telephone number is (571) 272-3605. The examiner can normally be reached M-F, 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jerry O'Connor, can be reached at 571-272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.M./ Examiner, Art Unit 3624
/Jerry O'Connor/ Supervisory Patent Examiner, Group Art Unit 3624
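The training loop recited in the mapped limitation (simulate a trajectory, score it against the actual trajectory, adjust model weights until a preset convergence condition is met) can be sketched in miniature. Everything here is hypothetical: the one-parameter model, the use of negative mean squared error as the "trajectory similarity," and the gradient-descent update are illustrative stand-ins, not the claimed method or Vu's implementation.

```python
def simulate(weight, actions):
    """Hypothetical one-parameter virtual environment: state = weight * action."""
    return [weight * a for a in actions]

def trajectory_similarity(sim, actual):
    """Higher is better: negative mean squared error between the trajectories."""
    mse = sum((s - a) ** 2 for s, a in zip(sim, actual)) / len(actual)
    return -mse

def train(actions, actual, lr=0.05, convergence=-1e-4, max_iters=10_000):
    """Adjust the weight until similarity reaches a preset convergence condition."""
    w = 0.0
    for _ in range(max_iters):
        sim = simulate(w, actions)
        if trajectory_similarity(sim, actual) >= convergence:
            break  # virtual environment now approximates the real one closely enough
        # Gradient of the mean squared error with respect to w.
        grad = sum(2 * (w * a - y) * a for a, y in zip(actions, actual)) / len(actions)
        w -= lr * grad
    return w

# Interaction samples: the "real" environment's dynamics are state = 1.5 * action.
actions = [1.0, 2.0, 3.0]
actual = [1.5, 3.0, 4.5]
w = train(actions, actual)
print(round(w, 2))  # prints 1.5
```

Because the similarity is positively associated with how closely the virtual environment approximates the real one, maximizing it (here, by minimizing MSE) is what drives the weight updates; the preset threshold plays the role of the claimed convergence condition.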

Prosecution Timeline

May 16, 2023
Application Filed
May 22, 2025
Non-Final Rejection — §101, §103
Aug 29, 2025
Response Filed
Nov 01, 2025
Final Rejection — §101, §103
Feb 03, 2026
Request for Continued Examination
Feb 20, 2026
Response after Non-Final Action
Mar 02, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12556890
ACTIVE TRANSPORT BASED NOTIFICATIONS
2y 5m to grant · Granted Feb 17, 2026
Patent 12518234
CONVERSATIONAL BUSINESS TOOL
2y 5m to grant · Granted Jan 06, 2026
Patent 12455761
TECHNIQUES FOR WORKFLOW ANALYSIS AND DESIGN TASK OPTIMIZATION
2y 5m to grant · Granted Oct 28, 2025
Patent 12450542
CONVERSATIONAL BUSINESS TOOL
2y 5m to grant · Granted Oct 21, 2025
Patent 12450543
CONVERSATIONAL BUSINESS TOOL
2y 5m to grant · Granted Oct 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
18%
Grant Probability
43%
With Interview (+24.7%)
3y 6m
Median Time to Grant
High
PTA Risk
Based on 179 resolved cases by this examiner. Grant probability derived from career allow rate.
