Prosecution Insights
Last updated: April 19, 2026
Application No. 18/142,898

Generative Future Predictions based on Complex Events

Non-Final OA: §101, §102, §103, §112
Filed
May 03, 2023
Examiner
GERMICK, JOHNATHAN R
Art Unit
2122
Tech Center
2100 — Computer Architecture & Software
Assignee
Microsoft Technology Licensing, LLC
OA Round
1 (Non-Final)
Grant Probability: 47% (Moderate)
OA Rounds: 1-2
To Grant: 4y 2m
With Interview: 79%

Examiner Intelligence

Career Allow Rate: 47% (grants 47% of resolved cases; 43 granted / 91 resolved; -7.7% vs TC avg)
Interview Lift: +32.1% (strong; resolved cases with interview vs. without)
Avg Prosecution: 4y 2m (typical timeline); 28 currently pending
Total Applications: 119 across all art units (career history)
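The headline figures above are simple ratios. A quick sketch, using only the counts stated in this report, reproduces the allowance rate and recovers the implied Tech Center baseline from the stated -7.7% delta:

```python
# Reproduce the examiner's headline allowance figures from the counts
# stated above; the TC baseline is inferred from the -7.7% delta.
granted, resolved = 43, 91

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")        # 47.3%, shown as 47%

delta_vs_tc = -7.7                                   # percentage points
implied_tc_avg = allow_rate * 100 - delta_vs_tc
print(f"Implied TC average: {implied_tc_avg:.1f}%")  # about 55.0%
```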

Statute-Specific Performance

§101: 29.0% (-11.0% vs TC avg)
§103: 38.5% (-1.5% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 91 resolved cases
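Each delta lets the Tech Center baseline be recovered (rate minus delta). A small check, using only the figures listed in this chart, shows they are all consistent with a single black-line baseline:

```python
# Recover the implied Tech Center average for each statute from the
# examiner's rate and its stated delta (delta = rate - TC_avg).
rates = {
    "§101": (29.0, -11.0),
    "§103": (38.5, -1.5),
    "§102": (17.3, -22.7),
    "§112": (14.3, -25.7),
}
for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}%")
# Every statute implies the same 40.0% Tech Center baseline estimate.
```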

Office Action

Rejections under §101, §102, §103, and §112
DETAILED ACTION

This action is responsive to the Application filed on 05/03/2023. Claims 1-20 are pending in the case. Claims 1, 7, and 20 are independent claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 are rejected under 35 U.S.C. 101 because the claims are directed to an abstract idea without significantly more.

Regarding Claim 1: Under Step 1, the claim is directed to a method, which is a process, one of the statutory categories. Under Step 2A Prong 1, the claim recites the following limitations, which are considered mental evaluations: encoding data values from a first source with events from a second source as output time-series seed data; …to learn temporal dynamics of the output time-series seed data; generating synthetic time-series data that follows a step-wise temporal dynamic of the output time-series seed data; …competitively compares and ranks the synthetic time-series data; generating predictions of future data values from relatively high-ranking synthetic time-series data from the temporal sequential encoder. Each of these amounts to a mental evaluation because it describes manipulation of abstract data. Encoding data and generating data and predictions are manipulations of data which can be performed in the mind. Learning, comparison, and ranking of data are likewise organizations of data which can be performed in the mind.
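For orientation only, the claim-1 limitations quoted above can be rendered as a toy pipeline. This is a hypothetical sketch, not the applicant's implementation: the trained time generative network is replaced by noisy resampling, and the temporal sequential encoder by a simple dynamics-matching score.

```python
# Hypothetical sketch of the claim 1 steps (toy stand-ins throughout;
# names and the scoring rule are illustrative, not from the application).
import numpy as np

rng = np.random.default_rng(0)

def encode_seed(values, events):
    # Encode data values from a first source with events from a second
    # source as output time-series seed data (one channel each).
    return np.stack([values, events], axis=-1)              # shape (T, 2)

def generate_synthetic(seed, n=8):
    # Stand-in for a trained time generative network: jittered copies
    # that follow the seed's step-wise temporal dynamics.
    return [seed + rng.normal(0.0, 0.1, seed.shape) for _ in range(n)]

def rank(seed, candidates):
    # Stand-in for a temporal sequential encoder that competitively
    # compares and ranks candidates (closer step-wise dynamics = better).
    def score(c):
        return -np.abs(np.diff(c, axis=0) - np.diff(seed, axis=0)).mean()
    return sorted(candidates, key=score, reverse=True)

def predict(ranked, top_k=3):
    # Generate predictions of future data values from the relatively
    # high-ranking synthetic series (here: mean of their last steps).
    return np.mean([c[-1] for c in ranked[:top_k]], axis=0)

seed = encode_seed(np.sin(np.linspace(0.0, 3.0, 50)), rng.random(50))
prediction = predict(rank(seed, generate_synthetic(seed)))
print(prediction.shape)                                     # (2,)
```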
Under Step 2A Prong 2, the claim recites the following additional elements: training a time generative network with the output time-series seed…; applying a temporal sequential encoder. These amount to descriptions that merely make use of or apply the abstract idea because, under MPEP 2106.05(f)(1), "the claim fails to recite details of how a solution to a problem is accomplished"; no details of the functioning of the training or of the encoder are claimed. Therefore, the claim is directed to a judicial exception. Under Step 2B, the recited additional elements, considered alone or in combination, neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Regarding Claim 2: The rejection of claim 1 is incorporated, and further: The claim does not recite further abstract ideas beyond those recited in the parent claim. The claim recites the following additional element, in addition to those already identified in the parent claim: wherein training a time generative network comprises training a time generative adversarial network (which generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h)). The recited additional elements, considered alone or in combination, neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Regarding Claim 3: The rejection of claim 1 is incorporated, and further: The claim does not recite further abstract ideas beyond those recited in the parent claim. The claim recites the following additional element, in addition to those already identified in the parent claim: applying a reinforcement learning process (which generally links the use of the judicial exception to a particular technological environment or field of use;
see MPEP 2106.05(h)). The recited additional elements, considered alone or in combination, neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Regarding Claim 4: The rejection of claim 1 is incorporated, and further: The claim does not recite further abstract ideas beyond those recited in the parent claim. The claim recites the following additional element, in addition to those already identified in the parent claim: presenting the generated predictions on a user interface (which amounts to adding insignificant extra-solution activity to the judicial exception, because the limitation describes mere data gathering and/or data output; see MPEP 2106.05(g)). Under Step 2B, the additional element of presenting the generated predictions on a user interface is an insignificant extra-solution activity that is considered a well-understood, routine, conventional activity. In accordance with the MPEP, the following factual determination is based on the technical publication Planas et al., "Towards a model-driven approach for multiexperience AI-based user interfaces" (PTO-892), Section 2.2: "CUIs are becoming more and more popular every day. The most relevant example is the rise of bots …which are being increasingly adopted in various domains such as e-commerce or customer service, as a direct communication channel between companies and end-users. A bot wraps a CUI as key component but complements it with a behavior specification that defines how the bot should react to a given user message….Bots are classified in different types depending on the channel employed to communicate with the user. For instance, in chatbots the user interaction is through textual messages… a bot are usually designed as a set of intents, where each intent represents a possible user's goal when interacting with the bot… and finally, the bot produces a response that it is returned to the user via text".
This passage discloses that conversational user interfaces are popular and increasingly adopted (corresponding to routine and conventional); further, the reference notes that these CUIs interact via textual messages returned to a user, which are predictions of user intent (corresponding to presenting the generated predictions on a user interface). As such, the insignificant extra-solution activity is considered a well-understood, routine, conventional activity.

Regarding Claim 5: The rejection of claim 4 is incorporated, and further: The claim does not recite further abstract ideas beyond those recited in the parent claim. The claim recites the following additional element, in addition to those already identified in the parent claim: presenting is performed responsive to a query received from a user (which amounts to adding insignificant extra-solution activity to the judicial exception, because the limitation describes mere data gathering and/or data output; see MPEP 2106.05(g)). Under Step 2B, the additional element of presenting being performed responsive to a query received from a user is an insignificant extra-solution activity that is considered a well-understood, routine, conventional activity. In accordance with the MPEP, this factual determination is based on the same Section 2.2 passage of Planas et al. (PTO-892) reproduced in the rejection of claim 4, which discloses that conversational user interfaces are popular and increasingly adopted (corresponding to routine and conventional) and that these CUIs respond to user interactions or queries with textual messages returned to the user, which are predictions of user intent (corresponding to presenting being performed responsive to a query received from a user). As such, the insignificant extra-solution activity is considered a well-understood, routine, conventional activity.

Regarding Claim 6: The rejection of claim 5 is incorporated, and further: The claim does not recite further abstract ideas beyond those recited in the parent claim. The claim recites the following additional element, in addition to those already identified in the parent claim: the presenting comprises a quantitative graph and/or a natural language answer to the query from the user (which amounts to adding insignificant extra-solution activity to the judicial exception, because the limitation describes mere data gathering and/or data output; see MPEP 2106.05(g)). Under Step 2B, this additional element is an insignificant extra-solution activity that is considered a well-understood, routine, conventional activity. In accordance with the MPEP, this factual determination is based on the same Section 2.2 passage of Planas et al. (PTO-892) reproduced in the rejection of claim 4, which discloses that conversational user interfaces are popular and increasingly adopted (corresponding to routine and conventional) and that these CUIs respond to user interactions or queries with textual messages returned to the user, which are natural language answers to a user interaction or query (corresponding to the presenting comprising a quantitative graph and/or a natural language answer to the query from the user). As such, the insignificant extra-solution activity is considered a well-understood, routine, conventional activity.

Regarding Claim 7: Under Step 1, the claim is directed to a computing system, which is a machine, one of the statutory categories. Under Step 2A Prong 1, the claim recites the following limitations, which are considered mental evaluations: …instantiate a generative network and a temporal sequential encoder; …to model temporal transition dynamics of time-series data to associated complex events; …to reason noisy observations associated with the model and to control generation of future predictions by the model. Each of these amounts to a mental evaluation because it describes manipulation of abstract data.
Reasoning and modeling are abstractions about abstract data which can be performed in the mind. Further, instantiation and control in this context amount to selectable configuration choices; no details about how the instantiation or control are performed are provided to suggest these are anything other than configuration choices which can be determined in the mind.

Under Step 2A Prong 2, the claim recites the following additional elements: a processor; and a storage resource storing computer-readable instructions which, when executed by the processor, cause the processor to…; the generative network configured…; and the temporal sequential encoder configured… (which amounts to adding the words "apply it" to implement an abstract idea on a generic computer; see MPEP 2106.05(f)(1): "the claim fails to recite details of how a solution to a problem is accomplished"). Therefore, the claim is directed to a judicial exception. Under Step 2B, the recited additional elements, considered alone or in combination, neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Regarding Claim 8: The rejection of claim 7 is incorporated, and further: The claim does not recite further abstract ideas beyond those recited in the parent claim. The claim recites the following additional element, in addition to those already identified in the parent claim: wherein the generative network comprises a time generative adversarial network or wherein the generative network comprises a seed based generative decoder (which generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h)). The recited additional elements, considered alone or in combination, neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.
Regarding Claim 9: The rejection of claim 8 is incorporated, and further: The claim recites a further abstract idea: produce possible future predictions of the time-series data and associated complex events (which describes a mental evaluation). The claim recites the following additional element, in addition to those already identified in the parent claim: the time generative adversarial network comprises a generator (which generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h)). The recited additional elements, considered alone or in combination, neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Regarding Claim 10: The rejection of claim 9 is incorporated, and further: The claim does not recite further abstract ideas beyond those recited in the parent claim. The claim recites the following additional elements, in addition to those already identified in the parent claim: the time generative adversarial network comprises a discriminator (which generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h)) configured to receive the possible future predictions and enhance accuracy of the generator (which amounts to adding insignificant extra-solution activity to the judicial exception, because the limitation describes mere data gathering; see MPEP 2106.05(g)). Further, under Step 2B, the additional element configured to receive the possible future predictions and enhance accuracy of the generator
(is well-understood, routine, and conventional activity because it amounts to "transmitting or receiving data over a network"; see MPEP 2106.05(d)(II)(i)). The recited additional elements, considered alone or in combination, neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Regarding Claim 11: The rejection of claim 10 is incorporated, and further: The claim does not recite further abstract ideas beyond those recited in the parent claim. The claim recites the following additional element, in addition to those already identified in the parent claim: wherein the discriminator is configured to enhance the accuracy via adversarial training (which generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h)). The recited additional elements, considered alone or in combination, neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Regarding Claim 12: The rejection of claim 10 is incorporated, and further: The claim does not recite further abstract ideas beyond those recited in the parent claim. The claim recites the following additional elements, in addition to those already identified in the parent claim: the temporal sequential encoder comprises a reinforcement learning agent or wherein the temporal sequential encoder comprises diffusion encoders, time-series-based encoders, or transformer encoders (which generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h)). The recited additional elements, considered alone or in combination, neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.
Regarding Claim 13: The rejection of claim 12 is incorporated, and further: The claim recites a further abstract idea: to produce an action that identifies how close the time-series data is to the associated complex events (which describes a mental evaluation). The claim recites the following additional elements, in addition to those already identified in the parent claim: wherein the reinforcement learning agent is configured … and the reinforcement learning agent is configured (which amounts to adding the words "apply it" to implement an abstract idea on a generic computer; see MPEP 2106.05(f)(1): "the claim fails to recite details of how a solution to a problem is accomplished"); and to receive rewards and states based upon the time-series data (which amounts to adding insignificant extra-solution activity to the judicial exception, because the limitation describes mere data gathering; see MPEP 2106.05(g)). Further, under Step 2B, the additional element to receive rewards and states based upon the time-series data is well-understood, routine, and conventional activity because it amounts to "transmitting or receiving data over a network" (see MPEP 2106.05(d)(II)(i)). The recited additional elements, considered alone or in combination, neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Regarding Claim 14: The rejection of claim 13 is incorporated, and further: The claim recites a further abstract idea: cause seeds to be generated from the action.
(which describes a mental evaluation). The claim recites the following additional element, in addition to those already identified in the parent claim: the reinforcement learning agent is configured to (which amounts to adding the words "apply it" to implement an abstract idea on a generic computer; see MPEP 2106.05(f)(1): "the claim fails to recite details of how a solution to a problem is accomplished"). The recited additional elements, considered alone or in combination, neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Regarding Claim 15: The rejection of claim 10 is incorporated, and further: The claim does not recite further abstract ideas beyond those recited in the parent claim. The claim recites the following additional element, in addition to those already identified in the parent claim: the seeds comprise a variable that represents a relationship between the time-series data and associated complex events (which generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h)). The recited additional elements, considered alone or in combination, neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Regarding Claim 16: The rejection of claim 15 is incorporated, and further: The claim recites a further abstract idea: iteratively refine the model with the seeds to enhance accuracy of the future predictions.
(which describes a mental evaluation, because refinements broadly include mental evaluations and determinations about parameters of the model). The claim recites the following additional element, in addition to those already identified in the parent claim: wherein the generative network is configured (which amounts to adding the words "apply it" to implement an abstract idea on a generic computer; see MPEP 2106.05(f)(1): "the claim fails to recite details of how a solution to a problem is accomplished"). The recited additional elements, considered alone or in combination, neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Regarding Claim 17: The rejection of claim 16 is incorporated, and further: The claim recites a further abstract idea: control the generator's output by manipulating the seeds (which describes a mental evaluation, because manipulation of seeds amounts to abstract data manipulation). The claim recites the following additional element, in addition to those already identified in the parent claim: the reinforcement learning agent is configured to (which amounts to adding the words "apply it" to implement an abstract idea on a generic computer; see MPEP 2106.05(f)(1): "the claim fails to recite details of how a solution to a problem is accomplished"). The recited additional elements, considered alone or in combination, neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Regarding Claim 18: The rejection of claim 16 is incorporated, and further: The claim recites a further abstract idea: provide a latent space for information abstraction that allows latent dynamics of both real and synthetic time-series data to be synchronized through a supervised loss.
(which describes a mental evaluation, because information abstraction is abstract data manipulation). The claim recites the following additional element, in addition to those already identified in the parent claim: the generative network comprises an embedding function configured to (which amounts to adding the words "apply it" to implement an abstract idea on a generic computer; see MPEP 2106.05(f)(1): "the claim fails to recite details of how a solution to a problem is accomplished"). The recited additional elements, considered alone or in combination, neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Regarding Claim 19: The rejection of claim 18 is incorporated, and further: The claim recites a further abstract idea: behavior shaping and distance adjustments are applied to the model to decrease deltas between possible future predictions and actual values in the time-series data (which describes a mental evaluation, because distance adjustments and behavior shaping are abstract data manipulation). The recited additional elements, considered alone or in combination, neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The term "relatively high-ranking synthetic time-series data" in claim 1 is a relative term which renders the claim indefinite. The term "relatively high ranking" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

The terms "complex events" and "noisy observations" in claim 7 are relative terms which render the claim indefinite. The terms are not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

The term "complex events" in claims 9, 13, 15, and 20 is a relative term which renders the claim indefinite. The term is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

Each of the dependent claims (2-6, 8-19) is rejected by virtue of its dependency on a rejected base claim.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 7-13, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kobayashi, "Situated GAIL: Multitask imitation using task-conditioned adversarial inverse reinforcement learning."

Claim 1

Kobayashi teaches, encoding data values from a first source with events from a second source as output time-series seed data; (pg 9-10: "The generator network had five input nodes: two nodes corresponding to the agent's state and three nodes corresponding to the task variable c, and four output nodes corresponding to the agent's action." The generator encodes the two types of inputs (states and actions), which correspond to values from a first source and events from a second source; further, the output time-series seed data is the output. pg 4: "At a discrete time t, an agent observes a state st and selects an action at according to the agent's policy." The states and actions correspond to data sampled at discrete times, and are thus temporal data.)

training a time generative network with the output time-series seed data to learn temporal dynamics of the output time-series seed data (pg 10: "The discriminator network had nine input nodes for the agent's action, state, and task variable." pg 5: "GAIL considers this optimization problem as the learning of a discriminator and a generator.
The learning rule of GAN can then be applied where w and θ are the discriminator and generator parameters… D(·) is the output of the discriminator." The generator is trained according to the learning function; the optimization function uses the output time-series seed data from the generator. Implicitly, the learned parameters reflect the temporal dynamics of the output.)

generating synthetic time-series data that follows a step-wise temporal dynamic of the output time-series seed data; applying a temporal sequential encoder that competitively compares and ranks the synthetic time-series data; generating predictions of future data values from relatively high-ranking synthetic time-series data from the temporal sequential encoder. (pg 4, Section 3: "Let the tuple (S,A,P,R,γ,ρ0,T) be a finite-horizon Markov decision process (MDP), where S and A are the state and action spaces respectively, and P : S×A×S → R is the state transition probability of the system dynamics. At a discrete time t, an agent observes a state st and selects an action at according to the agent's policy." pg 5: "The discriminator learns to correctly identify whether the distribution that generated the state–action pair is a generator or an expert. The generator learns to output the selection probability of the action so that the discriminator confuses the generator's trajectories with those of the expert." The generator generates a state-action pair at a discrete time, thus following a step-wise temporal dynamic. The discriminator, by identifying the distribution, competitively compares and ranks the synthetic time series. The generated state-action pairs are future data values which are high-ranking synthetic time series.)

Claim 2

Kobayashi teaches claim 1. Kobayashi further teaches, training a time generative network comprises training a time generative adversarial network.
(pg 3: "That is, the generator learns to produce behaviors similar to those presented by an expert, while the discriminator learns to discriminate the output of the generator from the expert's behaviors. This competitive learning framework based on the architecture of GAN ensures that it has a unique optimal cost function and policy." pg 5: "GAIL considers this optimization problem as the learning of a discriminator and a generator. The learning rule of GAN…" The GAIL system uses a GAN for training/learning, which is a time generative adversarial network.)

Claim 3

Kobayashi teaches, applying a temporal sequential encoder comprises applying a reinforcement learning process. (pg 10, citation and Figure 3: "Figure 3. Structures of the generator, discriminator, and value function networks." pg 9, Algorithm 1 [reproduced as an image in the original record]: the policy gradient algorithm, a reinforcement learning process, applies the value function, corresponding to the temporal sequential encoder.)

Claim 7

Kobayashi teaches claim 1. Kobayashi further teaches, A computing system comprising: a processor; and a storage resource storing computer-readable instructions which, when executed by the processor, cause the processor to (pg 13: "The second experiment was conducted using a robot-arm simulator. In this experiment, we examined whether the proposed method can learn to imitate robot-arm reaching behavior in a continuous space. We used the Reacher-v2 environment provided by the OpenAI Gym platform using the MuJoCo physical simulator." The system is implemented on the OpenAI Gym platform, which requires a processor and storage as claimed.)

instantiate a generative network and a temporal sequential encoder (pg 9, Algorithm 1 [reproduced as an image in the original record]: the generator and value function, corresponding to the generative network and temporal sequential encoder, are instantiated with initial parameters.)
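The generator-discriminator competition that the cited GAIL passages describe can be sketched as a pair of loss functions. This is a generic minimax-GAN illustration under stated assumptions, not code from Kobayashi or from the application:

```python
# Generic sketch of the GAN-style objective described in the cited
# GAIL passages (illustrative only; not taken from Kobayashi).
import math

def discriminator_loss(d_expert, d_generated):
    # The discriminator learns to score expert state-action pairs near 1
    # and generated pairs near 0 (average binary cross-entropy).
    return -(sum(math.log(d) for d in d_expert) / len(d_expert)
             + sum(math.log(1.0 - d) for d in d_generated) / len(d_generated))

def generator_loss(d_generated):
    # The generator is rewarded when the discriminator scores its
    # trajectories as expert-like, i.e. when the discriminator is confused.
    return -sum(math.log(d) for d in d_generated) / len(d_generated)

# A discriminating critic gives a low discriminator loss:
print(round(discriminator_loss([0.9, 0.9], [0.1, 0.1]), 3))   # 0.211
# Perfect confusion (all scores 0.5) sits at the GAN equilibrium, 2 ln 2:
print(round(discriminator_loss([0.5, 0.5], [0.5, 0.5]), 3))   # 1.386
```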
the generative network configured to model temporal transition dynamics of time-series data to associated complex events; (pg 9-10 “The generator network had five input nodes: two nodes corresponding to the agent’s state and three nodes corresponding to the task variable c, and four output nodes corresponding to the agent’s action” pg 5 “GAIL considers this optimization problem as the learning of a discriminator and a generator. The learning rule of GAN can then be applied where w and θ are the discriminator and generator parameters” The parameters of the generative network model the transition dynamics of the states and actions, which correspond to the time-series data and the events respectively.)

and the temporal sequential encoder configured to reason noisy observations associated with the model and to control generation of future predictions by the model. (pg 5 “GAIL considers this optimization problem as the learning of a discriminator and a generator. The learning rule of GAN can then be applied where w and θ are the discriminator and generator parameters…The discriminator learns to correctly identify whether the distribution that generated the state–action pair is a generator or an expert” The discriminator identifies the distributions, which is reasoning about the noisy observations associated with the generated predictions. Because the discriminator and generator are learned jointly, the discriminator in part controls generation of future predictions.)

Regarding Claim 8: Kobayashi teaches claim 7. Kobayashi further teaches wherein the generative network comprises a time generative adversarial network or wherein the generative network comprises a seed-based generative decoder. (pg 3 “That is, the generator learns to produce behaviors similar to those presented by an expert, while the discriminator learns to discriminate the output of the generator from the expert’s behaviors.
This competitive learning framework based on the architecture of GAN ensures that it has a unique optimal cost function and policy” pg 5 “GAIL considers this optimization problem as the learning of a discriminator and a generator. The learning rule of GAN…” The GAIL system uses a GAN for training/learning, which is a time generative adversarial network.)

Regarding Claim 9: Kobayashi teaches claim 8. Kobayashi further teaches wherein the time generative adversarial network comprises a generator configured to produce possible future predictions of the time-series data and associated complex events. (pg 3 “That is, the generator learns to produce behaviors similar to those presented by an expert” pg 5 “The discriminator learns to correctly identify whether the distribution that generated the state–action pair is a generator or an expert. The generator learns to output the selection probability of the action so that the discriminator confuses the generator’s trajectories with those of the expert” The generator generates a distribution of future predictions of the time-series actions, which are associated with the complex events.)

Regarding Claim 10: Kobayashi teaches claim 9. Kobayashi further teaches wherein the time generative adversarial network comprises a discriminator configured to receive the possible future predictions and enhance accuracy of the generator. (pg 3 “That is, the generator learns to produce behaviors similar to those presented by an expert, while the discriminator learns to discriminate the output of the generator from the expert’s behaviors. This competitive learning framework based on the architecture of GAN ensures that it has a unique optimal cost function and policy” pg 5 “GAIL considers this optimization problem as the learning of a discriminator and a generator.
The learning rule of GAN… [equation image omitted] …The generator learns to output the selection probability of the action so that the discriminator confuses the generator’s trajectories with those of the expert.” The learning rule is a min-max problem, which enhances the future predictions of the generator by minimizing the error.)

Regarding Claim 11: Kobayashi teaches claim 10. Kobayashi further teaches wherein the time generative adversarial network comprises a discriminator configured to receive the possible future predictions and enhance accuracy of the generator. (pg 3 “That is, the generator learns to produce behaviors similar to those presented by an expert, while the discriminator learns to discriminate the output of the generator from the expert’s behaviors. This competitive learning framework based on the architecture of GAN ensures that it has a unique optimal cost function and policy” pg 5 “GAIL considers this optimization problem as the learning of a discriminator and a generator. The learning rule of GAN…” GAN is a term of art for “generative adversarial network”; thus the learning rule, which aims to optimize or enhance accuracy, learns via adversarial training of both the discriminator and the generator.)

Regarding Claim 12: Kobayashi teaches claim 7. Kobayashi further teaches wherein the temporal sequential encoder comprises a reinforcement learning agent or wherein the temporal sequential encoder comprises diffusion encoders, time-series-based encoders, or transformer encoders. (pg 10 and figure 3 “Figure 3.
Structures of the generator, discriminator, and value function networks.” pg 9 Algorithm 1 [algorithm image omitted] The policy gradient algorithm, a reinforcement learning process, corresponds to the reinforcement learning agent.)

Regarding Claim 13: Kobayashi teaches claim 12. Kobayashi further teaches the reinforcement learning agent is configured to receive rewards and states based upon the time-series data (pg 5 Section 4 “This section presents a detailed formulation of imitation learning based on GAIL. First, we introduce the existing models, GAIL, InfoGAIL, and AIRL as components of the proposed model” “Reward estimation based on an adversarial training: GAIL and AIRL… the synthesis problem of IRL and RL can be written as the following optimization problem … and ρπ(s,a) is the joint distribution of state s and action a under policy π.” pg 6 “To represent reward functions, AIRL employs a special structure for the discriminator corresponding to an odds ratio between the policy and the exponential reward” As shown, the reward estimation is based on the states and actions, which are in turn based on the time-series data.) and the reinforcement learning agent is configured to produce an action that identifies how close the time-series data is to the associated complex events. (pg 7 “where f is arbitrary function of the state s, the action a, and the kind of task c. If the above problems converge… π∗(a|s,c) is the optimal policy, and V∗(s,c) and Q∗(s,a,c) are the optimal value and action value function” Through training, the system identifies the optimal action to produce according to the function associated with the task c, which is the complex event. The produced action thus characterizes how close the time-series data is in value to the events.)
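Claims 12 and 13 are mapped to the value function and policy-gradient components of Kobayashi’s MDP formulation, in which an agent observing states selects actions to maximize discounted reward. For orientation, the following is a minimal sketch of computing the optimal value function V∗ by tabular value iteration; the two-state, two-action MDP and the infinite-horizon discounted setting are invented for illustration (Kobayashi uses a finite-horizon MDP and learns the reward adversarially rather than fixing it).

```python
# Tabular value iteration on a toy MDP (S, A, P, R, gamma).
# P[s][a] is a list of (next_state, probability); R[s][a] is the reward.
S = [0, 1]
A = [0, 1]
gamma = 0.9

P = {
    0: {0: [(0, 1.0)], 1: [(1, 1.0)]},
    1: {0: [(0, 1.0)], 1: [(1, 1.0)]},
}
R = {
    0: {0: 0.0, 1: 1.0},
    1: {0: 0.0, 1: 2.0},
}

def value_iteration(P, R, gamma, iters=500):
    # Bellman optimality update: V(s) <- max_a [ R(s,a) + gamma * E[V(s')] ]
    V = {s: 0.0 for s in S}
    for _ in range(iters):
        V = {
            s: max(
                R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in A
            )
            for s in S
        }
    return V

V = value_iteration(P, R, gamma)
# Repeating action 1 in state 1 earns reward 2 forever: V*(1) = 2 / (1 - 0.9) = 20.
print(round(V[1], 1), round(V[0], 1))  # → 20.0 19.0
```

The value function here plays the role the rejection assigns to the “temporal sequential encoder”: it scores states so that the policy can rank and select actions.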
Regarding Claim 20: Kobayashi teaches a computing device, comprising: a processor; and a storage resource storing computer-readable instructions which, when executed by the processor, cause the processor to: (pg 13 “The second experiment was conducted using a robot-arm simulator. In this experiment, we examined whether the proposed method can learn to imitate robot-arm reaching behavior in a continuous space. We used the Reacher-v2 environment provided by the OpenAI Gym platform using the MuJoCo physical simulator” The system is implemented on the OpenAI Gym platform, which requires a processor and storage as claimed.)

obtain temporal data relating to a system from a first source; obtain complex events that can affect the system from a second source; (pg 9-10 “The generator network had five input nodes: two nodes corresponding to the agent’s state and three nodes corresponding to the task variable c, and four output nodes corresponding to the agent’s action” pg 4 “At a discrete time t, an agent observes a state s_t and selects an action a_t according to the agent’s policy” The generator encodes the two types of inputs (states and actions), which correspond to values from a first source and events from a second source. The states and actions correspond to data sampled at discrete times, thus temporal data.)

train a model iteratively using generative networks that correlate the temporal data from the first source and the complex events from the second source; (pg 9-10 “The generator network had five input nodes: two nodes corresponding to the agent’s state and three nodes corresponding to the task variable c, and four output nodes corresponding to the agent’s action” pg 5 “GAIL considers this optimization problem as the learning of a discriminator and a generator.
The learning rule of GAN can then be applied where w and θ are the discriminator and generator parameters” The parameters of the generative network are iteratively updated via learning, which correlates the states and actions.)

and employ a temporal sequential encoder to control predictions for future temporal data utilizing the trained model. (pg 5 “GAIL considers this optimization problem as the learning of a discriminator and a generator. The learning rule of GAN can then be applied where w and θ are the discriminator and generator parameters…The discriminator learns to correctly identify whether the distribution that generated the state–action pair is a generator or an expert” The discriminator identifies the distributions, which is reasoning about the noisy observations associated with the generated predictions. Because the discriminator and generator are learned jointly, the discriminator in part controls generation of future predictions.)

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 4-6 are rejected under 35 U.S.C. § 103 as being unpatentable over Kobayashi in view of Planas, “Towards a model-driven approach for multi-experience AI-based user interfaces.”

Regarding Claim 4: Kobayashi teaches claim 1. Kobayashi does not explicitly teach presenting the generated predictions on a user interface. Planas, when addressing AI-based user interfaces, teaches presenting the generated predictions on a user interface. (pg 3, figure 2 [figure image omitted] The chatbot presents the generated predictions from the AI system to the user.) Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the AI prediction system of Kobayashi to comprise a user interface for presenting predictions to a user, as taught by Planas. One would have been motivated to make such a combination because both Kobayashi and Planas describe predictions generated by AI systems. Further, Planas notes that conversational user interfaces “are not isolated components.
Instead, they are a core element of the software system that embeds them… CUIs must interact with the other interfaces of the system and have access to its functionality and resources… this paper has presented a model-based approach for CUIs covering both the design of each individual interface and the discussion of how such design could be combined with other software models for a complete software generation process” (Section 7, Conclusion).

Regarding Claim 5: Kobayashi/Planas teaches claim 4. Planas further teaches the presenting is performed responsive to a query received from a user. (pg 3, figure 2: the chatbot presents the generated predictions from the AI system to the user on an interface responsive to a query.) Kobayashi and Planas are combined for the reasons provided in the rejection of claim 4.

Regarding Claim 6: Kobayashi/Planas teaches claim 5. Planas further teaches wherein the presenting comprises a quantitative graph and/or a natural language answer to the query from the user. (pg 3, figure 2: the chatbot presents the generated predictions (i.e., text, corresponding to a natural language answer to the user query) from the AI system to the user on an interface responsive to a query.) Kobayashi and Planas are combined for the reasons provided in the rejection of claim 4.

Allowable Subject Matter

Claims 14-19 are objected to as being dependent upon a rejected base claim but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Examiner notes that this is contingent on the remaining non-prior-art rejections being resolved. Specifically, none of the references of record, either alone or in combination, fairly discloses or suggests the limitations of claim 14.

Conclusion

Prior art: The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Salh et al.,
“Refiner GAN Algorithmically Enabled Deep-RL for Guaranteed Traffic Packets in Real-Time URLLC B5G Communication Systems,” describes a reinforcement learning framework combined with a GAN that generates synthetic data to refine the deep RL system. Zhan et al., “Human-Guided Robot Behavior Learning: A GAN-Assisted Preference-Based Reinforcement Learning Approach,” describes human supervision that guides GAN networks toward learning complex behaviors. Shen et al., “Learning to Generate Visual Questions with Noisy Supervision,” describes a GAN network guided by noisy supervision.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHNATHAN R GERMICK, whose telephone number is (571) 272-8363. The examiner can normally be reached M-F 9:30-4:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki, can be reached at 571-272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/J.R.G./
Examiner, Art Unit 2122

/KAKALI CHAKI/
Supervisory Patent Examiner, Art Unit 2122

Prosecution Timeline

May 03, 2023
Application Filed
Feb 24, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566962
DITHERED QUANTIZATION OF PARAMETERS DURING TRAINING WITH A MACHINE LEARNING TOOL
2y 5m to grant Granted Mar 03, 2026
Patent 12566983
MACHINE LEARNING CLASSIFIERS PREDICTION CONFIDENCE AND EXPLANATION
2y 5m to grant Granted Mar 03, 2026
Patent 12554977
DEEP NEURAL NETWORK FOR MATCHING ENTITIES IN SEMI-STRUCTURED DATA
2y 5m to grant Granted Feb 17, 2026
Patent 12443829
NEURAL NETWORK PROCESSING METHOD AND APPARATUS BASED ON NESTED BIT REPRESENTATION
2y 5m to grant Granted Oct 14, 2025
Patent 12443868
QUANTUM ERROR MITIGATION USING HARDWARE-FRIENDLY PROBABILISTIC ERROR CORRECTION
2y 5m to grant Granted Oct 14, 2025
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
47%
Grant Probability
79%
With Interview (+32.1%)
4y 2m
Median Time to Grant
Low
PTA Risk
Based on 91 resolved cases by this examiner. Grant probability derived from career allow rate.
