Prosecution Insights
Last updated: April 19, 2026
Application No. 18/460,478

META TEMPORAL POINT PROCESSES

Non-Final OA: §101, §103
Filed
Sep 01, 2023
Examiner
CHOI, YUK TING
Art Unit
2164
Tech Center
2100 — Computer Architecture & Software
Assignee
Royal Bank Of Canada
OA Round
1 (Non-Final)
72%
Grant Probability
Favorable
1-2
OA Rounds
3y 3m
To Grant
99%
With Interview

Examiner Intelligence

Grants 72% — above average
72%
Career Allow Rate
466 granted / 652 resolved
+16.5% vs TC avg
Strong interview lift
+37.4%
Interview Lift
among resolved cases with an interview
Typical timeline
3y 3m
Avg Prosecution
29 currently pending
Career history
681
Total Applications
across all art units

Statute-Specific Performance

§101
16.8%
-23.2% vs TC avg
§103
55.0%
+15.0% vs TC avg
§102
13.5%
-26.5% vs TC avg
§112
6.8%
-33.2% vs TC avg
Black line = Tech Center average estimate • Based on career data from 652 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, 18/460,478, filed on 03/11/2024, is being examined under the first inventor to file provisions of the AIA.

Drawings

2. The drawings received on 09/01/2023 are accepted by the Examiner.

Information Disclosure Statement

3. The information disclosure statement (IDS) submitted on 03/11/2024 is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

4. Claims 1-34 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter, namely a judicial exception (i.e., an abstract idea) without significantly more. Claim 1 is directed to the abstract idea of a machine learning method for prediction for a temporal point process, as explained in detail below. The claim does not include elements sufficient to amount to significantly more than the judicial exception because the elements are concepts that can be performed in the human mind and do not add meaningful limits to practicing the abstract idea.
Claim 1 recites a machine learning method comprising, at least in part: receiving, by at least one trained encoder, an event series comprising a plurality of discrete event times (e.g., observing an event series comprising discrete event times can be performed in the human mind using a computer as a tool); outputting, by the at least one trained encoder, an encoded history of context features, wherein the encoded history is derived from the event times and is restricted to a local history window of size k, where the local history window excludes those of the event times that are more than k events ago (e.g., observing and evaluating context features with respect to the event series using a window size can be performed in the human mind using the computer as a tool); generating a global feature G from the encoded history, wherein generating the global feature G is performed using a subset of the encoded history that excludes a most recent one of the context features (e.g., generating a feature G from the previously observed and evaluated context features, without the most recent context feature, can be performed in the human mind using the computer as a tool); providing, to a trained decoder: a representation of the global feature G; and the most recent one of the context features (e.g., inputting a sample of the feature G and the most recent context feature can be performed in the human mind using a computer as a tool); and outputting, by the trained decoder, a prediction for a time of a next event, wherein the prediction is derived from at least the representation of the global feature G and the most recent one of the context features (e.g., outputting a prediction for a time of a next event from the feature G and the most recent context feature can be performed in the human mind using the computer as a tool). Claim 1, as recited, falls within one of the groupings of abstract ideas [e.g., mental processes] enumerated in the 2019 PEG.
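For orientation, the data flow the claim recites can be sketched in code. This is a hypothetical, deliberately simplified illustration only: the scaling used as a stand-in for the learned encoder, the mean used to build G, and the additive decoder are all assumptions of this sketch, not anything disclosed in the application or relied on by the examiner.

```python
# Hypothetical sketch of the recited data flow (NOT the applicant's
# actual model): encode event times into context features, keep only a
# local history window of size k, build a global feature G that excludes
# the most recent context feature, then decode a next-event prediction.

def encode(event_times, k):
    """Toy stand-in for the trained encoder: one context feature per
    event, restricted to the local history window of size k (events more
    than k ago are excluded)."""
    features = [t * 0.5 for t in event_times]  # placeholder "encoding"
    return features[-k:]

def global_feature(history):
    """Global feature G from the subset of the encoded history that
    excludes the most recent context feature (here: a simple mean)."""
    subset = history[:-1]
    return sum(subset) / len(subset)

def decode(G, most_recent):
    """Toy stand-in for the trained decoder: predict the time of the
    next event from G and the most recent context feature."""
    return G + most_recent

event_times = [1.0, 2.0, 4.0, 7.0, 11.0]
history = encode(event_times, k=3)   # window keeps the 3 newest features
G = global_feature(history)          # built without the newest feature
prediction = decode(G, history[-1])  # prediction uses G + newest feature
print(history, G, prediction)
```

The sketch is only meant to make the claim's exclusions concrete: the window drops features more than k events old, and G is computed before the most recent feature is reintroduced at the decoder.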
The recited concept can be performed in the human mind, including observation, evaluation, judgment, and opinion, using a computer as a tool. That is, other than reciting a computer-implemented machine learning method comprising at least a trained encoder and a trained decoder to learn an event series comprising a plurality of discrete times, nothing in the claim precludes the steps from practically being performed in the mind. The formulating and learning features in the claim are recited at a high level of generality and add no more to the claimed invention than a computer to perform an abstract idea. The additional feature merely uses a computer/device as a tool to learn data, and the preceding series of data-gathering steps is insignificant extra-solution activity; thus, the judicial exception is not integrated into a practical application. The additional feature does not appear to be an improvement to the functioning of a computer or to any other technology or technical field, and does not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually. There is no indication that the combination of elements improves the functioning of a computer or any other technology; their collective functions merely provide conventional computer implementation. Therefore, claim 1 is not patent eligible.

Claims 2, 3, 5 and 11 recite similar features as claim 1 and also fall within one of the groupings of abstract ideas [e.g., mental processes] enumerated in the 2019 PEG. The recited concept can be performed in the human mind, including observation, evaluation, judgment, and opinion. Claims 2, 3, 5 and 11 further recite that the feature G has a global latent variable z and a permutation-invariant operation incorporating all members of the subset of the encoded history.
There are no additional features turning the judicial exception into a practical application. The claimed features do not appear to be improvements to the functioning of a computer or to any other technology or technical field. Therefore, claims 2, 3, 5 and 11 are not patent eligible.

Claims 4, 6-8, 9, 10 and 12 recite similar features as claim 1 and also fall within one of the groupings of abstract ideas [e.g., mental processes] enumerated in the 2019 PEG. The recited concept can be performed in the human mind, including observation, evaluation, judgment, and opinion. Claims 4, 6-8, 9, 10 and 12 recite providing attention features to a trained decoder and a trained encoder, wherein a first encoder and a second encoder can be duplicate encoders, different encoders, and/or share at least some model parameters. The additional feature merely uses a computer/device as a tool to transform data, and the preceding series of data-gathering steps is insignificant extra-solution activity; thus, the judicial exception is not integrated into a practical application. The additional feature does not appear to be an improvement to the functioning of a computer or to any other technology or technical field, and does not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually. There is no indication that the combination of elements improves the functioning of a computer or any other technology; their collective functions merely provide conventional computer implementation. Therefore, claims 4, 6-8, 9, 10 and 12 are not patent eligible.
Claim 13 recites a data processing system to implement a machine learning method for prediction for a temporal point process comprising, at least in part: receiving, by at least one trained encoder, an event series comprising a plurality of discrete event times (e.g., observing an event series comprising discrete event times can be performed in the human mind using a computer as a tool); outputting, by the at least one trained encoder, an encoded history of context features, wherein the encoded history is derived from the event times and is restricted to a local history window of size k, where the local history window excludes those of the event times that are more than k events ago (e.g., observing and evaluating context features with respect to the event series using a window size can be performed in the human mind using the computer as a tool); generating a global feature G from the encoded history, wherein generating the global feature G is performed using a subset of the encoded history that excludes a most recent one of the context features (e.g., generating a feature G from the previously observed and evaluated context features, without the most recent context feature, can be performed in the human mind using the computer as a tool); providing, to a trained decoder: a representation of the global feature G; and the most recent one of the context features (e.g., inputting a sample of the feature G and the most recent context feature can be performed in the human mind using a computer as a tool); and outputting, by the trained decoder, a prediction for a time of a next event, wherein the prediction is derived from at least the representation of the global feature G and the most recent one of the context features (e.g., outputting a prediction for a time of a next event from the feature G and the most recent context feature can be performed in the human mind using the computer as a tool). Claim 13, as recited, falls within one of the groupings of abstract ideas [e.g., mental processes] enumerated in the 2019 PEG.

The recited concept can be performed in the human mind, including observation, evaluation, judgment, and opinion, using a computer as a tool. That is, other than reciting a computer-implemented machine learning method comprising at least a trained encoder and a trained decoder to learn an event series comprising a plurality of discrete times, nothing in the claim precludes the steps from practically being performed in the mind. The formulating and learning features in the claim are recited at a high level of generality and add no more to the claimed invention than a computer to perform an abstract idea. The additional feature merely uses a computer/device as a tool to learn data, and the preceding series of data-gathering steps is insignificant extra-solution activity; thus, the judicial exception is not integrated into a practical application. The additional feature does not appear to be an improvement to the functioning of a computer or to any other technology or technical field, and does not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually. There is no indication that the combination of elements improves the functioning of a computer or any other technology; their collective functions merely provide conventional computer implementation. Therefore, claim 13 is not patent eligible.

Claims 14, 15, 17 and 23 recite similar features as claim 13 and also fall within one of the groupings of abstract ideas [e.g., mental processes] enumerated in the 2019 PEG. The recited concept can be performed in the human mind, including observation, evaluation, judgment, and opinion.
Claims 14, 15, 17 and 23 further recite that the feature G has a global latent variable z and a permutation-invariant operation incorporating all members of the subset of the encoded history. There are no additional features turning the judicial exception into a practical application. The claimed features do not appear to be improvements to the functioning of a computer or to any other technology or technical field. Therefore, claims 14, 15, 17 and 23 are not patent eligible.

Claims 16 and 18-22 recite similar features as claim 13 and also fall within one of the groupings of abstract ideas [e.g., mental processes] enumerated in the 2019 PEG. The recited concept can be performed in the human mind, including observation, evaluation, judgment, and opinion. Claims 16 and 18-22 recite providing attention features to a trained decoder and a trained encoder, wherein a first encoder and a second encoder can be duplicate encoders, different encoders, and/or share at least some model parameters. The additional feature merely uses a computer/device as a tool to transform data, and the preceding series of data-gathering steps is insignificant extra-solution activity; thus, the judicial exception is not integrated into a practical application. The additional feature does not appear to be an improvement to the functioning of a computer or to any other technology or technical field, and does not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually. There is no indication that the combination of elements improves the functioning of a computer or any other technology; their collective functions merely provide conventional computer implementation. Therefore, claims 16 and 18-22 are not patent eligible.
Claim 24 recites a data processing system to implement a machine learning method for prediction for a temporal point process comprising, at least in part: receiving, by at least one trained encoder, an event series comprising a plurality of discrete event times (e.g., observing an event series comprising discrete event times can be performed in the human mind using a computer as a tool); outputting, by the at least one trained encoder, an encoded history of context features, wherein the encoded history is derived from the event times and is restricted to a local history window of size k, where the local history window excludes those of the event times that are more than k events ago (e.g., observing and evaluating context features with respect to the event series using a window size can be performed in the human mind using the computer as a tool); generating a global feature G from the encoded history, wherein generating the global feature G is performed using a subset of the encoded history that excludes a most recent one of the context features (e.g., generating a feature G from the previously observed and evaluated context features, without the most recent context feature, can be performed in the human mind using the computer as a tool); providing, to a trained decoder: a representation of the global feature G; and the most recent one of the context features (e.g., inputting a sample of the feature G and the most recent context feature can be performed in the human mind using a computer as a tool); and outputting, by the trained decoder, a prediction for a time of a next event, wherein the prediction is derived from at least the representation of the global feature G and the most recent one of the context features (e.g., outputting a prediction for a time of a next event from the feature G and the most recent context feature can be performed in the human mind using the computer as a tool). Claim 24, as recited, falls within one of the groupings of abstract ideas [e.g., mental processes] enumerated in the 2019 PEG.

The recited concept can be performed in the human mind, including observation, evaluation, judgment, and opinion, using a computer as a tool. That is, other than reciting a computer-implemented machine learning method comprising at least a trained encoder and a trained decoder to learn an event series comprising a plurality of discrete times, nothing in the claim precludes the steps from practically being performed in the mind. The formulating and learning features in the claim are recited at a high level of generality and add no more to the claimed invention than a computer to perform an abstract idea. The additional feature merely uses a computer/device as a tool to learn data, and the preceding series of data-gathering steps is insignificant extra-solution activity; thus, the judicial exception is not integrated into a practical application. The additional feature does not appear to be an improvement to the functioning of a computer or to any other technology or technical field, and does not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually. There is no indication that the combination of elements improves the functioning of a computer or any other technology; their collective functions merely provide conventional computer implementation. Therefore, claim 24 is not patent eligible.

Claims 25, 26, 28 and 34 recite similar features as claim 24 and also fall within one of the groupings of abstract ideas [e.g., mental processes] enumerated in the 2019 PEG. The recited concept can be performed in the human mind, including observation, evaluation, judgment, and opinion.
Claims 25, 26, 28 and 34 further recite that the feature G has a global latent variable z and a permutation-invariant operation incorporating all members of the subset of the encoded history. There are no additional features turning the judicial exception into a practical application. The claimed features do not appear to be improvements to the functioning of a computer or to any other technology or technical field. Therefore, claims 25, 26, 28 and 34 are not patent eligible.

Claims 27 and 29-33 recite similar features as claim 24 and also fall within one of the groupings of abstract ideas [e.g., mental processes] enumerated in the 2019 PEG. The recited concept can be performed in the human mind, including observation, evaluation, judgment, and opinion. Claims 27 and 29-33 recite providing attention features to a trained decoder and a trained encoder, wherein a first encoder and a second encoder can be duplicate encoders, different encoders, and/or share at least some model parameters. The additional feature merely uses a computer/device as a tool to transform data, and the preceding series of data-gathering steps is insignificant extra-solution activity; thus, the judicial exception is not integrated into a practical application. The additional feature does not appear to be an improvement to the functioning of a computer or to any other technology or technical field, and does not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually. There is no indication that the combination of elements improves the functioning of a computer or any other technology; their collective functions merely provide conventional computer implementation. Therefore, claims 27 and 29-33 are not patent eligible.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 12-15 and 24-26 are rejected under 35 U.S.C. 103 as being unpatentable over Mehrasa et al. (US 2020/0160176 A1), hereinafter Mehrasa, in view of Villegas et al. (US 2020/0082248 A1), hereinafter Villegas.

Referring to claims 1, 13 and 24, Mehrasa discloses a computer-implemented machine learning method for prediction for a temporal point process (see para. [0062], predicting a future event using a temporal point process, which is characterized by a conditional intensity function), the method comprising: receiving, by at least one trained encoder (see para. [0140] and Figure 6, receiving data representing irregularly spaced actions including past actions and a current action for training a variational auto-encoder model), an event series comprising a plurality of discrete event times τ₁, τ₂, …, τᵢ (see para. [0004], para. [0005], para.
[0144] and Figure 6, step 430, modeling event data in continuous time using the temporal point process); outputting, by the at least one trained encoder, an encoded history of context features r₁, r₂, …, rᵢ, wherein the encoded history is derived from the event times (see para. [0140]-[0146], the past actions [e.g., context features] are encoded into a vector representation; for example, prior LSTM 114A performs long short-term memory on the concatenated vector representation x.sub.n.sup.emb for past actions) […]; generating a global feature G from the encoded history, wherein generating the global feature G is performed using a subset r₁, r₂, …, rᵢ₋₁ of the encoded history that excludes a most recent one rᵢ of the context features (see para. [0140]-[0146], the past actions [e.g., context features] are encoded into a vector representation [e.g., global feature G]; note, in para. [0147], that the current action [e.g., the most recent action] is not encoded in the vector representation 450A; the current action is encoded into a vector representation 450B for the current action); providing, to a trained decoder (see para. [0151]-[0152], providing to an action decoder): a representation of the global feature G (see para. [0140]-[0146] and Figure 6, the past actions [e.g., context features] are encoded into a vector representation [e.g., global feature G] in step 460A); and the most recent one rᵢ of the context features r₁, r₂, …, rᵢ (see para. [0140]-[0147] and Figure 6, the current action is obtained in step 460B); and outputting, by the trained decoder, a prediction for a time τᵢ₊₁ of a next event, wherein the prediction is derived from at least the representation of the global feature G and the most recent one rᵢ of the context features r₁, r₂, …, rᵢ (see para. [0148]-para.
[0158] and Figure 6, the action decoder 118A generates a probability distribution over action categories for the current action, the time decoder generates a probability distribution over inter-arrival time for the current action, and the system predicts probabilities of action categories and probabilities of inter-arrival times of a next action using the trained model).

Mehrasa does not explicitly disclose that the encoded history is restricted to a local window of size k, where the local window excludes those of the event times that are more than k events ago. Villegas discloses that the encoded object is restricted to a local window of size k, where the local window excludes those of the event times that are more than k events ago (see para. [0037], the encoder generates an encoded state feature corresponding to the object; the encoded object can be tracked over a predefined number of time steps, and the oldest object is removed from consideration; the state information is stored in a rolling buffer such that, at each time instance, the oldest state information is removed from the buffer and state information corresponding to the current state minus one time step is added to the buffer). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the size of the encoded history of Mehrasa to be restricted to a buffer of size k, where the buffer excludes the oldest information, as taught by Villegas. The skilled artisan would have been motivated to provide a more accurate representation of each of the objects and to help predict the next or future movements of the objects as they move through the environment (see Villegas, para. [0038]). In addition, both references (Mehrasa and Villegas) are directed to analogous art and to the same field of endeavor, such as future object trajectory prediction.
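The rolling-buffer behavior the examiner quotes from Villegas (oldest state evicted as each new state arrives, so only the k most recent states remain) can be sketched with a fixed-length deque. This is an illustrative assumption of this note; Villegas's paragraph [0037] describes the behavior, not this particular implementation.

```python
from collections import deque

# Sketch of the rolling buffer described in Villegas para. [0037]: at
# each time step the newest state is appended and, once the buffer holds
# k entries, the oldest state is evicted automatically.
k = 3
buffer = deque(maxlen=k)  # fixed-size local history window

for state in ["s1", "s2", "s3", "s4", "s5"]:
    buffer.append(state)  # s1 and s2 fall out once s4 and s5 arrive

print(list(buffer))  # only the k most recent states remain
```

The eviction the claim phrases as "excludes those of the event times that are more than k events ago" is exactly what `maxlen` enforces here.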
This close relationship between the references suggests a reasonable expectation of success.

As to claims 2, 14 and 25, Mehrasa discloses wherein the representation of the global feature G is the global feature G itself (see para. [0140]-[0146] and Figure 6, the past actions [e.g., context features] are encoded into a vector representation [e.g., global feature G] in step 460A).

As to claims 3, 15 and 26, Mehrasa discloses wherein the representation of the global feature G is a global latent variable Z for the global feature G (see para. [0011], determining a conditional posterior distribution for the current action, based at least in part on the current vector representation; sampling a latent variable from the conditional prior distribution).

As to claim 12, Mehrasa discloses, prior to receiving the event series at the trained encoder: building the at least one trained encoder; and building the trained decoder (see para. [0056] and para. [0092], a variational auto-encoder (VAE) describes a generative process with a simple prior p.sub.θ(z) [usually chosen to be a multivariate Gaussian] and a complex likelihood p.sub.θ(x|z) [the parameters of which are produced by neural networks]; a latent variable z.sub.n may be sampled from the posterior [or prior during testing] distribution and is fed to the decoder networks, action decoder 118A and time decoder 118B, for generating distributions over action category a.sub.n and inter-arrival time τ.sub.n).

Claims 4-9, 16-21 and 27-32 are rejected under 35 U.S.C. 103 as being unpatentable over Mehrasa (US 2020/0160176 A1) in view of Villegas (US 2020/0082248 A1) and further in view of Condessa (US 2024/0201668 A1).

As to claims 4, 16 and 27, Mehrasa does not explicitly disclose applying cross-attention to the subset of the encoded history to generate an attention feature.
Condessa discloses applying cross-attention to the subset r₁, r₂, …, rᵢ₋₁ of the encoded history to generate an attention feature rᵢ; and providing the attention feature rᵢ to the trained decoder; wherein the prediction is further derived from the attention feature (see para. [0029] and para. [0030], the machine learning system 140 includes at least (i) a transformer encoder 300 and (ii) a transformer decoder 302. The encoder 300 is applied to the history embedding sequence 308 to produce intermediate history features 312. The encoder 300 uses a linear network followed by multiple layers of causal transformer blocks to apply self-attention to the history embedding sequence 308. In the decoder network, the input embedding sequence 310 and the intermediate history features 312 are combined using a cross-attention mechanism. The input embedding sequence 310 goes through a series of causal self-attention layers to produce the queries for the cross-attention layer. At the same time, the intermediate history features 312 are fed to the cross-attention layer as both keys and values. The output of the cross-attention layer goes through further processing, such as being combined with the residual connection and passing through a fully-connected network to convert to the final output [i.e., the predicted measurement data]). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the training model of Mehrasa to apply cross-attention to the subset of the encoded history to generate an attention feature, as taught by Condessa. The skilled artisan would have been motivated to provide a relatively accurate prediction or estimation of the target data (see Condessa, para. [0019]).
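The cross-attention arrangement quoted from Condessa, with queries from the input embedding sequence and the intermediate history features serving as both keys and values, can be sketched minimally as scaled dot-product attention. All dimensions, values, and function names below are illustrative assumptions, not Condessa's actual network.

```python
import math

# Minimal scaled dot-product cross-attention mirroring the quoted
# Condessa passage: queries come from the input embedding sequence;
# keys and values are both the intermediate history features.
# Dimensions and values are made up for illustration.

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(queries, history):
    """Each query attends over the history; keys == values == history."""
    d = len(history[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in history]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, history))
                    for j in range(d)])
    return out

history = [[1.0, 0.0], [0.0, 1.0]]  # intermediate history features
queries = [[10.0, 0.0]]             # from the input embedding sequence
attended = cross_attention(queries, history)
print(attended)  # weighted heavily toward the first history feature
```

A real transformer decoder would add residual connections and a fully-connected network after this layer, as the quoted passage notes; this sketch shows only the query/key/value routing at issue in the rejection.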
In addition, all references (Condessa, Mehrasa and Villegas) are directed to analogous art and to the same field of endeavor, such as future object trajectory prediction. This close relationship between the references suggests a reasonable expectation of success.

As to claims 5, 17 and 28, Mehrasa discloses wherein the representation of the global feature G is a global latent variable Z for the global feature G (see para. [0011], determining a conditional posterior distribution for the current action, based at least in part on the current vector representation; sampling a latent variable from the conditional prior distribution).

As to claims 6, 18 and 29, Mehrasa discloses wherein the at least one encoder is a single encoder (see para. [0140], training a variational auto-encoder).

As to claims 7, 19 and 30, Mehrasa discloses wherein the at least one encoder is a first encoder and a second encoder (see para. [0082] and Figure 2, the previous and current actions and their inter-arrival times (i.e., x.sub.n−1 and x.sub.n) are embedded by separate embedders 113 into respective vector representations).

As to claims 8, 20 and 31, Mehrasa discloses wherein the first encoder and the second encoder are different encoders (see para. [0082] and Figure 2, the previous and current actions and their inter-arrival times (i.e., x.sub.n−1 and x.sub.n) are embedded by separate embedders 113 into respective vector representations).

As to claims 9, 21 and 32, Mehrasa discloses wherein the first encoder and the second encoder share at least some model parameters (see para. [0082] and Figure 2, the previous and current actions and their inter-arrival times (i.e., x.sub.n−1 and x.sub.n) are embedded by separate embedders 113 into respective vector representations).

Claims 10, 22 and 33 are rejected under 35 U.S.C.
103 as being unpatentable over Mehrasa (US 2020/0160176 A1) in view of Villegas (US 2020/0082248 A1) and Condessa (US 2024/0201668 A1), and further in view of Ogawa (US 2016/0344429 A1).

As to claims 10, 22 and 33, Mehrasa does not explicitly disclose wherein the first encoder and the second encoder are duplicate encoders. Ogawa discloses wherein the first encoder and the second encoder are duplicate encoders (see para. [0076], it is common practice to duplicate the encoder 40 itself to eliminate or minimize resulting increases in cost). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the encoders of Mehrasa to include duplicate encoders, as taught by Ogawa. The skilled artisan would have been motivated to provide a relatively accurate prediction or estimation of the target data (see Ogawa, para. [0076]). In addition, all references (Condessa, Ogawa, Mehrasa and Villegas) are directed to analogous art and to the same field of endeavor, such as encoding information data. This close relationship between the references suggests a reasonable expectation of success.

Claims 11, 23 and 34 are rejected under 35 U.S.C. 103 as being unpatentable over Mehrasa (US 2020/0160176 A1) in view of Villegas (US 2020/0082248 A1) and further in view of Page (US 2020/0090357 A1).

As to claims 11, 23 and 34, Mehrasa does not explicitly disclose that the global feature G is a permutation-invariant operation incorporating all members. Page discloses wherein the global feature G is a permutation-invariant operation incorporating all members (see para. [0050], the system uses permutation-invariant operations (a maximum operation) which can effectively capture global features).
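The permutation-invariant maximum operation Page cites can be sketched as an elementwise max over a set of feature vectors; because max is order-independent, shuffling the set members leaves the resulting global feature unchanged. The feature vectors below are made-up values for illustration.

```python
# Sketch of the permutation-invariant operation cited from Page
# para. [0050]: an elementwise maximum over all members of a set of
# feature vectors. Shuffling the members does not change the result,
# which is what makes the operation permutation-invariant.

def global_max_pool(features):
    """Elementwise max across all member vectors (columns of the set)."""
    return [max(col) for col in zip(*features)]

members = [[0.2, 0.9, 0.1], [0.7, 0.3, 0.4], [0.5, 0.8, 0.6]]
shuffled = [members[2], members[0], members[1]]

G1 = global_max_pool(members)
G2 = global_max_pool(shuffled)
print(G1, G1 == G2)  # same global feature regardless of member order
```

Other permutation-invariant reductions (sum, mean) would serve the same structural role; the max is the specific operation Page's quoted paragraph names.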
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the system to include a permutation-invariant operation, as taught by Page. The skilled artisan would have been motivated to capture global features effectively (see Page, para. [0050]). In addition, all references (Page, Mehrasa and Villegas) are analogous art directed to the same field of endeavor, such as encoding information data. This close relationship among the references suggests a reasonable expectation of success.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Elbsat et al. (US 2020/0356087 A1) discloses a model predictive maintenance (MPM) system for building equipment that includes one or more processing circuits comprising one or more processors and memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations. The operations include obtaining one or more performance indicators for the building equipment and determining whether a trigger condition has been satisfied based on the one or more performance indicators. The operations include triggering a model predictive maintenance process to generate a maintenance schedule for the building equipment in response to determining that the trigger condition has been satisfied. The operations include initiating a maintenance activity for the building equipment in accordance with the maintenance schedule. Xiao et al.
(US 2025/0085115 A1) discloses a computer-implemented method of trajectory prediction that includes: obtaining a first cross-attention between a vectorized representation of a road map near a vehicle and information obtained from a rasterized representation of an environment near the vehicle by processing through a first cross-attention stage; obtaining a second cross-attention between a vectorized representation of a vehicle history and information obtained from the rasterized representation by processing through a second cross-attention stage; operating a scene encoder on the first cross-attention and the second cross-attention; operating a trajectory decoder on an output of the scene encoder; and obtaining one or more trajectory predictions by performing one or more queries on the trajectory decoder.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YUK TING CHOI, whose telephone number is (571) 270-1637. The examiner can normally be reached Monday-Friday, 9am-6pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, AMY NG, can be reached at 571-270-1698. The fax number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YUK TING CHOI/
Primary Examiner, Art Unit 2164

Application/Control Number: 18/460,478, Art Unit: 2164
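The §103 rejections above turn on two architectural ideas: encoders that share model parameters across members (claims 9, 21 and 32) and a permutation-invariant reduction, such as a maximum, that produces the global feature G (claims 11, 23 and 34). A minimal plain-Python sketch of both ideas; the names (`encode`, `global_feature`) and random vectors are illustrative stand-ins, not the application's actual model:

```python
import random

random.seed(0)

# Hypothetical setup: N set members, each a d-dimensional embedding,
# mapped to an h-dimensional encoding.
N, d, h = 5, 4, 8
members = [[random.gauss(0, 1) for _ in range(d)] for _ in range(N)]

# One weight matrix used for every member: the "first" and "second"
# encoders share all parameters (cf. claims 9, 21 and 32).
W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(h)]

def encode(x):
    """Shared linear map followed by ReLU, applied to a single member."""
    return [max(sum(w_i * x_i for w_i, x_i in zip(row, x)), 0.0) for row in W]

def global_feature(xs):
    """Permutation-invariant global feature G: element-wise maximum over
    all encoded members (cf. Page, para. [0050])."""
    encoded = [encode(x) for x in xs]
    return [max(col) for col in zip(*encoded)]

G = global_feature(members)
G_shuffled = global_feature(list(reversed(members)))
assert G == G_shuffled  # G does not depend on member order
```

Because the element-wise maximum commutes with any reordering of the members, the assertion holds for every permutation, which is what makes the operation permutation-invariant in Page's sense.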

Prosecution Timeline

Sep 01, 2023
Application Filed
Mar 17, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591610
SYSTEMS AND METHODS FOR REMOVING NON-CONFORMING WEB TEXT
2y 5m to grant Granted Mar 31, 2026
Patent 12579156
SYSTEMS AND METHODS FOR VISUALIZING ONE OR MORE DATASETS
2y 5m to grant Granted Mar 17, 2026
Patent 12562753
SYSTEM AND METHOD FOR MULTI-TYPE DATA COMPRESSION OR DECOMPRESSION WITH A VIRTUAL MANAGEMENT LAYER
2y 5m to grant Granted Feb 24, 2026
Patent 12536282
METHODS AND APPARATUS FOR MACHINE LEARNING BASED MALWARE DETECTION AND VISUALIZATION WITH RAW BYTES
2y 5m to grant Granted Jan 27, 2026
Patent 12511258
DYNAMIC STORAGE OF SEQUENCING DATA FILES
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
72%
Grant Probability
99%
With Interview (+37.4%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 652 resolved cases by this examiner. Grant probability derived from career allow rate.
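The 99% "With Interview" figure is consistent with applying the +37.4% interview lift multiplicatively to the 72% career allow rate. A quick check of that arithmetic; how the tool actually composes these numbers is an inference, not documented behavior:

```python
base_rate = 0.72        # examiner career allow rate (72%)
interview_lift = 0.374  # relative lift in resolved cases with an interview

# Multiplicative composition: 0.72 * 1.374 = 0.98928
with_interview = base_rate * (1 + interview_lift)
print(round(with_interview * 100))  # 99, matching the projection panel
```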
