Prosecution Insights
Last updated: April 19, 2026
Application No. 18/918,077

REAL-TIME TIME SERIES FORECASTING USING A COMPOUND LARGE CODEWORD MODEL

Non-Final OA: §101, §102, §103, §112
Filed: Oct 17, 2024
Examiner: GODO, MORIAM MOSUNMOLA
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: AtomBeam Technologies Inc.
OA Round: 3 (Non-Final)
Grant Probability: 44% (Moderate)
OA Rounds: 3-4
To Grant: 4y 8m
With Interview: 78%

Examiner Intelligence

Career Allow Rate: 44% (grants 44% of resolved cases; 30 granted / 68 resolved; -10.9% vs TC avg)
Interview Lift: +33.4% (strong lift among resolved cases with an interview vs without)
Typical Timeline: 4y 8m average prosecution; 47 applications currently pending
Career History: 115 total applications across all art units

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§103: 56.7% (+16.7% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 68 resolved cases.
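A quick consistency check on the figures above: subtracting each "vs TC avg" delta from the examiner's rate recovers the implied Tech Center baseline, and it comes out to the same 40.0% for every statute, which suggests a single TC-wide estimate rather than per-statute averages. A short sketch (values in percent, copied from the list above):

```python
# Recover the implied Tech Center baseline from the examiner's per-statute
# rates and the "vs TC avg" deltas shown above (all values in percent).
examiner = {"101": 16.1, "102": 12.7, "103": 56.7, "112": 12.9}
delta    = {"101": -23.9, "102": -27.3, "103": +16.7, "112": -27.1}

tc_avg = {k: round(examiner[k] - delta[k], 1) for k in examiner}
print(tc_avg)  # {'101': 40.0, '102': 40.0, '103': 40.0, '112': 40.0}
```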

Office Action

Rejections: §101, §102, §103, §112
DETAILED ACTION

1. This Office action is in response to the amendment to Application No. 18/918,077 filed on 12/18/2025. Claims 6 and 12 have been cancelled. Claims 1-5 and 7-11 are presented for examination and are currently pending.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

3. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/18/2025 has been entered.

Response to Arguments

4. On pages 4-5 of the remarks, the Applicant argued that "The amended claims recite that 'the codewords and their corresponding codebooks are adaptively updated to reflect incoming data inputs.' As described at [0379]-[0406] of the specification, this adaptive updating comprises specific technical operations: A data analyzer identifies significant changes or emerging patterns in incoming data ([0380]), A frequency analyzer monitors usage patterns of existing codewords to identify which remain relevant and which have become obsolete ([0381]), A codeword updater generates new codewords for emerging patterns not adequately captured by existing entries, modifies existing codewords to reflect evolving dynamics, and prunes outdated or rarely used codewords ([0382]-[0383])". On page 5 of the remarks, the Applicant argued that "These are not abstract concepts; they are concrete computational operations that maintain alignment between the codebook and actual data characteristics. The specification explicitly states that this 'enables the compound LCM to maintain high predictive accuracy even as market conditions change' ([0384])".

The above arguments are not persuasive because the specific technical operations, namely the data analyzer that identifies changes in data, the frequency analyzer that monitors usage patterns, the codeword updater that generates new codewords for emerging patterns, and the pruning of outdated codewords, are not claimed. As a result, the alignment between the codebook and actual data argued above cannot be achieved.

On page 5 of the remarks, the Applicant argued that "The amended claims further recite generating 'predicted future values based on patterns identified in historical data.' This ties the adaptive representation system to a concrete technical output: the system identifies patterns in historical data within the fused codeword representations and uses those patterns to predict values that have not yet occurred". The above argument is not persuasive because the limitation "predicted future values based on patterns identified in historical data" is broader than the Applicant's argument of a system identifying patterns in the historical data and then using those identified patterns to predict. The claim language as it currently stands does not require an active pattern-identification step (i.e., the future values based on patterns identified could be based on a mere list of patterns rather than patterns the system itself identified).
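For orientation, the adaptive updating mechanism at the center of this dispute (data analyzer, frequency analyzer, codeword updater; spec. [0380]-[0383]) can be pictured with a minimal sketch. Everything below (class and method names, the distance-based novelty test, the thresholds) is an illustrative assumption, not the applicant's disclosed implementation:

```python
# Minimal sketch, assuming hypothetical names: a data analyzer flags novel
# inputs, a frequency analyzer tracks codeword usage, and a codeword updater
# generates, modifies, or prunes codebook entries.
from collections import Counter

import numpy as np


class AdaptiveCodebook:
    def __init__(self, codewords, novelty_threshold=2.0):
        # codeword id -> centroid vector
        self.codewords = {k: np.asarray(v, dtype=float) for k, v in codewords.items()}
        self.usage = Counter()                     # frequency analyzer state
        self.novelty_threshold = novelty_threshold

    def nearest(self, x):
        """Return (id, distance) of the codeword closest to input x."""
        cid = min(self.codewords,
                  key=lambda k: np.linalg.norm(self.codewords[k] - x))
        return cid, float(np.linalg.norm(self.codewords[cid] - x))

    def update(self, x, learning_rate=0.1):
        """Codeword updater: generate a new codeword when the data analyzer
        flags x as an emerging pattern; otherwise nudge the matched codeword."""
        cid, dist = self.nearest(x)
        if dist > self.novelty_threshold:          # data analyzer: novel input
            cid = max(self.codewords) + 1
            self.codewords[cid] = np.asarray(x, dtype=float).copy()  # generate
        else:
            self.codewords[cid] += learning_rate * (x - self.codewords[cid])
        self.usage[cid] += 1                       # frequency analyzer: track

    def prune(self, min_usage=1):
        """Codeword updater: drop outdated or rarely used codewords."""
        for cid in [k for k in self.codewords if self.usage[k] < min_usage]:
            if len(self.codewords) > 1:            # never empty the codebook
                del self.codewords[cid]


rng = np.random.default_rng(0)
cb = AdaptiveCodebook({0: np.zeros(4), 1: np.ones(4)})
for _ in range(200):                               # streaming inputs
    cb.update(rng.normal(size=4))
cb.prune(min_usage=2)
print(len(cb.codewords), "codewords after adaptation")
```

A production system would use statistical drift tests and a learned quantizer rather than a fixed threshold; the sketch only makes concrete what each subsystem computes.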
Furthermore, on page 5, the Applicant argued that "The adaptive codebook updating recited in the amended claims requires continuous computational analysis across three subsystems (data analyzer, frequency analyzer, codeword updater) operating on streaming data. This is not an abstract concept of 'updating' but a specific technical mechanism for maintaining representation fidelity". The above argument is not persuasive because the three subsystems, the data analyzer, the frequency analyzer, and the codeword updater, are not claimed. As a result, the mechanism for maintaining representation fidelity argued above cannot be achieved.

On page 5 of the remarks, the Applicant argued that "The August 4, 2025 Memorandum to Technology Centers 2100, 2600, and 3600 instructs that examiners must evaluate the claim as a whole and consider whether it reflects a technological solution to a technological problem. The memorandum cautions against expanding the mental process grouping to cover claim limitations that cannot practically be performed in the human mind". On page 5 of the remarks, the Applicant argued that "The amended claims recite a system that simultaneously monitors incoming data for emerging patterns, tracks usage frequency of existing codewords, and dynamically generates, modifies, or prunes codewords in response, all while maintaining operation for real-time forecasting. This coordinated, continuous operation across multiple analytical subsystems reflects a technological solution to the technical problem of representation drift". The above arguments are not persuasive because the simultaneous monitoring of incoming data for emerging patterns, the tracking of usage frequency of existing codewords, and the dynamic generation and pruning of codewords are not claimed. As a result, there is no technological solution to the technical problem of representation drift. Furthermore, the detailed §101 analysis in this Office action complies with the August 4, 2025 Memorandum.

On page 6 of the remarks, the Applicant argued that "Even assuming the claims recite an abstract idea, the amended limitations integrate any such idea into a practical application. The coordinated operation of these subsystems cannot practically be performed in the human mind. Each step requires continuous numerical monitoring of high-dimensional codeword distributions and real-time statistical adaptation, which falls squarely outside the mental process grouping under the 2025 guidance". The above argument is not persuasive because the subsystems argued above are not reflected in the claims. The step of continuous numerical monitoring is not claimed in the invention. As a result, the limitations of the invention do not integrate the abstract idea into a practical application.

Further on page 6, the Applicant argued that "Here, adaptive codebook updating improves how the forecasting system itself operates. Rather than degrading as data patterns shift, the system maintains its internal representation accuracy through continuous codebook refinement. This is an improvement to system functioning analogous to Enfish's self-referential table, not merely using a computer to perform an abstract task. The adaptive updating does not merely improve data quality; it modifies the internal structure on which the machine learning core operates, thereby improving the functioning of the system itself".

The above argument is not persuasive because the updating of the codebook is mere instruction to apply an exception, which does not integrate the abstract idea into a practical application and does not amount to significantly more than an exception. According to MPEP 2106.04(d)(1), "Second, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement. That is, the claim includes the components or steps of the invention that provide the improvement described in the specification". Here, there are no details in the claim as to how updating the codebook leads to the argued improvements.

On page 6 of the remarks, the Applicant argued that "The technical problem here, representation drift in codebook-based systems processing evolving data streams, is specific to this technological context. The solution, coordinated operation of data analysis, frequency analysis, and codeword updating subsystems, is a specific technical response to that problem". The above argument is not persuasive because the coordinated operation of the data analysis, frequency analysis, and codeword updating subsystems is not claimed. As a result, the solution to the problem stated above cannot be realized. Again, under MPEP 2106.04(d)(1), the claim itself must reflect the disclosed improvement, and here there are no details in the claim as to how the coordinated operation of data analysis, frequency analysis, and codebook updating leads to the argued improvements.

On page 6 of the remarks, the Applicant argued that "The Examiner has not cited factual evidence establishing that the specific combination of adaptive codebook updating (comprising data analysis, frequency analysis, and coordinated codeword generation/modification/pruning) with cross-modal fusion and pattern-based future value prediction is well-understood, routine, or conventional". On page 6 of the remarks, the Applicant argued that "In Berkheimer, the Federal Circuit held that whether claim elements are 'well-understood, routine, or conventional' is a question of fact requiring evidentiary support. The Examiner has provided no such evidence for the adaptive codebook mechanism recited in the amended claims". On page 7 of the remarks, the Applicant argued that "The amended claims recite a specific technical solution (adaptive codebook updating through coordinated data analysis, frequency analysis, and codeword management) to a specific technical problem (representation drift in codebook-based systems). The claims further tie this mechanism to a concrete output: predicted future values based on patterns in historical data. Applicant respectfully requests withdrawal of the §101 rejection".

The above arguments are not persuasive because the adaptive codebook updating through coordinated data analysis, frequency analysis, and codeword management, the adaptive codebook mechanisms, and the coordinated codeword generation/modification/pruning with cross-modal fusion are not claimed. As a result, the amended claims cannot arrive at the solution argued by the Applicant. Furthermore, under MPEP 2106.04(d)(1), there are no details in the claim as to how these operations lead to the argued improvements. In addition, regarding Berkheimer, Berkheimer evidence is required only for limitations identified as insignificant extra-solution activity that is well-understood, routine, and conventional. The only element so identified is "receive a variety of data inputs", and the Examiner did provide Berkheimer evidence for that element by identifying the court case cited in MPEP 2106.05(d)(II), example i. Furthermore, the §101 rejection is maintained and adjusted to reflect the newly added limitations.

It is noted that the Applicant's arguments regarding the prior art rejections have been considered but are moot because the amendment necessitated the new grounds of rejection presented in this Office action.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

5. Claims 1-5 and 7-11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Independent claim 1 is directed to a system, and falls into one of the four statutory categories.

Step 2A, Prong 1: Claim 1 recites the following abstract ideas:

- allocate codewords to each data input (mental process directed to allocating codewords to data; this process can be performed with pen and paper), wherein codewords are mapped to a corresponding codebook (mental process directed to mapping codewords to a codebook; this process can be performed with pen and paper);
- fuse codewords of dissimilar data types together into a single codeword representation (mental process directed to joining data types together into a representation; this process can be performed with pen and paper).

Step 2A, Prong 2: Claim 1 recites the following additional elements:

- a deep learning system for real-time time series forecasting using a compound large codeword model (this limitation is directed to using a model with time series data as input for a deep learning system; it is recited at a high level as generic computer software and does not integrate the abstract idea into a practical application; see MPEP 2106.05(f)), comprising one or more computers with executable instructions that, when executed, cause the deep learning system (mere instruction to apply a judicial exception; this does not integrate the abstract idea into a practical application; see MPEP 2106.05(f)) to:
- receive a variety of data inputs (insignificant extra-solution activity of data transmission; this does not integrate the abstract idea into a practical application; see MPEP 2106.05(g)), which includes a plurality of data types (a particular type or source of data, which is field of use and does not integrate the abstract idea into a practical application; see MPEP 2106.05(h));
- and wherein the codewords and their corresponding codebooks are adaptively updated to reflect incoming data inputs (this limitation is directed to updating codewords with data inputs; it is mere instruction to apply an exception and does not integrate the abstract idea into a practical application; see MPEP 2106.05(f));
- process the single codeword representation through a machine learning core (this limitation is directed to using a machine learning core for processing input, i.e., the single codeword representation; it is recited at a high level as generic processing by computer software and is mere instruction to apply a judicial exception; it does not integrate the abstract idea into a practical application; see MPEP 2106.05(f));
- generate an output comprising predicted future values based on patterns identified in historical data from the plurality of single codeword representations (this limitation is directed to generating output from a machine learning core; it is recited at a high level of generality and is mere instruction to apply a judicial exception; it does not integrate the abstract idea into a practical application; see MPEP 2106.05(f)).

Step 2B: Claim 1 recites the same additional elements identified under Step 2A, Prong 2. For the reasons given above, each is mere instruction to apply the judicial exception, a field of use, or insignificant extra-solution activity, and none amounts to significantly more than the judicial exception (see MPEP 2106.05(f), (h)). The step of receiving a variety of data inputs is insignificant extra-solution activity of data transmission that is well-understood, routine, and conventional (see MPEP 2106.05(d)(II), example i).

6. Dependent claim 2 is directed to a system, and falls into one of the four statutory categories. Claim 2 does not recite any abstract ideas. Claim 2 recites the following additional element: wherein the machine learning core uses a transformer based architecture. This limitation is directed to linking the use of a judicial exception to a particular technological environment or field of use; it does not integrate the abstract idea into a practical application and does not amount to significantly more than the judicial exception. See MPEP 2106.05(h).

7. Dependent claim 3 is directed to a system, and falls into one of the four statutory categories. Claim 3 does not recite any abstract ideas. Claim 3 recites the following additional element: wherein the machine learning core uses a latent transformer based architecture. This limitation is directed to linking the use of a judicial exception to a particular technological environment or field of use; it does not integrate the abstract idea into a practical application and does not amount to significantly more than the judicial exception. See MPEP 2106.05(h).

8. Dependent claim 4 is directed to a system, and falls into one of the four statutory categories. Claim 4 does not recite any abstract ideas. Claim 4 recites the following additional element: wherein the variety of data inputs include real-time time series data. This limitation is directed to a particular type or source of data, which is field of use; it does not integrate the abstract idea into a practical application and does not amount to significantly more than the judicial exception. See MPEP 2106.05(h).

9. Dependent claim 5 is directed to a system, and falls into one of the four statutory categories. Claim 5 does not recite any abstract ideas. Claim 5 recites the following additional element: wherein the machine learning core processes fused codeword representations of the real-time time series data into short-term forecasts for the time series data. This limitation is recited at a high level of generality as a conversion of data and is mere instruction to apply an exception; it does not integrate the abstract idea into a practical application and does not amount to significantly more than the judicial exception. See MPEP 2106.05(f).
10. Independent claim 7 is directed to a method, and falls into one of the four statutory categories. Claim 7 is substantially similar to claim 1 and is rejected in the same manner, with the same reasoning applying. Claim 7 further recites "processing the single codeword representation through a machine learning core using a latent transformer architecture that operates on latent space vectors without embedding layers or positional encoding layers". This limitation is directed to linking the use of a judicial exception to a particular technological environment or field of use; it does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).

11. Dependent claim 8 is directed to a method, and falls into one of the four statutory categories. Claim 8 is substantially similar to claim 2 and is rejected in the same manner, with the same reasoning applying.

12. Dependent claim 9 is directed to a method, and falls into one of the four statutory categories. Claim 9 is substantially similar to claim 3 and is rejected in the same manner, with the same reasoning applying.

13. Dependent claim 10 is directed to a method, and falls into one of the four statutory categories. Claim 10 is substantially similar to claim 4 and is rejected in the same manner, with the same reasoning applying.

14. Dependent claim 11 is directed to a method, and falls into one of the four statutory categories. Claim 11 is substantially similar to claim 5 and is rejected in the same manner, with the same reasoning applying.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

15. Claims 1-5 and 7-11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1 and 7 recite "... based on patterns identified in historical data", which lacks antecedent basis: there is no initial recitation of identifying patterns, and the historical data is not previously recited. Claims 1 and 7 also recite "receiving a variety of data inputs" and "incoming data inputs". It is unclear whether the limitation "receiving a variety of data inputs" is the same as the limitation "incoming data inputs". For the purpose of examination, the Examiner has interpreted the variety of data inputs and the incoming data inputs to be different. Claims 2-5 and 8-11, which are not specifically mentioned, are rejected due to their dependency.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

16. Claims 1-4 and 7-10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Harikumar et al. (US 2023/0419551).

Regarding claim 1, Harikumar teaches a deep learning system (In some implementations, a CNN is used to predict image tokens [0033]) for real-time time series forecasting (predict the next image tokens autoregressively (one token at a time in serial fashion) [0041]) using a compound large codeword model (As used herein, in the context of this disclosure, a "codebook" is a visual dictionary of a set of visual words (or codewords) that represent one or more feature vectors of one or more images [0035]), comprising one or more computers with executable instructions that, when executed (In some aspects, a computer-readable apparatus including a storage medium stores computer-readable and computer-executable instructions that are configured to, when executed by at least one processor apparatus [0102]), cause the deep learning system to:

receive a variety of data inputs which includes a plurality of data types (The encoder 404 is configured to receive an input image 402 of various types (sketch image, RGB image, vector image, etc.) [0051]; input image 1102 and the sketch image 1104 [0098]; input image (e.g., 112, 202, 402) [0070]. The Examiner notes that the input image 112 in Fig. 1 is different from the input image 202 in Fig. 2 [0070]);

allocate codewords to each data input, wherein codewords are mapped to a corresponding codebook (In some embodiments, a set of image tokens each having a unique integer value is obtained based on the codebook 502a for the image encoder 510 and the codebook 502b for the sketch encoder 520 [0070], Fig. 5. The Examiner notes that a codebook contains codewords), and wherein the codewords and their corresponding codebooks are adaptively updated to reflect incoming data inputs (codebook 502a is adaptively updated by incoming data inputs from 504a-n and codebook 502b is adaptively updated by incoming data inputs from 504b, Fig. 5);

fuse codewords of dissimilar data types together into a single codeword representation (504a-n . . . 504a-x corresponding to one or more features of the input image (e.g., 112, 202, 402) [0070]; the vector 504a-1 corresponds to image token 506-1, and the vector 504a-2 corresponds to image token 506-2 ... Grouped into the matrix of values 506 are at least a portion of these image tokens 506-n [0071]. The Examiner notes that grouping into the matrix of values 506 in Fig. 5 indicates fusing codewords of dissimilar data types together into a single codeword representation);

process the single codeword representation through a machine learning core (In some embodiments, image token predictions 602 are fed to the transformer model 604 [0081]); and

generate an output comprising predicted future values based on patterns identified in historical data from the plurality of single codeword representations (Each new prediction gives a new token. The next prediction takes into account all previously predicted tokens, Fig. 6).
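As a plain-language aid before the remaining claims, the operations recited in claim 1 (allocate codewords per data type, map them to per-type codebooks, fuse them into a single representation, and predict future values from historical representations) can be pictured as a toy pipeline. The names, the concatenation-based fusion, and the trivial moving-average "predictor" below are assumptions for illustration, not the applicant's or Harikumar's implementation:

```python
# Toy end-to-end flow matching the operations recited in claim 1.
import numpy as np


def allocate(x: np.ndarray, codebook: np.ndarray) -> int:
    """Map an input vector to the index of its nearest codeword."""
    return int(np.argmin(np.linalg.norm(codebook - x, axis=1)))


def fuse(indices, codebooks) -> np.ndarray:
    """Fuse codewords of dissimilar data types into a single representation
    by concatenating the selected codeword vectors."""
    return np.concatenate([cb[i] for i, cb in zip(indices, codebooks)])


# Two data types (say, prices and volumes), each with its own codebook.
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(8, 4)), rng.normal(size=(8, 4))]

history = []
for _ in range(32):                       # stream of multimodal inputs
    inputs = [rng.normal(size=4), rng.normal(size=4)]
    idxs = [allocate(x, cb) for x, cb in zip(inputs, codebooks)]
    history.append(fuse(idxs, codebooks))

# Stand-in for the machine learning core: predict the next fused vector
# as the mean of recent history (a real system would use a transformer).
prediction = np.mean(history[-8:], axis=0)
print(prediction.shape)                   # (8,) fused representation
```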
Regarding claim 2, Harikumar teaches the system of claim 1. Harikumar further teaches wherein the machine learning core uses a transformer based architecture (Transformer models 106 and 604 are examples of the transformer model [0111]).

Regarding claim 3, Harikumar teaches the system of claim 1. Harikumar further teaches wherein the machine learning core uses a latent transformer based architecture (an unsupervised learning technique that uses a neural network to find non-linear latent representations for a given data distribution [0066]).

Regarding claim 4, Harikumar teaches the system of claim 1. Harikumar further teaches wherein the variety of data inputs includes real-time time series data (predict the next image tokens autoregressively (one token at a time in serial fashion) [0041]; The encoder 404 is configured to receive an input image 402 of various types (sketch image, RGB image, vector image, etc.) [0051]; input image 1102 and the sketch image 1104 [0098]; input image (e.g., 112, 202, 402) [0070]. The Examiner notes that the input image 112 in Fig. 1 is different from the input image 202 in Fig. 2 [0070]).

Regarding claim 7, claim 7 is similar to claim 1 and is rejected in the same manner, with the same reasoning applying. Further, Harikumar teaches processing the single codeword representation through a machine learning core using a latent transformer architecture (Transformer models 106 and 604 are examples of the transformer model [0111]) that operates on latent space vectors (an unsupervised learning technique that uses a neural network to find non-linear latent representations for a given data distribution [0066]) without embedding layers or positional encoding layers (the transformer model does not have an embedding layer or positional encoding layer).

Regarding claims 8, 9, and 10, they are similar to claims 2, 3, and 4, respectively, and are rejected in the same manner, with the same reasoning applying.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

17. Claims 5 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Harikumar et al. (US 2023/0419551) in view of Guo et al. ("MSMC-TTS: Multi-Stage Multi-Codebook VQ-VAE Based Neural TTS," IEEE/ACM Transactions on Audio, Speech, and Language Processing 31 (2023): 1811-1824).

Regarding claim 5, Harikumar teaches the system of claim 4 but does not explicitly teach the limitations of claim 5. Guo teaches wherein the machine learning core processes fused codeword representations of the real-time time series data into short-term forecasts for the time series data (Multi-stage modeling and prediction force the model to pay sufficient attention to short- and long-time contextual information at different time resolutions, pg. 1818, right col., last para.).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Harikumar to incorporate the teachings of Guo for the benefit of lowering modeling complexity and data size requirements, preserving excellent performance even with fewer model parameters or training data (Guo, abstract).

Regarding claim 11, claim 11 is similar to claim 5 and is rejected in the same manner, with the same reasoning applying.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MORIAM MOSUNMOLA GODO, whose telephone number is (571) 272-8670. The examiner can normally be reached Monday-Friday, 8:00am-5:00pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle T. Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.G./
Examiner, Art Unit 2148

/MICHELLE T BECHTOLD/
Supervisory Patent Examiner, Art Unit 2148
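Claim 7's distinguishing limitation, a latent transformer that "operates on latent space vectors without embedding layers or positional encoding layers", is easier to see in code. A hedged sketch of what such a core could look like (an assumed PyTorch rendering, not the applicant's or Harikumar's actual architecture):

```python
# Hypothetical latent transformer core: no nn.Embedding and no positional
# encoding; it consumes continuous latent vectors (e.g., fused codeword
# representations) directly and forecasts the next latent vector.
import torch
import torch.nn as nn


class LatentTransformerCore(nn.Module):
    def __init__(self, latent_dim: int = 64, heads: int = 4, layers: int = 2):
        super().__init__()
        block = nn.TransformerEncoderLayer(
            d_model=latent_dim, nhead=heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)
        self.head = nn.Linear(latent_dim, latent_dim)  # next-step forecast

    def forward(self, latents: torch.Tensor) -> torch.Tensor:
        # latents: (batch, sequence, latent_dim) of codeword representations
        h = self.encoder(latents)
        return self.head(h[:, -1])         # predicted next latent vector


model = LatentTransformerCore()
out = model(torch.randn(2, 16, 64))        # 2 sequences of 16 latent vectors
print(out.shape)                           # torch.Size([2, 64])
```

The point of contrast: a conventional token transformer would first pass discrete tokens through an embedding layer plus a positional encoding; here the model consumes continuous latent vectors directly.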

Prosecution Timeline

Oct 17, 2024: Application Filed
Feb 21, 2025: Non-Final Rejection (§101, §102, §103)
Jun 17, 2025: Response Filed
Sep 15, 2025: Final Rejection (§101, §102, §103)
Dec 18, 2025: Request for Continued Examination
Jan 07, 2026: Response after Non-Final Action
Mar 08, 2026: Non-Final Rejection (§101, §102, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602586: SUPERVISORY NEURON FOR CONTINUOUSLY ADAPTIVE NEURAL NETWORK (2y 5m to grant; granted Apr 14, 2026)
Patent 12530583: VOLUME PRESERVING ARTIFICIAL NEURAL NETWORK AND SYSTEM AND METHOD FOR BUILDING A VOLUME PRESERVING TRAINABLE ARTIFICIAL NEURAL NETWORK (2y 5m to grant; granted Jan 20, 2026)
Patent 12511528: NEURAL NETWORK METHOD AND APPARATUS (2y 5m to grant; granted Dec 30, 2025)
Patent 12367381: CHAINED NEURAL ENGINE WRITE-BACK ARCHITECTURE (2y 5m to grant; granted Jul 22, 2025)
Patent 12314847: TRAINING OF MACHINE READING AND COMPREHENSION SYSTEMS (2y 5m to grant; granted May 27, 2025)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 44%
With Interview: 78% (+33.4%)
Median Time to Grant: 4y 8m
PTA Risk: High

Based on 68 resolved cases by this examiner. Grant probability derived from career allow rate.
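The headline projections appear to follow directly from the career counts shown under Examiner Intelligence. The sketch below reproduces them; the additive interview adjustment and the rounding are assumptions:

```python
# Reproduce the headline projections from the examiner's career counts.
granted, resolved = 30, 68
allow_rate = granted / resolved                 # 0.441 -> "44% Grant Probability"
interview_lift = 0.334                          # "+33.4% Interview Lift"
with_interview = allow_rate + interview_lift    # 0.775 -> "78% With Interview"
print(f"{allow_rate:.1%}  {with_interview:.1%}")  # 44.1%  77.5%
```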
