Prosecution Insights
Last updated: April 19, 2026
Application No. 18/056,391

OBJECT-BASED DATA SCIENCE PLATFORM

Non-Final OA (§103)
Filed: Nov 17, 2022
Examiner: HALE, BROOKS T
Art Unit: 2166
Tech Center: 2100 — Computer Architecture & Software
Assignee: Liveline Technologies Inc.
OA Round: 1 (Non-Final)

Grant Probability: 49% (Moderate)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 3m
With Interview: 80%

Examiner Intelligence

Career Allow Rate: 49% (36 granted / 74 resolved; -6.4% vs TC avg)
Interview Lift: +31.4% (resolved cases with vs. without interview)
Avg Prosecution: 3y 3m
Currently Pending: 37
Total Applications: 111 (across all art units)

Statute-Specific Performance

§101: 22.3% (-17.7% vs TC avg)
§103: 61.3% (+21.3% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 3.0% (-37.0% vs TC avg)

Deltas are measured against the Tech Center average estimate. Based on career data from 74 resolved cases.
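The per-statute deltas above can be sanity-checked against a common baseline. A small sketch (the figures come from the table; treating each delta as examiner rate minus Tech Center average, in percentage points, is an assumed interpretation):

```python
# Examiner allow rate (%) and delta vs. Tech Center average (percentage
# points), per statute, as listed in the table above.
stats = {
    "101": (22.3, -17.7),
    "103": (61.3, +21.3),
    "102": (10.1, -29.9),
    "112": (3.0, -37.0),
}

for statute, (rate, delta) in stats.items():
    # Implied baseline: TC average = examiner rate - delta.
    tc_avg = round(rate - delta, 1)
    print(f"§{statute}: implied TC average = {tc_avg}%")
```

Under that reading, every statute backs out the same 40.0% baseline, consistent with a single estimated Tech Center average rather than per-statute baselines.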

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-20 are pending.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-11, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Rossi et al. (US 20170154282 A1), hereafter Rossi, in view of Hwang et al. (US 20210263859 A1), hereafter Hwang.

Regarding claim 1, Rossi teaches a computer system comprising: a memory; and a processor programmed to construct and utilize a plurality of data package objects that each contains signal data describing time-series values for parameters (Para 0007, the system and methods described below allow time series forecasting to be performed on multivariate time series represented as vertices in graphs with arbitrary structures as well as the prediction of a future class label for data points represented by vertices in a graph), organizes the signal data into batches having a size less than the memory (Para 0007, The system and methods are well-suited for processing data in a streaming or online setting and naturally handle training data with skewed or unbalanced class labels), identifies the batches according to indices, and responsive to requests, provides output identifying the indices in randomly shuffled or arbitrary order (Para 0068, The use of the decision tree learner would provide decision tree models, such as random forests or bagged decision trees, which may be used for the prediction by averaging over the results given by the models), loads into the memory one of the batches such that features of the signal data of the one of the batches can be used to train a machine learning model to predict time-series parameter outputs from time-series parameter inputs (Para 0010, predicting the label associated with the incoming data item based on the scores associated with the label at a future point of time).

Rossi does not appear to explicitly teach removes from the memory the one of the batches to prevent the one of the batches and other of the batches from completely occupying all of the memory at a same time. In analogous art, Hwang teaches removes from the memory the one of the batches to prevent the one of the batches and other of the batches from completely occupying all of the memory at a same time (Para 0058, the agent selects an existing memory entry to delete when the memory becomes full).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Rossi to include the teaching of Hwang. One of ordinary skill in the art would be motivated to implement this modification in order to process large amounts of data, as taught by Hwang (Para 0006, Example embodiments provide a memory-based reinforcement learning method and system that may read/write a large amount of data without being limited by hardware by providing a memory-based reinforcement learning model capable of storing optional information in streaming data).
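The claim 1 limitations describe a fairly standard out-of-core training loop: batches sized below available memory, batch indices served in shuffled order, and each batch evicted before the next is loaded. A minimal Python sketch of that scheme (all names are hypothetical illustrations, not code from the application or the cited references):

```python
import random

class DataPackage:
    """Holds batched signal data; at most one batch is resident at a time."""

    def __init__(self, batches):
        # batches: lists of (timestamp, value) samples, assumed to be
        # pre-partitioned so each batch fits under the memory budget.
        self._batches = batches
        self._loaded = None  # the single batch currently "in memory"

    def shuffled_indices(self):
        # "provides output identifying the indices in randomly shuffled order"
        order = list(range(len(self._batches)))
        random.shuffle(order)
        return order

    def load(self, index):
        # Replacing the reference evicts the prior batch, so two batches
        # never occupy the in-memory slot at the same time.
        self._loaded = self._batches[index]
        return self._loaded

# One training epoch over shuffled batches (the model update is elided).
package = DataPackage([[(0, 1.0), (1, 1.5)], [(2, 2.0), (3, 2.5)]])
for i in package.shuffled_indices():
    batch = package.load(i)
    # ... compute features from `batch` and run a training step here ...
```

The load-then-evict discipline is the piece the examiner maps to Hwang's entry deletion on a full memory; in this sketch it falls out of overwriting the single `_loaded` slot.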
Regarding claim 4, Rossi in view of Hwang teaches the computer system of claim 1, wherein the processor is further programmed to construct and utilize an experiment package object that, based on the output identifying the indices from the data package objects, generates the requests such that the batches that are sequentially loaded into and removed from the memory are from different ones of the data package objects (Rossi, Para 0029, One approach that the classifier 22 can use to classify the incoming data item 20 is using parallel maximum similarity classification, described in detail below with reference to FIGS. 2A-2B).

Regarding claim 5, Rossi in view of Hwang teaches the computer system of claim 1, wherein each of the data package objects further contains metadata describing control limits (Rossi, Para 0048, As also described below, as the method described with reference to FIGS. 5A-5C takes into account the age of the data used to make the label assignment, the assignment can also serve as a prediction that the label 13 will remain the same for a certain period of time).

Regarding claim 6, Rossi in view of Hwang teaches the computer system of claim 5, wherein each of the data package objects, responsive to the requests, further loads into the memory the metadata such that the machine learning model is trained subject to the control limits (Rossi, Para 0065, The parameters that maximize the previous objective function are then used for predicting the class labels of the nodes at time t+1. In other words, the parameters are tested using past temporal relational data and the parameters that result in the best accuracy are selected to predict the class labels at time t+1).
Regarding claim 7, Rossi in view of Hwang teaches the computer system of claim 1, wherein the processor is further programmed to construct and utilize an experiment package object that generates the requests such that all of the batches from all of the data package objects are loaded into and removed from the memory in random order for multiple epochs of model training (Rossi, Para 0068, the servers 18 can adapt a decision tree learner, which can use a decision tree representation for weighting the features by the temporal influence of the edges and attributes, as described above and below, and a set of decision tree models can be learned sampling or randomization of the data representation).

Regarding claim 8, Rossi in view of Hwang teaches the computer system of claim 1, wherein the processor is further programmed to construct and utilize a pipeline object that performs a predefined and configurable sequence of data processing operations on the signal data to generate the features for modeling (Rossi, Para 0069, FIGS. 2A-2B are flow diagrams showing a method 30 for parallel maximum similarity classification in accordance with one embodiment).

Regarding claim 9, Rossi in view of Hwang teaches the computer system of claim 1, wherein the processor is further programmed to construct and utilize a model package object that contains the machine learning model and a taxonomy of all parameters required to reconstruct the machine learning model after training (Rossi, Para 0041, The classifier 22 identifies a neighborhood of vertices representing training data items 12 that are within a certain distance of the vertex v representing the incoming data item 20 that is being classified).
Regarding claim 10, Rossi in view of Hwang teaches the computer system of claim 1, wherein the processor is further programmed to save the data package objects, experiment package objects, pipeline objects, or model package objects as serialized file objects that can be stored and loaded into the memory for re-use (Rossi, Para 0087, The training data items 12 are maintained in the database 11, each of the training data items associated with time series data 16 describing the connections 15 and the attributes 14 of those training data items through a plurality of time points (step 101)).

Regarding claim 11, Rossi teaches an embedded system comprising: a hardware registry; and a microcontroller programmed to construct and utilize a plurality of data package objects that each contains signal data describing time-series values for parameters (Para 0007, the system and methods described below allow time series forecasting to be performed on multivariate time series represented as vertices in graphs with arbitrary structures as well as the prediction of a future class label for data points represented by vertices in a graph), organizes the signal data into batches having a size less than the hardware registry (Para 0007, The system and methods are well-suited for processing data in a streaming or online setting and naturally handle training data with skewed or unbalanced class labels), identifies the batches according to indices, and responsive to requests, provides output identifying the indices in randomly shuffled or arbitrary order (Para 0068, The use of the decision tree learner would provide decision tree models, such as random forests or bagged decision trees, which may be used for the prediction by averaging over the results given by the models), loads into the hardware registry one of the batches such that features of the signal data of the one of the batches can be used to train a machine learning model to predict time-series parameter outputs from time-series parameter inputs (Para 0010, predicting the label associated with the incoming data item based on the scores associated with the label at a future point of time).

Rossi does not appear to explicitly teach removes from the hardware registry the one of the batches to prevent the one of the batches and other of the batches from completely occupying all of the hardware registry at a same time. In analogous art, Hwang teaches removes from the hardware registry the one of the batches to prevent the one of the batches and other of the batches from completely occupying all of the memory at a same time (Para 0058, the agent selects an existing memory entry to delete when the memory becomes full).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Rossi to include the teaching of Hwang. One of ordinary skill in the art would be motivated to implement this modification in order to process large amounts of data, as taught by Hwang (Para 0006, Example embodiments provide a memory-based reinforcement learning method and system that may read/write a large amount of data without being limited by hardware by providing a memory-based reinforcement learning model capable of storing optional information in streaming data).

Claim 14 is the embedded system claim corresponding to the computer system claim 4, and is analyzed and rejected accordingly. Claim 15 is the embedded system claim corresponding to the computer system claim 5, and is analyzed and rejected accordingly. Claim 16 is the embedded system claim corresponding to the computer system claim 6, and is analyzed and rejected accordingly. Claim 17 is the embedded system claim corresponding to the computer system claim 7, and is analyzed and rejected accordingly. Claim 18 is the embedded system claim corresponding to the computer system claim 8, and is analyzed and rejected accordingly.
Claim 19 is the embedded system claim corresponding to the computer system claim 9, and is analyzed and rejected accordingly. Claim 20 is the embedded system claim corresponding to the computer system claim 10, and is analyzed and rejected accordingly.

Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Rossi in view of Hwang, further in view of Pantaleano et al. (US 20100082143 A1), hereafter Pantaleano.

Regarding claim 2, Rossi in view of Hwang teaches the computer system of claim 1, as shown above. Rossi in view of Hwang does not appear to explicitly teach wherein the signal data describes time-series values for parameters of manufacturing equipment. In analogous art, Pantaleano teaches wherein the signal data describes time-series values for parameters of manufacturing equipment (Para 0010, recording information during a product's manufacture, and correlating relevant historical data with the recorded information real-time). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Rossi in view of Hwang to include the teaching of Pantaleano. One of ordinary skill in the art would be motivated to implement this modification in order to improve manufacturing conditions, as taught by Pantaleano (Abs, Systems and methods for efficiently improving manufacturing conditions are presented herein).

Claim 12 is the embedded system claim corresponding to the computer system claim 2, and is analyzed and rejected accordingly.

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Rossi in view of Hwang, further in view of Li et al. (US 20200334526 A1), hereafter Li.

Regarding claim 3, Rossi in view of Hwang teaches the computer system of claim 1, as shown above. Rossi in view of Hwang does not appear to explicitly teach wherein the machine learning model is a sequence to sequence model.
In analogous art, Li teaches wherein the machine learning model is a sequence to sequence model (Para 0024, The mechanisms disclosed herein can be used for classification and, when combined with other aspects, become part of a system that performs sequence to sequence conversion such as speech recognition, image captioning, machine translation, and so forth). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Rossi in view of Hwang to include the teaching of Li. One of ordinary skill in the art would be motivated to implement this modification in order to improve machine learning performance, as taught by Li (Para 0026, improving the capacity of the network to capture long temporal context information).

Claim 13 is the embedded system claim corresponding to the computer system claim 3, and is analyzed and rejected accordingly.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Brooks Hale, whose telephone number is 571-272-0160. The examiner can normally be reached 9am to 5pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sanjiv Shah, can be reached on (571) 272-4098. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/B.T.H./
Examiner, Art Unit 2166

/SANJIV SHAH/
Supervisory Patent Examiner, Art Unit 2166

Prosecution Timeline

Nov 17, 2022: Application Filed
Feb 06, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572584: DATA STORAGE METHOD AND APPARATUS BASED ON BLOCKCHAIN NETWORK (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561344: CLASSIFICATION INCLUDING CORRELATION (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561309: CORRELATION OF HETEROGENOUS MODELS FOR CAUSAL INFERENCE (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561375: ENHANCED SEARCH RESULT GENERATION USING MULTI-DOCUMENT SUMMARIZATION (granted Feb 24, 2026; 2y 5m to grant)
Patent 12555669: SYSTEMS AND METHODS FOR GENERATING AN INTEGUMENTARY DYSFUNCTION NOURISHMENT PROGRAM (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 49% (80% with interview, +31.4%)
Median Time to Grant: 3y 3m
PTA Risk: Low

Based on 74 resolved cases by this examiner. Grant probability derived from career allow rate.
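The headline figures above are reproducible from the examiner's career counts. A quick check (assumes the interview lift is applied as additive percentage points, which is an interpretation of the dashboard's numbers, not something it states):

```python
# Career counts from the Examiner Intelligence section.
granted, resolved = 36, 74

base = round(granted / resolved * 100)  # career allow rate, %
interview_lift = 31.4                   # percentage points

with_interview = round(base + interview_lift)
print(base, with_interview)  # 49 80
```

This matches the reported 49% grant probability and the 80% with-interview projection.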
