Prosecution Insights
Last updated: April 19, 2026
Application No. 18/162,116

SYSTEMS AND METHODS FOR TRAINING PREDICTIVE MODELS ON SEQUENTIAL DATA USING 1-DIMENSIONAL CONVOLUTIONAL LAYERS IN A BLIND LEARNING APPROACH

Non-Final OA: §101, §103, §112
Filed: Jan 31, 2023
Examiner: KWON, JUN
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: TripleBlind, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 38% (At Risk)
OA Rounds: 1-2
To Grant: 4y 3m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 38% (grants only 38% of cases; 26 granted / 68 resolved; -16.8% vs TC avg)
Interview Lift: +46.2% (resolved cases with interview)
Avg Prosecution: 4y 3m typical timeline; 34 applications currently pending
Total Applications: 102 (career history, across all art units)

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§103: 41.4% (+1.4% vs TC avg)
§102: 7.6% (-32.4% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 68 resolved cases

Office Action

Rejections: §101, §103, §112
Detailed Action

Claims 1-20 are currently pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 10 is objected to because of the following informality: "The method of claim 1, further comprising:" should read "The method of claim 8, further comprising:". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites "organizing training data into a two-dimensional format; … simulating a sequence model using a one-dimensional convolutional neural network;". It is unclear how the one-dimensional convolutional neural network, which can only process sequential (one-dimensional) data, processes the organized training data, which is in a two-dimensional format. For purposes of examination, the examiner interprets the limitation to mean that the two-dimensional data is converted to one-dimensional data to be processed by the Conv1D.

Claims 2-7 and 10 depend from claim 1 and therefore inherit the same deficiency. Claim 8 is a method claim having limitations similar to those of claim 1.
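For orientation only (an editorial sketch, not part of the Office Action record): a one-dimensional convolution is conventionally applied directly to a two-dimensional (time × feature) array, sliding the kernel along the time axis while treating the feature dimension as input channels, so no flattening to one dimension is strictly required. A minimal numpy illustration, with all names and shapes invented for the example:

```python
import numpy as np

def conv1d(x, kernels, bias):
    """Valid 1-D convolution over the time axis of a (time, features) array.

    x:       (T, C_in)       two-dimensional input (time x feature)
    kernels: (K, C_in, C_out) filter weights, kernel size K
    bias:    (C_out,)
    returns: (T - K + 1, C_out) feature maps
    """
    T, C_in = x.shape
    K, _, C_out = kernels.shape
    out = np.empty((T - K + 1, C_out))
    for t in range(T - K + 1):
        window = x[t:t + K]  # (K, C_in) slice taken along time only
        out[t] = np.tensordot(window, kernels, axes=([0, 1], [0, 1])) + bias
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(128, 4))    # 128 time steps, 4 features
w = rng.normal(size=(5, 4, 8))   # kernel size 5, 8 output channels
b = np.zeros(8)
maps = conv1d(x, w, b)
print(maps.shape)                # (124, 8): the kernel slid along time only
```

The point of the sketch is that the "one-dimensional" in Conv1D refers to the single axis the kernel slides over, not to the dimensionality of the input array.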
Therefore, claim 8 is rejected under the same rationale as claim 1 above. Claim 9 depends from claim 8 and therefore inherits the same deficiency. Claim 11 is a system claim having limitations similar to those of claim 1 and is therefore rejected under the same rationale as claim 1 above. Claims 12-20 depend from claim 11 and therefore inherit the same deficiency.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1:

2A Prong 1: "A method comprising: organizing training data into a two-dimensional format;" (a mental process of observation – merely recites a data-organization step that can be performed with the aid of pen and paper); "normalizing the training data to yield normalized training data;" (a mental process of observation – merely recites a data-normalization step that can be performed with the aid of pen and paper); "predicting,".

2A Prong 2: "simulating a sequence model using a one-dimensional convolutional neural network;" (mere instructions to apply an exception using a generic computer component, MPEP 2106.05(f) – the limitation merely recites executing a convolutional neural network with sequential data [Paragraphs 0132, 0135, and 0141]); "collecting feature maps that result from previous layers in the one-dimensional convolutional neural network into a single layer;" (mere instructions to apply an exception using a generic computer component, MPEP 2106.05(f) – the limitation merely recites processing input data using a layer in the convolutional neural network [Paragraphs 0132 and 0135]); "inputting an output from the single layer into a fully connected network;" (mere instructions to apply an exception using a generic computer component – the limitation merely recites executing a convolutional neural network including a fully connected network with sequential data [Paragraphs 0132 and 0141]); "predicting, based on the fully connected network operating on the output of the single layer" (mere instructions to apply an exception using a generic computer component, MPEP 2106.05(f)). The additional elements identified above, alone or in combination, do not integrate the judicial exception into a practical application; they are mere insignificant extra-solution activity, and a combination of generic computer functions is implemented to perform the abstract idea identified above.

2B: "simulating a sequence model using a one-dimensional convolutional neural network;" (mere instructions to apply an exception using a generic computer component, MPEP 2106.05(f) – the limitation merely recites executing a convolutional neural network with sequential data [Paragraphs 0132, 0135, and 0141]); "collecting feature maps that result from previous layers in the one-dimensional convolutional neural network into a single layer;" (mere instructions to apply an exception using a generic computer component, MPEP 2106.05(f) – the limitation merely recites processing input data using a layer in the convolutional neural network [Paragraphs 0132 and 0135]); "inputting an output from the single layer into a fully connected network;" (mere instructions to apply an exception using a generic computer component – the limitation merely recites executing a convolutional neural network including a fully connected layer with sequential data [Paragraphs 0132 and 0141]); "predicting, based on the fully connected network operating on the output of the single layer" (mere instructions to apply an exception using a generic computer component, MPEP 2106.05(f)). The additional elements identified above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are well-understood, routine, and conventional activity implemented with generic computer functions to perform the abstract idea identified above.

Regarding claim 2:
2A Prong 1: Incorporates the rejection of claim 1.
2A Prong 2: "wherein the two-dimensional format comprises a first dimension in time and a second dimension representing a feature." (a field of use and technological environment, MPEP 2106.05(h) – limiting the format of the data to time and a feature)
2B: "wherein the two-dimensional format comprises a first dimension in time and a second dimension representing a feature." (a field of use and technological environment, MPEP 2106.05(h) – limiting the format of the data to time and a feature)

Regarding claim 3:
2A Prong 1: "The method of claim 1, wherein the normalizing of the training data normalizes the training data into a range between and including [-1, 1]." (a mental process of observation – merely recites a data-normalization step that can be performed with the aid of pen and paper)
2A Prong 2: This judicial exception is not integrated into a practical application.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding claim 4:
2A Prong 1: Incorporates the rejection of claim 1.
2A Prong 2: "wherein the one-dimensional convolutional neural network comprises a Conv1D convolutional neural network." (a field of use and technological environment, MPEP 2106.05(h) – limiting the type of the CNN to a Conv1D CNN)
2B: "wherein the one-dimensional convolutional neural network comprises a Conv1D convolutional neural network." (a field of use and technological environment, MPEP 2106.05(h) – limiting the type of the CNN to a Conv1D CNN)

Regarding claim 5:
2A Prong 1: "The method of claim 1, further comprising: selecting a time window over the training data, wherein the time window covers a plurality of rows in the training data." (a mental process of evaluation – anyone skilled in the art can manually select time windows from the training data)
2A Prong 2: This judicial exception is not integrated into a practical application.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding claim 6:
2A Prong 1: Incorporates the rejection of claim 5.
2A Prong 2: "wherein the time window is one of static and dynamic." (a field of use and technological environment, MPEP 2106.05(h))
2B: "wherein the time window is one of static and dynamic." (a field of use and technological environment, MPEP 2106.05(h))

Regarding claim 7:
2A Prong 1: Incorporates the rejection of claim 1.
2A Prong 2: "wherein the training data comprises time series data." (a field of use and technological environment, MPEP 2106.05(h))
2B: "wherein the training data comprises time series data." (a field of use and technological environment, MPEP 2106.05(h))

Regarding claim 8:

2A Prong 1: "A method comprising: organizing training data into a two-dimensional format;" (a mental process of observation – merely recites a data-organization step that can be performed with the aid of pen and paper); "normalizing the training data to yield normalized training data;" (a mental process of observation – merely recites a data-normalization step that can be performed with the aid of pen and paper); "predicting,".

2A Prong 2: "training a convolutional neural network on the normalized training data to yield a trained convolutional neural network;" (mere instructions to apply an exception using a generic computer component, MPEP 2106.05(f) – a generic training process for a neural network model); "predicting, based on input data to the trained convolutional neural network" (mere instructions to apply an exception using a generic computer component, MPEP 2106.05(f)). The additional elements identified above, alone or in combination, do not integrate the judicial exception into a practical application; they are mere insignificant extra-solution activity, and a combination of generic computer functions is implemented to perform the abstract idea identified above.
2B: "training a convolutional neural network on the normalized training data to yield a trained convolutional neural network;" (mere instructions to apply an exception using a generic computer component, MPEP 2106.05(f) – a generic training process for a neural network model); "predicting, based on input data to the trained convolutional neural network" (mere instructions to apply an exception using a generic computer component, MPEP 2106.05(f)). The additional elements identified above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are well-understood, routine, and conventional activity implemented with generic computer functions to perform the abstract idea identified above.

Regarding claim 9:
2A Prong 1: Incorporates the rejection of claim 8.
2A Prong 2: "wherein the training data comprises time series data." (a field of use and technological environment, MPEP 2106.05(h))
2B: "wherein the training data comprises time series data." (a field of use and technological environment, MPEP 2106.05(h))

Claim 10 is a method claim having limitations similar to those of claim 5 and is therefore rejected under the same rationale as claim 5 above.

Regarding claim 11:
2A Prong 1: Claim 11 is a system claim having limitations similar to those of claim 1 and is therefore rejected under the same rationale as claim 1 above.
2A Prong 2: "A system comprising: a processor; and a computer-readable storage device storing instructions which, when executed by the processor, cause the processor to perform operations comprising:" (mere instructions to apply an exception using a generic computer component, MPEP 2106.05(f))
2B: "A system comprising: a processor; and a computer-readable storage device storing instructions which, when executed by the processor, cause the processor to perform operations comprising:" (mere instructions to apply an exception using a generic computer component, MPEP 2106.05(f))

Claim 12 is a system claim having limitations similar to those of claim 2 and is therefore rejected under the same rationale as claim 2 above. Claim 13 is a system claim having limitations similar to those of claim 3 and is therefore rejected under the same rationale as claim 3 above. Claim 14 is a system claim having limitations similar to those of claim 4 and is therefore rejected under the same rationale as claim 4 above.

Regarding claim 15:
2A Prong 1: Claim 15 is a system claim having limitations similar to those of claim 5 and is therefore rejected under the same rationale as claim 5 above.
2A Prong 2: "The system of claim 11, wherein the computer-readable storage device stores additional instructions which, when executed by the processor, cause the processor to perform operations further comprising:" (mere instructions to apply an exception using a generic computer component, MPEP 2106.05(f))
2B: "The system of claim 11, wherein the computer-readable storage device stores additional instructions which, when executed by the processor, cause the processor to perform operations further comprising:" (mere instructions to apply an exception using a generic computer component, MPEP 2106.05(f))

Claim 16 is a system claim having limitations similar to those of claim 6 and is therefore rejected under the same rationale as claim 6 above. Claim 17 is a system claim having limitations similar to those of claim 7.
Therefore, claim 17 is rejected under the same rationale as claim 7 above.

Regarding claim 18:
2A Prong 1: "The system of claim 11, wherein normalizing the training data occurs using min-max normalization." (a mental process of observation – merely recites a data-normalization (scaling) step that can be performed with the aid of pen and paper)
2A Prong 2: This judicial exception is not integrated into a practical application.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding claim 19:
2A Prong 1: "The system of claim 11, wherein a same padding is used to keep a size of the input data unchanged through the one-dimensional convolutional neural network." (a mental process of evaluation – adding additional data (padding) into the data matrix can be done with the aid of pen and paper)
2A Prong 2: This judicial exception is not integrated into a practical application.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding claim 20:
2A Prong 1: "The system of claim 11, wherein normalizing data further comprises normalizing to a negative lower value and a positive higher value." (a mental process of observation – merely recites a data-normalization (scaling) step that can be performed with the aid of pen and paper)
2A Prong 2: This judicial exception is not integrated into a practical application.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4, 7-12, 14, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Abuadbba et al. ("Can We Use Split Learning on 1D CNN Models for Privacy Preserving Training?", 2020, hereinafter 'Abuadbba') in view of Mattioli et al. ("A 1D CNN for high accuracy classification and transfer learning in motor imagery EEG-based brain-computer interface", 2021, hereinafter 'Mattioli').

Regarding claim 1, Abuadbba teaches:

"A method comprising: … normalizing the training data to yield normalized training data;" ([Abuadbba, page 307, right col., 3.1.1 ECG Dataset and Preprocessing, lines 9-11] discloses normalizing the samples and feeding them to the 1D CNN.)

"simulating a sequence model using a one-dimensional convolutional neural network;" ([Abuadbba, page 307, right col., 3.1.1 ECG Dataset and Preprocessing, lines 9-11] discloses normalizing the samples and feeding them to the 1D CNN. [Abuadbba, page 308, right col., 3.2 Splitting 1D CNN, line 5 – page 309, left col., line 11] discloses splitting the 1D CNN into a client side and a server side, and then propagating the output of the client (the propagation is interpreted as 'simulating') to the first hidden layer of the server.)

"collecting feature maps that result from previous layers in the one-dimensional convolutional neural network into a single layer;" ([Abuadbba, page 307, right col., 3.1.1 ECG Dataset and Preprocessing, lines 9-11] discloses normalizing the samples and feeding them to the 1D CNN. [Abuadbba, page 308, right col., 3.2 Splitting 1D CNN, line 5 – page 309, left col., line 11] discloses splitting the 1D CNN into a client side and a server side, and then propagating the output of the client to the first hidden layer of the server. [Abuadbba, page 309, left col., line 1 – right col., 3.2.2 Server, lines 1-4] discloses forward propagating in the i-th layer of the client-side layers, sending the activation a^l from the l-th layer of the client side to the server, after which the server continues forward propagation. [Abuadbba, page 308, left col., Figure 3; 3.2.3 Influence on Performance, lines 1-7] discloses that a single layer on the server receives the activation.)

"inputting an output from the single layer into a fully connected network; and" ([Abuadbba, page 307, right col., 3.1.2 1D CNN Model Architecture, line 1 – page 308, line 4; Figure 3] discloses that the 1D CNN model contains at least two fully connected layers and propagates the input signal from the 1D convolution layer side, generating output from the softmax layer side. [Abuadbba, page 308, right col., 3.2.1 Client, lines 1-6, and right col., 3.2.2 Server, lines 1-4] discloses that the first l layers are held by the client while the other layers are held by the server, which indicates that the last two fully connected layers processing the output from the client side are held by the server side. [Abuadbba, page 308, left col., Figure 3; 3.2.3 Influence on Performance, lines 1-7] discloses that a single layer on the server receives the activation.)

"predicting, based on the fully connected network operating on the output of the single layer, a target value associated with the training data." ([Abuadbba, page 308, left col., Figures 2 and 3] shows fully connected layers included in the 1D CNN. [Abuadbba, page 309, right col., 3.2.2 Server, lines 1-9] discloses processing the data received from the client side, then calculating (predicting) the activated output from the last layer, which is interpreted as the target value associated with the training data. The loss is calculated based on the label received from the client. [Abuadbba, page 310, left col., 3.2.3 Influence on Performance, lines 1-7] shows that the two-conv-layer or three-conv-layer 1D CNN models were used to generate the target value.)

However, Abuadbba does not specifically disclose: "organizing training data into a two-dimensional format;"

Mattioli teaches: "organizing training data into a two-dimensional format;" ([Mattioli, page 5, left col., 2.3. One-dimensional convolutional neural network (1D-CNN), line 1 – right col., line 12] discloses processing the 2D data using the 1D-CNN. [Mattioli, page 3, left col., 2.1. Dataset and ROIs, line 3 to bottom] and [Mattioli, page 4, Figure 1] collectively disclose that the ROIs are two-dimensional time-series data, where each data point is collected from a different location (channel).)

Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Abuadbba and Mattioli, to use Mattioli's method of organizing training data into a two-dimensional format and processing the n-dimensional data using a 1D CNN to implement the machine learning model prediction method of Abuadbba.
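As editorial context for the split-learning arrangement cited above (not part of the Office Action record): the client computes the first l layers and transmits only the cut-layer activation a^l, and the server finishes the forward pass. A schematic numpy sketch, with layer sizes and weights invented for the example:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class Client:
    """Holds the first l layers; emits only the cut-layer activation a^l."""
    def __init__(self, weights):
        self.weights = weights            # list of (in, out) weight matrices
    def forward(self, x):
        a = x
        for w in self.weights:
            a = relu(a @ w)
        return a                          # activation sent to the server

class Server:
    """Receives a^l and continues forward propagation to a softmax output."""
    def __init__(self, weights):
        self.weights = weights
    def forward(self, a):
        for w in self.weights[:-1]:
            a = relu(a @ w)
        logits = a @ self.weights[-1]
        e = np.exp(logits - logits.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)   # class probabilities

rng = np.random.default_rng(1)
client = Client([rng.normal(size=(16, 32))])                       # l = 1 layer
server = Server([rng.normal(size=(32, 32)), rng.normal(size=(32, 5))])
x = rng.normal(size=(8, 16))              # batch of 8 flattened samples
probs = server.forward(client.forward(x))
print(probs.shape)                        # (8, 5); each row sums to 1
```

The raw samples never leave the client; only the intermediate activation crosses the split, which is the privacy property the Abuadbba reference studies.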
The suggestion and/or motivation for doing so is to improve the performance of the prediction method by reducing the dimension of the input data and thus avoiding overfitting.

Regarding claim 2, Abuadbba in view of Mattioli teaches: "The method of claim 1, wherein the two-dimensional format comprises a first dimension in time and a second dimension representing a feature." ([Mattioli, page 5, left col., 2.3. One-dimensional convolutional neural network (1D-CNN), line 1 – right col., line 12] discloses processing the 2D data using the 1D-CNN. The input matrix dimension is M x N, where M is the length of the time window considered (time) and N is the number of EEG channels (features).)

Regarding claim 4, Abuadbba teaches: "The method of claim 1, wherein the one-dimensional convolutional neural network comprises a Conv1D convolutional neural network." ([Abuadbba, page 307, right col., 3.1.1 ECG Dataset and Preprocessing, lines 9-11] discloses normalizing the samples and feeding them to the 1D CNN. [Abuadbba, page 308, right col., 3.2 Splitting 1D CNN, line 5 – page 309, left col., line 11] discloses splitting the 1D CNN into a client side and a server side, and then propagating the output of the client to the first hidden layer of the server.)

Regarding claim 7, Abuadbba teaches: "The method of claim 1, wherein the training data comprises time series data." ([Abuadbba, page 311, left col., 4.4 Dynamic Time Warping (DTW), lines 7-10] and [Abuadbba, page 315, right col., 8 CONCLUSION, lines 1-3] collectively disclose that the training data comprises time series data.)

Regarding claim 8, Abuadbba teaches:

"A method comprising: … normalizing the training data to yield normalized training data;" ([Abuadbba, page 307, right col., 3.1.1 ECG Dataset and Preprocessing, lines 9-11] discloses normalizing the samples and feeding them to the 1D CNN.)

"training a convolutional neural network on the normalized training data to yield a trained convolutional neural network; and" ([Abuadbba, page 307, right col., 3.1.1 ECG Dataset and Preprocessing, lines 9-11] discloses normalizing the samples and feeding them to the 1D CNN. [Abuadbba, page 309, right col., 3.2.2 Server, lines 1-9] discloses processing the data received from the client side, then calculating (predicting) the activated output from the last layer, which is interpreted as the target value associated with the training data. The loss is calculated based on the label received from the client to train the 1D CNN.)

"predicting, based on input data to the trained convolutional neural network, a target value associated with the training data." ([Abuadbba, page 308, left col., Figures 2 and 3] shows fully connected layers included in the 1D CNN. [Abuadbba, page 309, right col., 3.2.2 Server, lines 1-9] discloses processing the data received from the client side, then calculating (predicting) the activated output from the last layer, which is interpreted as the target value associated with the training data. The loss is calculated based on the label received from the client. [Abuadbba, page 310, left col., 3.2.3 Influence on Performance, lines 1-7] shows that the two-conv-layer or three-conv-layer 1D CNN models were used to generate the target value.)

Abuadbba does not specifically disclose: "organizing training data into a two-dimensional format;"

Mattioli teaches: "organizing training data into a two-dimensional format;" ([Mattioli, page 5, left col., 2.3. One-dimensional convolutional neural network (1D-CNN), line 1 – right col., line 12] discloses processing the 2D data using the 1D-CNN. [Mattioli, page 3, left col., 2.1. Dataset and ROIs, line 3 to bottom] and [Mattioli, page 4, Figure 1] collectively disclose that the ROIs are two-dimensional time-series data, where each data point is collected from a different location (channel).)

Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Abuadbba and Mattioli, to use Mattioli's method of organizing training data into a two-dimensional format and processing the n-dimensional data using a 1D CNN to implement the machine learning model prediction method of Abuadbba. The suggestion and/or motivation for doing so is to improve the performance of the prediction method by reducing the dimension of the input data and thus avoiding overfitting.

Regarding claim 9, Abuadbba teaches: "The method of claim 8, wherein the training data comprises time series data." ([Abuadbba, page 311, left col., 4.4 Dynamic Time Warping (DTW), lines 7-10] and [Abuadbba, page 315, right col., 8 CONCLUSION, lines 1-3] collectively disclose that the training data comprises time series data.)

Claim 10 is a method claim having limitations similar to those of claim 5 and is therefore rejected under the same rationale as claim 5 above.
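The normalization step recited in claims 1 and 8, and narrowed to the inclusive range [-1, 1] in claim 3, can be sketched with min-max scaling. This is an editorial illustration (the function name and data are invented; nothing here comes from the references):

```python
import numpy as np

def minmax_to_range(x, lo=-1.0, hi=1.0):
    """Min-max scale each feature column of x into [lo, hi] inclusive."""
    xmin = x.min(axis=0)
    xmax = x.max(axis=0)
    span = np.where(xmax > xmin, xmax - xmin, 1.0)  # guard constant columns
    return lo + (x - xmin) * (hi - lo) / span

data = np.array([[0.0, 10.0],
                 [5.0, 20.0],
                 [10.0, 40.0]])   # (time, features) training data
scaled = minmax_to_range(data)
print(scaled)                     # each column now spans exactly [-1, 1]
```

Each column's minimum maps to -1 and its maximum to 1, so the scaled range is "between and including [-1, 1]" in the sense the claim recites.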
Regarding claim 11, Abuadbba teaches: "A system comprising: a processor; and a computer-readable storage device storing instructions which, when executed by the processor, cause the processor to perform operations comprising:" ([Abuadbba, page 307, left col., 2.2 Split Learning, lines 1-4, and right col., 3.1.1 ECG Dataset and Preprocessing, lines 1-3] indicates that the training is performed on the client and the server using the MIT-BIH dataset available online. The server is a computer that contains a computer-readable storage device storing instructions and uses a processor.) Claim 11 is a system claim having limitations similar to those of claim 1 and is therefore rejected under the same rationale as claim 1 above.

Claim 12 is a system claim having limitations similar to those of claim 2 and is therefore rejected under the same rationale as claim 2 above. Claim 14 is a system claim having limitations similar to those of claim 4 and is therefore rejected under the same rationale as claim 4 above. Claim 17 is a system claim having limitations similar to those of claim 7 and is therefore rejected under the same rationale as claim 7 above.

Regarding claim 19, Abuadbba teaches: "The system of claim 11, wherein a same padding is used to keep a size of the input data unchanged through the one-dimensional convolutional neural network." ([Abuadbba, page 307, right col., 3.1.2 1D CNN Model Architecture, lines 4-5; page 311, right col., 5.1 Adding More Hidden Layers, lines 4-6] discloses adding zero padding to keep the size of the input data unchanged. 'Same' padding is achieved by padding the original data with zeros, which is the same as zero padding.)

Claims 3, 13, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Abuadbba in view of Mattioli, and further in view of Lee (US 2020/0249651 A1, hereinafter 'Lee').

Regarding claim 3, Abuadbba in view of Mattioli teaches: "the method of claim 1, wherein the normalizing of the training data" ([Mattioli, page 4, right col., lines 12-22] discloses processing the training datasets for each ROI using min-max normalization.)

However, Abuadbba in view of Mattioli does not specifically disclose: "wherein the normalizing of the training data normalizes the training data into a range between and including [-1, 1]."

Lee teaches: "wherein the normalizing of the training data normalizes the training data into a range between and including [-1, 1]." ([Lee, 0057] discloses normalizing the data sample into a range of (-1, 1).)

Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Abuadbba, Mattioli, and Lee, to use Lee's method of normalizing the training data into a specific range to implement the machine learning model prediction method of Abuadbba. The suggestion and/or motivation for doing so is to improve the performance of the prediction method by organizing the input data and making it easier for the model to identify patterns in the input data.

Claim 13 is a system claim having limitations similar to those of claim 3 and is therefore rejected under the same rationale as claim 3 above.

Regarding claim 18, Abuadbba in view of Mattioli, and further in view of Lee, teaches: "The system of claim 11, wherein normalizing the training data occurs using min-max normalization." ([Lee, 0057] discloses normalizing the data sample into a range of (-1, 1). [Lee, 0060] discloses pre-training the machine learning model using the big dataset 821.)

Regarding claim 20, Abuadbba in view of Mattioli, and further in view of Lee, teaches: "The system of claim 11, wherein normalizing data further comprises normalizing to a negative lower value and a positive higher value." ([Lee, 0057] discloses normalizing the data sample into a range of (-1, 1).)

Claims 5-6 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Abuadbba in view of Mattioli, and further in view of Unnikrishnan et al. (US 2022/0139556 A1, hereinafter 'Unnikrishnan').

Regarding claim 5, Abuadbba in view of Mattioli teaches: "a time window over training data, wherein the time window covers a plurality of rows in the training data" ([Mattioli, page 5, left col., 2.3. One-dimensional convolutional neural network (1D-CNN), line 1 – right col., line 12] discloses processing the 2D data using the 1D-CNN. The input matrix dimension is M x N, where M is the length of the time window considered (time) and N is the number of EEG channels (features).)

However, Abuadbba in view of Mattioli does not specifically disclose: "further comprising: selecting a time window over the training data, wherein the time window covers a plurality of rows in the training data."

Unnikrishnan teaches: "further comprising: selecting a time window over the training data, wherein the time window covers a plurality of rows in the training data." ([Unnikrishnan, 0034] discloses selecting a particular time window that contains "w" rows from the training data.)

Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Abuadbba, Mattioli, and Unnikrishnan, to use Unnikrishnan's method of selecting a time window over training data to implement the machine learning model prediction method of Abuadbba.
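The time-window limitation at issue (a window covering a plurality of rows of the training data, as in claim 5) can be sketched as a sliding window over a (time, features) array. This is an editorial illustration with invented names and data, not material from the record:

```python
import numpy as np

def time_windows(data, w, stride=1):
    """Yield windows of w consecutive rows from (time, features) data."""
    for start in range(0, data.shape[0] - w + 1, stride):
        yield data[start:start + w]   # each window covers w rows

series = np.arange(20, dtype=float).reshape(10, 2)  # 10 time steps, 2 features
windows = list(time_windows(series, w=4))
print(len(windows), windows[0].shape)               # 7 windows of shape (4, 2)
```

A fixed `w` and `stride` would correspond to a static window; varying them between selections would make the window dynamic, which is the distinction claim 6 draws.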
The suggestion and/or motivation for doing so is to improve the efficiency of the prediction method by selecting a subset of the training data, thereby reducing its size.

Regarding claim 6, Abuadbba in view of Mattioli and further in view of Unnikrishnan teaches: The method of claim 5, wherein the time window is one of static and dynamic ([Unnikrishnan, 0034] discloses selecting a particular time window that contains "w" number of rows from the training data; this can be interpreted as 'static' because the selected time window does not change after selection, but also as 'dynamic' because it may be selected arbitrarily).

Regarding claim 15, Abuadbba in view of Mattioli teaches: The system of claim 11, wherein the computer-readable storage device stores additional instructions which, when executed by the processor, cause the processor to perform operations further comprising: ([Abuadbba, page 307, left col, 2.2 Split Learning, lines 1-4 and right col, 3.1.1 ECG Dataset and Preprocessing, lines 1-3] indicates that the training is performed on the client and the server using the MIT-BIH dataset available online; the server is a computer containing a computer-readable storage device that stores instructions executed by a processor).

Claim 15 is a system claim having limitations similar to claim 5. Therefore, claim 15 is rejected under the same rationale as claim 5 above. Claim 16 is a system claim having limitations similar to claim 6. Therefore, claim 16 is rejected under the same rationale as claim 6 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Azizjon et al., "1D CNN based network intrusion detection with normalization on imbalanced data", 2020 (teaches normalizing input data and processing the normalized data using a 1D CNN).

Zihao et al., "A Time Series Classification Method Based on 1DCNN-FNN", 2021 (teaches preprocessing multi-dimensional data into 1-dimensional data and processing the preprocessed data using a 1DCNN).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUN KWON, whose telephone number is (571) 272-2072. The examiner can normally be reached Monday - Friday, 7:30 AM - 4:30 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abdullah Kawsar, can be reached at (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/JUN KWON/
Examiner, Art Unit 2127

/ABDULLAH AL KAWSAR/
Supervisory Patent Examiner, Art Unit 2127

Prosecution Timeline

Jan 31, 2023
Application Filed
Oct 22, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602569
EXTRACTING ENTITY RELATIONSHIPS FROM DIGITAL DOCUMENTS UTILIZING MULTI-VIEW NEURAL NETWORKS
2y 5m to grant Granted Apr 14, 2026
Patent 12602609
UPDATING MACHINE LEARNING TRAINING DATA USING GRAPHICAL INPUTS
2y 5m to grant Granted Apr 14, 2026
Patent 12579436
Tensorized LSTM with Adaptive Shared Memory for Learning Trends in Multivariate Time Series
2y 5m to grant Granted Mar 17, 2026
Patent 12572777
Policy-Based Control of Multimodal Machine Learning Model via Activation Analysis
2y 5m to grant Granted Mar 10, 2026
Patent 12493772
LAYERED MULTI-PROMPT ENGINEERING FOR PRE-TRAINED LARGE LANGUAGE MODELS
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
38%
Grant Probability
84%
With Interview (+46.2%)
4y 3m
Median Time to Grant
Low
PTA Risk
Based on 68 resolved cases by this examiner. Grant probability derived from career allow rate.
