Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
2. This Office action is in response to the amendment filed on 10/29/2025. Claims 1-20 are pending and have been considered below.
Claim Rejections - 35 USC § 102
3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
4. Claims 1-2, 6-9, 13-16, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Dang et al. (US 2018/0293488).
Claim 1. Dang discloses a method for parallel predictive modelling, the method comprising:
receiving a configuration file associated with a predictive concept at a production layer of a predictive modelling platform ("The input review data 102(2) may be distributed to the various layers for parallel processing, to save time and/or more efficiently use computing resources.") ([0049]), the predictive modelling platform comprising the production layer and a consumption layer ("As shown in the example, the structure may include convolution pooling layers 402 at varying levels of specificity. For example, the layers 402 may include a phrase feature map group, a sentence feature map group, a paragraph feature map group, and/or a context feature map group.") ([0049], fig. 4), wherein the production layer and the consumption layer are communicatively connected by a distributed messaging system ("The input review data 102(2) may be distributed to the various layers for parallel processing, to save time and/or more efficiently use computing resources. A stacked layer may be generated as output from each layer 402.") ([0049], fig. 4);
pulling, by the consumption layer, data from one or more heterogeneous data storage units based on a data source location specified by the configuration file ("The prediction engine 110 may retrieve the review data 102(1) and the review data 102(2) from the data storage 108… The data storage 108 may be any suitable type of data storage, such as a cloud service, data warehouse, distributed big data platform, relational and/or non-relational databases, and so forth") ([0036])…("…the data storage 108 may be external to the server computing device(s) 104, and accessible over one or more networks") ([0034]) [wherein there must be some sort of directive in the engine regarding the location and type of data];
identifying, by the production layer, a job request based on the configuration file ("For example, the layers 402 may include a phrase feature map group, a sentence feature map group, a paragraph feature map group, and/or a context feature map group. The input review data 102(2) may be distributed to the various layers for parallel processing, to save time and/or more efficiently use computing resources.") ([0049], fig. 4);
sending, by the distributed messaging system, the job request to the consumption layer, as one of a plurality of job requests to be passed to a predictive model of a plurality of predictive models implemented by a processing container ("For example, the layers 402 may include a phrase feature map group, a sentence feature map group, a paragraph feature map group, and/or a context feature map group. The input review data 102(2) may be distributed to the various layers for parallel processing, to save time and/or more efficiently use computing resources.") ([0049], fig. 4)…("to train the predictive model(s)") ([0041]), wherein the predictive model of the plurality of predictive models is specified by the configuration file ("Ratings posted with reviews 102(1) may be used to train one or more models 120. The model(s) 120 may be employed to predict ratings 122 for posted reviews 102(2) that are not initially associated with ratings.") ([0042], fig. 2);
obtaining, from the processing container, a forecast as an output of the predictive model (A stacked layer may be generated as output from each layer 402.) ([0049], fig. 4);
sending, by the distributed messaging system, the forecast to the production layer (Accordingly, one or more stacked convolutional and/or pooling layers, with the same or different structures, can be additionally inserted into the layers 402.) ([0049], fig. 4); and
determining, by the production layer, one or more values of the predictive concept based on the forecast and an operator (user moment), the operator specified by the configuration file (the combined features are reduced through over-fitting reduction 406, and provided to a (e.g., full) connection layer 408 which is then employed to generate the prediction results 122.) ([0050], fig. 4)…(a user moment is a characteristic or attribute of an individual that is related to their behavior online, such as their search habits, products or topics they have expressed an interest in, and so forth) ([0038])…(Incorporating user moment information into the prediction process can further improve the prediction accuracy) ([0069], [0071]).
Claim 2. Dang discloses the method of claim 1, wherein the predictive model is a deep learning model (Implementations provide for a prediction engine 110 that builds the prediction models 120 through distributed parallel model building, using deep learning that employs CNNs. With respect to distributed parallel model building, implementations provide a framework to build large scale models in parallel using deep learning technology) ([0038]).
Claim 6. Dang discloses the method of claim 1, wherein the configuration file specifies a feature set for the predictive model used to generate the one or more values of the predictive concept (the layers 402 may include a phrase feature map group, a sentence feature map group, a paragraph feature map group, and/or a context feature map group.) ([0049], fig. 4).
Claim 7. Dang discloses the method of claim 1, further comprising:
generating, by the production layer, a control command comprising a parameter based on the determined one or more values of the predictive concept (The prediction engine 110 may build one or more prediction models 120, and use the prediction model(s) 120 to generate prediction results 122. The prediction results 122 may be stored on the server computing device(s) 104, and/or elsewhere, and may be transmitted to one or more prediction output devices 124 for display and use by data consumers, such as marketing professionals.) ([0037]); and
sending the control command, via a network to an external system (The prediction results 122 may be stored on the server computing device(s) 104, and/or elsewhere, and may be transmitted to one or more prediction output devices 124 for display and use by data consumers, such as marketing professionals.) ([0037]).
Claims 8-9, 13-16 and 20 represent the platform and medium of claims 1-2 and 6-7, respectively, and are rejected on the same rationale.
Claim Rejections - 35 USC § 103
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 3, 5, 10, 12, 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Dang et al. (US 2018/0293488) in view of Lin et al. (US 8,364,613).
Claim 3. Dang discloses the method of claim 1, but fails to explicitly disclose further comprising: identifying, by the production layer, a model training request based on the configuration file; sending, by the distributed messaging system, to the consumption layer, a request for updated training data from a data mapper provided by the predictive modelling platform; and training the predictive model based on the updated training data.
However, Lin discloses identifying, by the production layer, a model training request based on the configuration file ("In some implementations, models are trained by a training system 416 which receives requests from the prediction API 408 to initiate training and check the status of training ...") (Col. 10, ll. 47-50); sending, by the distributed messaging system, to the consumption layer, a request for updated training data from a data mapper provided by the predictive modelling platform ("In some implementations, models are trained by a training system 416 which receives requests from the prediction API 408 to initiate training and check the status of training ...") (Col. 10, ll. 47-50)…("The prediction API 408 provides the training system 416 with the location of training data 320 to be used in training a particular model. For example, the training data, such as a range of cells in a spreadsheet, can be obtained from the application data 318 through use of the web application API 406 and then provided to the training system ...") (Col. 10, ll. 50-56); and training the predictive model based on the updated training data ("Computer programs embodying such machine learning algorithms can be operable to: input previously trained predictive models and additional training data; implement the machine learning algorithm to generate an updated predictive model that is representative of the original training dataset and the additional training data; and output the updated predictive model in a suitable computer readable and executable format.") (Col. 4, ll. 46-53). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Dang to request model training based on updated training data, such that the system can identify new parameters associated with the training model and update the model to reflect the changes.
Claim 5. Dang and Lin disclose the method of claim 3. Dang discloses wherein the one or more heterogeneous data storage units comprise a plurality of heterogeneous data storage units ("The prediction engine 110 may retrieve the review data 102(1) and the review data 102(2) from the data storage 108… The data storage 108 may be any suitable type of data storage, such as a cloud service, data warehouse, distributed big data platform, relational and/or non-relational databases, and so forth") ([0036])…("…the data storage 108 may be external to the server computing device(s) 104, and accessible over one or more networks") ([0034]), and Lin further discloses wherein the data mapper comprises a data store for the predictive modelling platform ("The servers can communicate with each other and with storage systems (e.g., application data storage system 318 and training data storage system 320) at various times using one or more computer networks or other communication means.") (Col. 9, l. 64 - Col. 10, l. 2), and wherein the data mapper assigns a plurality of connections between a plurality of heterogeneous data storage units ("A map-reduce system includes application-independent map modules configured to read input data and to apply at least one application-specific map operation to the input data to produce intermediate data values. The map operation is automatically parallelized across multiple servers. Intermediate data structures are used to store the intermediate data values.") (Col. 9, ll. 39-45). One would have been motivated to do so such that the system can identify new parameters associated with the training model and update the model to reflect the changes.
Claims 10, 12, 17 and 19 represent the platform and medium of claims 3 and 5, respectively, and are rejected on the same rationale.
7. Claims 4, 11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Dang et al. (US 2018/0293488) in view of Lin et al. (US 8,364,613) and further in view of ELKABETZ et al. (US 2019/0339416).
Claim 4. Dang and Lin disclose the method of claim 3, but fail to explicitly disclose wherein the configuration file comprises a history of predictive models used to generate the one or more values of the predictive concept. However, ELKABETZ discloses wherein the configuration file comprises a history of predictive models used to generate the one or more values of the predictive concept (The ML training module (678) retrieves an untrained, partially trained, or previously trained ML model from the system database (320), retrieves ML training data from the ML training data store (683), and uses the training data to train or retrain the ML model…). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Dang and Lin to include the historical training models in the configuration files, such that the system can use previous training models to train or retrain new models.
Claims 11 and 18 represent the platform and medium of claim 4, respectively, and are rejected on the same rationale.
Response to Arguments
8. Applicant’s arguments and amendments filed on 10/29/2025 have been fully considered but are moot in view of the new ground(s) of rejection.
Conclusion
9. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
10. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (See PTO-892).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Phenuel S. Salomon whose telephone number is (571) 270-1699. The examiner can normally be reached on Mon-Fri 7:00 A.M. to 4:00 P.M. (Alternate Friday Off) EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed can be reached on (571) 272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-3800.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PHENUEL S SALOMON/Primary Examiner, Art Unit 2146