DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This communication is in response to the Amendment filed 10/10/2025.
Response to Arguments
4. Claims 1 – 20 are pending in the present application. After a further search and a thorough examination of the present application, claims 1 – 20 are rejected.
5. Applicant's arguments filed with respect to claims 1 – 20 have been fully considered, but they are moot in view of the new grounds of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1 – 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kakuda et al. (US 20240394603 A1) ("Kakuda" hereinafter) in view of Carranza et al. (US 20190325348 A1) ("Carranza" hereinafter).
With respect to claims 1, 17, and 20,
Kakuda discloses a method comprising: receiving a representation of a parameter (figure 1, #111 input data, Kakuda) of a first clustering model (figure 1, #121, Kakuda), wherein each of the first clustering model (figure 1, #121, Kakuda) and the representation of the parameter is associated with training data (the #111 input data is the training data; paragraphs 10, 23 teach a clustering model configured to generate a trained clustering model when a training dataset is input and to classify the training dataset into N clusters, Kakuda); based on the parameter, generating a second clustering model (figure 1, #131: the trained clustering model is equivalent to the second clustering model; paragraphs 10, 23, Kakuda); providing, to the second clustering model, a prediction request (figure 1, design condition x; paragraphs 15 – 17, 37 – 39 teach acquiring data used for the predicting and identifying, using the trained clustering model generated by the program of generating a model for predicting a material characteristic, when the data used for the predicting is input, Kakuda); and generating, by using the second clustering model (figure 1, #131: the trained clustering model is equivalent to the second clustering model; paragraphs 10, 23, Kakuda), a prediction result based on the prediction request (figure 1, characteristic value y; paragraph 69 teaches that a response variable (characteristic value) corresponding to the explanatory variable (design condition) used as the input is obtained as output data, and paragraphs 103 – 105 teach that the prediction device outputs the predicted characteristic value as prediction data for the input data (design condition x) of the prediction target, Kakuda).
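By way of illustration only (not part of the record), the claimed technique of transferring a learned parameter from a first clustering model into a second model that serves predictions can be sketched as follows. All names here are hypothetical and drawn from neither the application nor the cited references; this is a minimal sketch assuming the parameter consists of learned cluster centroids.

```python
import math

# Hypothetical parameter exported from a trained "first" clustering model:
# the learned cluster centroids in a 2-dimensional space.
first_model_parameter = {"centroids": [[0.0, 0.0], [10.0, 10.0]]}

class SecondClusteringModel:
    """A 'second' model built from the parameter alone, without retraining."""
    def __init__(self, parameter):
        self.centroids = parameter["centroids"]

    def predict(self, point):
        # Serve a prediction request: assign the point to its nearest centroid.
        distances = [math.dist(point, c) for c in self.centroids]
        return distances.index(min(distances))

model = SecondClusteringModel(first_model_parameter)
print(model.predict([1.0, 2.0]))   # nearest to centroid 0
print(model.predict([9.0, 11.0]))  # nearest to centroid 1
```

The point of the sketch is that the second model is constructed purely from the representation of the parameter; no training algorithm runs when it is generated.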
Kakuda teaches that the prediction model generating device trains the clustering model and the prediction model using a training dataset stored in a material data storage part and generates a trained clustering model and a trained prediction model, but Kakuda does not explicitly teach that this is performed in accordance with software libraries, as claimed.
However, Carranza teaches using data for training in accordance with software libraries (figure 1, #104, #106). Paragraphs 11 and 24 teach a machine programming solver that utilizes machine learning models to process generated code, determine the algorithms (e.g., methods, classes, sub-sections of code, etc.) of the code, and generate recommendations to replace one or more of the algorithms of the code with algorithms that are more efficient for a particular parameter (e.g., speed, memory, resources, security, etc.) and better suited for the purpose of the new code. During training, examples disclosed therein utilize feature vectors (e.g., representative of algorithms) from libraries (e.g., stored locally at the computing device or at an external computing device and/or server) and internal code to generate clusters of feature vectors representative of algorithms, where each cluster corresponds to a similar functionality (e.g., a sorting cluster, a transmission cluster, a searching cluster, etc.). The blocks of code may be provided by the example libraries, internal code, and parts of new code that do not correspond to a cluster. The internal code #106 may be internal repositories (e.g., git repositories) with optimized implementations, internal documentation (e.g., apache libraries), etc., stored within the processing device.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Carranza with the teachings of Kakuda because the combination incorporates the advantage of using the clustering training models for different kinds of data, including code. Furthermore, the combination also brings updated training data, keeping the clustering models dynamically trained and up-to-date.
With respect to claims 2 and 18,
Kakuda as modified discloses the method of claim 1, wherein the second clustering model is operative to make predictions using the parameter (figure 1(b) #131 trained clustering model is the second clustering model used to make predictions, paragraphs 14 – 17, Kakuda).
With respect to claims 3 and 19,
Kakuda as modified discloses the method of claim 1, wherein each of the first clustering model and the representation of the parameter is determined using the training data (figure 1, #111 and #121; paragraphs 23 – 24 teach a clustering model configured to generate a trained clustering model when a training dataset is input and to classify the training dataset into N clusters, and a weight defining part configured to calculate a distance between centroids of the classified clusters, and a weight between the clusters, using both the calculated distance between the centroids of the clusters and a parameter representing a feature of the training dataset, Kakuda).
With respect to claim 4,
Kakuda as modified discloses the method of claim 1, wherein each of the first clustering model and the representation of the parameter was created in a training environment by applying a training algorithm to the training data using the first set of software libraries (figure 1, #111, #121; paragraphs 23 – 25, Kakuda).
With respect to claim 5,
Kakuda as modified discloses the method of claim 4, wherein generating the second clustering model does not involve applying the training algorithm to the training data (figure 1(b), #131: the second clustering model is already a trained clustering model, so generating it does not involve applying the training algorithm, Kakuda).
With respect to claim 6,
Kakuda as modified discloses the method of claim 1, wherein the second clustering model executes in a prediction environment using the second set of software libraries (figure 1(b): the second clustering model #131 is executed in a prediction device environment, Kakuda; and paragraphs 18, 25 – 27, Carranza).
With respect to claim 7,
Kakuda as modified discloses the method of claim 1, wherein generating the second clustering model comprises loading the parameter into the second clustering model (figure 1, #131: the second clustering model, i.e., the trained clustering model, is generated by loading the input data parameters and the training data into the first clustering model, which then generates the trained clustering model, i.e., the second clustering model, Kakuda).
With respect to claim 8,
Kakuda as modified discloses the method of claim 1, wherein the parameter defines, for a cluster in the first clustering model and in the second clustering model, a centroid of the cluster in an n-dimensional space or a distance from a boundary of the cluster to the centroid in the n-dimensional space (figures 1, 3 and paragraphs 23 – 25, 61 – 63 teach a clustering model configured to generate a trained clustering model when a training dataset is input and to classify the training dataset into N clusters; a weight defining part configured to calculate a distance between centroids of the classified clusters and a weight between the clusters, using both the calculated distance between the centroids of the clusters and a parameter representing a feature of the training dataset; and a prediction model configured to generate respective trained prediction models using the clusters and the weight, Kakuda).
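By way of illustration only (not part of the record), a parameter of the kind recited in claim 8 — a centroid in n-dimensional space together with a distance from the cluster boundary to the centroid — can be represented as in the following sketch. The names and values are hypothetical and come from neither reference.

```python
import math

# Hypothetical cluster parameter: a centroid in a 3-dimensional space and
# the distance from the cluster boundary to the centroid (i.e., a radius).
cluster_parameter = {"centroid": [2.0, 3.0, 4.0], "boundary_distance": 5.0}

def inside_cluster(point, parameter):
    """True if the point falls within the cluster's boundary."""
    d = math.dist(point, parameter["centroid"])
    return d <= parameter["boundary_distance"]

print(inside_cluster([2.0, 3.0, 4.0], cluster_parameter))   # True: at the centroid
print(inside_cluster([20.0, 3.0, 4.0], cluster_parameter))  # False: outside the radius
```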
With respect to claim 9,
Kakuda as modified discloses the method of claim 1, wherein the first set of software libraries is different from the second set of software libraries (paragraphs 18, 25 – 27 and 35 – 36, Carranza).
With respect to claim 10,
Kakuda as modified discloses the method of claim 1, wherein the first clustering model is based on k-means clustering, Gaussian mixture model clustering, density-based spatial clustering of applications with noise, or ordering points to identify a clustering structure (paragraphs 18, 60, 142, Kakuda).
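By way of illustration only (not part of the record), k-means clustering — the first of the algorithms listed in claim 10 — alternates an assignment step and a centroid-update step. The sketch below is a minimal stdlib-only version with simplified initialization (the first k points, rather than the random or k-means++ seeding real implementations use); the data and names are hypothetical.

```python
import math

def kmeans(points, k, iterations=10):
    """Minimal k-means: alternate assignment and centroid recomputation."""
    # Simplified initialization: the first k points serve as initial centroids.
    centroids = [list(p) for p in points[:k]]
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid's cluster.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            groups[i].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        for i, g in enumerate(groups):
            if g:
                centroids[i] = [sum(col) / len(g) for col in zip(*g)]
    return centroids

pts = [[0.0, 0.1], [9.9, 10.0], [0.1, 0.0], [10.0, 9.9]]
centroids = kmeans(pts, 2)
# Converges to centroids near [0.05, 0.05] and [9.95, 9.95].
```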
With respect to claim 11,
Kakuda as modified discloses the method of claim 1, wherein the parameter is one of a plurality of parameters of the first clustering model, and wherein the first clustering model was generated based on determining the plurality of parameters by applying a training algorithm to the training data using the first set of software libraries (figures 1, 3, paragraphs 142 – 143, Kakuda).
With respect to claim 12,
Kakuda as modified discloses the method of claim 1 further comprising: receiving second training data; updating the second clustering model based on the parameter and the second training data in accordance with the second set of software libraries; and providing, to the second clustering model as updated, a second prediction request and generating, by using the second clustering model as updated, a second prediction result based on the second prediction request (paragraphs 18, 25 – 27 teach updating the training data to include new code, stating that periodically, aperiodically, and/or based on a trigger (e.g., when new data has been added to the example training database), the example feature extractor may extract features from the blocks of code stored in the example training database to update the example cluster MLM and/or the example recommender MLM based on the new training data stored in the example training database, Carranza; and figure 1, paragraphs 10, 15 – 17, 23, Kakuda).
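By way of illustration only (not part of the record), updating a clustering model from a stored parameter plus newly received training data — rather than retraining from scratch — might look like the following sketch. The running-mean update, the data, and all names are hypothetical and come from neither reference.

```python
# Hypothetical incremental update: the second model keeps, per cluster,
# a centroid and a count of absorbed points; each new training point
# shifts its nearest centroid by a running mean, without full retraining.
clusters = [
    {"centroid": [0.0, 0.0], "count": 10},
    {"centroid": [10.0, 10.0], "count": 10},
]

def update(clusters, new_points):
    for p in new_points:
        # Assign the new point to the cluster with the nearest centroid.
        best = min(clusters, key=lambda c: sum((a - b) ** 2
                                               for a, b in zip(p, c["centroid"])))
        # Incremental (running-mean) centroid update.
        best["count"] += 1
        n = best["count"]
        best["centroid"] = [c + (x - c) / n for c, x in zip(best["centroid"], p)]

update(clusters, [[1.0, 1.0], [9.0, 9.0]])
print(clusters[0]["centroid"])  # shifted slightly toward [1.0, 1.0]
```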
With respect to claim 13,
Kakuda as modified discloses the method of claim 12, wherein updating the second clustering model based on the parameter and the second training data comprises adjusting sizes of one or more clusters defined by the second clustering model or assignments of objects to the one or more clusters defined by the second clustering model (paragraphs 18, 25 – 27 and 35 – 36, Carranza).
With respect to claim 14,
Kakuda as modified discloses the method of claim 13, further comprising: receiving a representation of a second parameter of the second clustering model as updated, wherein the second parameter is in accordance with the second set of software libraries, and updating the first clustering model based on the second parameter in accordance with the first set of software libraries (paragraphs 18, 25 – 27 teach updating the training data to include new code, stating that periodically, aperiodically, and/or based on a trigger (e.g., when new data has been added to the example training database), the example feature extractor may extract features from the blocks of code stored in the example training database to update the example cluster MLM and/or the example recommender MLM based on the new training data stored in the example training database, Carranza; and figure 1, paragraphs 10, 15 – 17, 23, Kakuda).
With respect to claim 15,
Kakuda as modified discloses the method of claim 1, wherein the first set of software libraries is based on a first programming language and the second set of software libraries is based on a second programming language (paragraphs 18, 25 – 27 and 35 – 36, Carranza).
With respect to claim 16,
Kakuda as modified discloses the method of claim 15, wherein the first programming language is interpreted and dynamically typed, and wherein the second programming language is compiled and statically typed (paragraphs 18, 25 – 27 and 35 – 36, Carranza).
Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20210109906 A1 teaches specifying the job identifier to a clustering analysis application, causing the analysis configuration, the clustering models, and the data regularizer to be loaded into the clustering analysis application, and receiving a plurality of scores resulting from a cluster analysis performed by the clustering analysis application based on the job identifier.
US 20030212713 A1 teaches in-database clustering comprises a first data table and a second data table, each data table including a plurality of rows of data, means for building a clustering model using the first data table using a portion of the first data table, wherein the portion of the first data table is selected by partitioning, density summarization, or active sampling of the first data table, and means for applying the clustering model using the second data table to generate apply output data.
US 20180365249 A1 teaches data clustering in a model property vector space. Input data is received comprising a plurality of data instances in a data vector space. An output is generated comprising a plurality of data segments and one or more clustering rules. For each data cluster, a predictive model is constructed for each data segment of the plurality of data segments.
US 20170154280 A1 teaches incremental generation of models with dynamic clustering. A first set of data is received. A first set of clusters based on the first set of data is generated. A respective first set of models for the first set of clusters is created. A second set of data is received. A second set of clusters, based on the second set of data and based on a subset of the first set of data, is generated. A respective second set of models for the second set of clusters, based on a subset of the first set of models and based on the second set of data, is created.
US 20230004842 A1 teaches retrieving a set of nodes and executing various clustering algorithms in order to segment the nodes into different clusters. The systems and methods described therein also describe generating one or more prediction models, such as time-series models, for each cluster of nodes. When a node with unknown/limited data and attributes is identified, the methods and systems described therein first identify the cluster most similar to the new node, identify a corresponding prediction model, and execute the identified prediction model to calculate a future attribute of the new node.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAVNEET K GMAHL whose telephone number is (571)272-5636.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SANJIV SHAH can be reached on (571) 272-4098. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NAVNEET GMAHL/Examiner, Art Unit 2166 Dated: 2/2/2026
/SANJIV SHAH/Supervisory Patent Examiner, Art Unit 2166