Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is in response to papers filed on 12/18/2025.
Claims 1, 3, 8, 10, 15, and 17 have been amended.
Claims 2, 7, 9, 14, and 16 have been cancelled.
No claims have been added.
Claims 1, 3-6, 8, 10-13, 15, and 17-20 are pending.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/18/2025 has been entered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-6, 8, 10-13, 15, and 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1:
The claims are directed to a process (method as introduced in Claim 1), a system (Claim 8), and a non-transitory computer-readable storage medium with executable instructions (Claim 15); thus, Claims 1, 3-6, 8, 10-13, 15, and 17-20 fall within one of the four statutory categories. See MPEP 2106.03.
Step 2A, Prong 1:
The claimed invention recites an abstract idea according to MPEP § 2106.04. The independent claims recite the following claim limitations which, as emphasized below, constitute an abstract idea.
Claims 1, 8, and 15 recite (as represented by the language of Claim 1):
A computer-implemented method of using machine learning to assess suppliers of products, the method comprising:
training, by one or more computer processors, a plurality of intermediate machine learning models using a training dataset comprising:
(i) a training set of testing, inspection, and/or certification (TIC) data associated with a test set of suppliers each being associated with a test set of products, (ii) a training set of transactional data associated with the test set of suppliers, and (iii) a training set of customer data associated with the test set of suppliers, wherein each intermediate machine learning model is trained on a different respective data type in the training dataset;
training, by the one or more computer processors, an aggregate machine learning model to assess domain-specific outputs from the plurality of intermediate machine learning models, wherein a hierarchical structure of the plurality of intermediate machine learning models and the aggregate machine learning model enables analysis of different data types;
storing the plurality of intermediate machine learning models and the aggregate machine learning model in a memory;
accessing, by the one or more computer processors, information associated with a supplier, the information comprising (i) a set of TIC data associated with a set of products offered by the supplier, and (ii) a set of transactional data associated with the set of products;
analyzing, by the one or more computer processors using the plurality of intermediate machine learning models, the information associated with the supplier, resulting in a plurality of intermediate outputs, wherein each intermediate output is a domain-specific output corresponding to the respective data type on which the corresponding intermediate machine learning model was trained;
analyzing, by the one or more computer processors using the aggregate machine learning model, the plurality of intermediate outputs, comprising the plurality of domain-specific outputs, to generate a unified output comprising (i) a performance prediction for the supplier, and (ii) a recommendation related to a lifecycle of at least one product of the set of products offered by the supplier;
accessing, by the one or more computer processors, information indicative of implementation of the recommendation by the supplier, the information comprising at least a set of transactional data associated with the at least one product; and
updating, by the one or more computer processors, at least one of the intermediate machine learning models using the information, to reflect how the recommendation was employed, wherein the at least one of the intermediate machine learning models that was updated is used in at least one subsequent input data analysis.
The claim limitations emphasized above, as drafted, recite a process that, under its broadest reasonable interpretation, covers the performance of commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; and/or business relations). Other than reciting a computer implementation, nothing in the claim elements precludes the steps from encompassing the performance of commercial or legal interactions, which represents the abstract idea of certain methods of organizing human activity. But for the recitation of generic computer system components, the claimed invention merely recites a process for making predictions and recommendations based on business relationship data (such as supplier and product data). For example, a user could make such predictions and recommendations by merely analyzing the business relationship data in their preferred manner and then updating the manner in which predictions and recommendations are made (i.e., the method of analysis used on the data) based on feedback and results pertaining to the recommendations.
Step 2A, Prong 2:
This judicial exception is not integrated into a practical application. In particular, the claims recite additional elements such as:
A computer-implemented method of using machine learning to assess suppliers of products, the method comprising:
training, by one or more computer processors, a plurality of intermediate machine learning models using a training dataset comprising:
(i) a training set of testing, inspection, and/or certification (TIC) data associated with a test set of suppliers each being associated with a test set of products, (ii) a training set of transactional data associated with the test set of suppliers, and (iii) a training set of customer data associated with the test set of suppliers, wherein each intermediate machine learning model is trained on a different respective data type in the training dataset;
training, by the one or more computer processors, an aggregate machine learning model to assess domain-specific outputs from the plurality of intermediate machine learning models, wherein a hierarchical structure of the plurality of intermediate machine learning models and the aggregate machine learning model enables analysis of different data types;
storing the plurality of intermediate machine learning models and the aggregate machine learning model in a memory;
[accessing the information] by the one or more computer processors;
[analyzing the information or outputs] by the one or more computer processors using the plurality of intermediate machine learning models;
by the one or more computer processors, [updating] at least one of the intermediate machine learning models using the information, wherein the at least one of the intermediate machine learning models that was updated is used in at least one subsequent input data analysis.
In particular, the additional elements cited above, beyond the abstract idea, are recited at a high level of generality and amount to no more than mere instructions to apply the judicial exception using generic computer components.
Accordingly, since the specification describes the additional elements in general terms, without describing the particulars, the additional elements may be broadly but reasonably construed as generic computing components being used to perform the judicial exception (see specification at [0092] and [0093]). These additional elements merely recite the words “apply it” (or an equivalent) with the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f).
Although there are multiple steps related to the use of machine learning (ML) models, the particular descriptions of these models are still recited at a high level of generality. For example, the training, use (analyzing), updating based on feedback, and subsequent use are steps employed by ML models in their expected functions as a tool for processing data. The recitation of multiple models does not provide any additional context that is significantly more than the abstract idea or significantly more than a tool to implement the abstract idea, as the individual models are recited at the same high level of generality and perform the same expected functions. Merely reciting a plurality of ML models (intermediate or aggregate) or the specific hierarchy/arrangement does not indicate any special use, practical application, improvement, etc., over the use of a single model or some other hierarchy/arrangement. Additionally, applying the ML models to the particular types of data in the claims does not affect how they are used or indicate any significant effect on how they perform or function.
Thus, the additional claim elements are not indicative of integration into a practical application, because the claims do not involve improvements to the functioning of a computer, or to any other technology or technical field (MPEP 2106.05(a)), the claims do not apply the abstract idea with, or by use of, a particular machine (MPEP 2106.05(b)), the claims do not effect a transformation or reduction of a particular article to a different state or thing (MPEP 2106.05(c)), and the claims do not apply or use the abstract idea in some other meaningful way beyond generally linking the use of the abstract idea to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception (MPEP 2106.05(e)). Therefore, the claims do not, for example, purport to improve the functioning of a computer. Nor do they effect an improvement in any other technology or technical field. Accordingly, the additional elements do not impose any meaningful limits on practicing the abstract idea and the claims are directed to an abstract idea.
Step 2B:
The claims do not include additional elements, individually or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept at Step 2B. Thus, the claims are not patent eligible.
Dependent Claims:
Claims 3-6, 10-13, and 17-20 recite further elements related to the data analysis, prediction, and recommendation steps of the parent claims. These activities fail to differentiate the claims from the related activities in the parent claims and fail to provide anything that would render the claimed invention significantly more than the identified abstract idea, as outlined below.
Claims 3, 10, and 17 recite “wherein analyzing the plurality of intermediate outputs to output the recommendation comprises: outputting, by the aggregate machine learning model, information indicating a set of changes related to a development of the at least one product or a supply chain associated with the supplier”, which further specifies additional types of output data, but does not lead toward eligibility. In regards to Claim 10, merely performing the steps with at least one processor configured to execute the set of computer-readable instructions does not integrate the abstract idea into a practical application or provide an inventive concept.
Claims 4, 11, and 18 recite “wherein the training set of TIC data associated with the test set of suppliers comprises at least one of: a set of testing reports, a set of inspection reports, or a set of certification reports, and wherein the training dataset further comprises a training set of regulatory data”, which specifies particular types of training data, but does not lead toward eligibility. The additional types of data are part of the abstract idea, and merely adding that data to the training data does not integrate the abstract idea into a practical application or provide an inventive concept.
Claims 5, 12, and 19 recite “wherein the training set of transactional data associated with the test set of suppliers comprises at least one of: sales amounts, sales quantities, returns amounts, or returns quantities”, which specifies particular types of training data, but does not lead toward eligibility. The additional types of data are part of the abstract idea, and merely adding that data to the training data does not integrate the abstract idea into a practical application or provide an inventive concept.
Claims 6, 13, and 20 recite “wherein the training set of customer data associated with the test set of suppliers comprises at least one of: reviews information, complaints information, or ratings information”, which specifies particular types of training data, but does not lead toward eligibility. The additional types of data are part of the abstract idea, and merely adding that data to the training data does not integrate the abstract idea into a practical application or provide an inventive concept.
The claims do not provide any new additional limitations or meaningful limits beyond the abstract idea that are not addressed above with respect to the independent claims; therefore, they do not integrate the abstract idea into a practical application, nor do they provide significantly more than the abstract idea. Thus, after considering all claim elements, both individually and as a whole, it has been determined that the claims do not integrate the judicial exception into a practical application or provide an inventive concept. Therefore, Claims 3-6, 10-13, and 17-20 are ineligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 3-6, 8, 10-13, 15, and 17-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Moazzami et al. (Pub. No. US 2020/0042903 A1) in view of Dhingra et al. (WO 2021186466 A1) in further view of Dickens et al. (US 2022/0182464 A1).
In regards to Claims 1, 8, and 15, Moazzami discloses:
A computer-implemented method/system of using machine learning to assess suppliers of products comprising: a memory storing a set of computer-readable instructions and at least one processor interfacing with the memory, and configured to execute the set of computer-readable instructions to cause the at least one processor to: (see at least [0006]; [0010]; [0013])
training, by one or more computer processors, a plurality of intermediate machine learning models using a training dataset, wherein each intermediate machine learning model is trained on a different respective data type in the training dataset (Abstract; Fig. 3; [0079] “…providing input data to a plurality of base models (intermediate machine learning models) to generate a plurality of intermediate outputs…”, different data is used to train the different intermediate models; [0029]; [0076]; [0077]; Claim 3, “…wherein training the base classification algorithms comprises using different training data or different machine learning techniques to specialize the different base models differently.” [0029]; [0076]; [0077], the base models (and associated learners) are trained on different data to be specialized to specific data (“…trained using the training data for which the other base learners are weaker…”, “…choose the training samples for S.sub.i from D that minimize the mutual information between S.sub.i and the previously-generated model(s)…”))
training, by the one or more computer processors, an aggregate machine learning model to assess domain-specific outputs from the plurality of intermediate machine learning models, wherein a hierarchical structure of the plurality of intermediate machine learning models and the aggregate machine learning model enables analysis of different data types; (Abstract; Fig. 3; [0079], the outputs of the intermediate models in the training are used to train the fusion (“aggregate”) models (each layer’s output trains the next layer); [0031], the base models can be used to produce domain-specific outputs (it is noted that the examples described in this paragraph are not limiting; they are only examples of how the ensemble learning system can be used, see also [0051]); [0029]; [0076]; [0077], the base models (and associated learners) are trained on different data to be specialized to specific data (“…trained using the training data for which the other base learners are weaker…”, “…choose the training samples for S.sub.i from D that minimize the mutual information between S.sub.i and the previously-generated model(s)…”); [0059], the outputs generated by the base models are not necessarily consistent and the fusion model combines them (different data outputs); Fig. 4; [0005]; [0059]; etc., which show the hierarchical structure of models and outputs/inputs)
storing the plurality of intermediate machine learning models and the aggregate machine learning model in a memory; (see at least [0006]; [0010]; [0013], the models are performed by processors that include memory, and the processor performs its functions using software, programs, code, etc., stored in the memory; one of ordinary skill in the art would understand that trained models (representing software, etc.) would be stored in the memory for future use as functions of the processor)
accessing, by the one or more computer processors, information (Abstract; Claim 9; etc., input data is entered into the models)
analyzing, by the one or more computer processors using the plurality of intermediate machine learning models, resulting in a plurality of intermediate outputs, wherein each intermediate output is a domain-specific output corresponding to the respective data type on which the corresponding intermediate machine learning model was trained; (Abstract; Fig. 4; [0005]; [0059]; [0063]; [0079]; Claim 1, which show the hierarchical structure of models and outputs/inputs, “…The subsequent layer of learners can process the data, generate associated models, and provide additional data to another subsequent layer of learners. The last layer of the machine learning system may typically include a single learner that generates a final fusion model for the hierarchy. The resulting hierarchical arrangement of models can then be deployed for use in analyzing input data 402.”, after training the model can be used to analyze input data, including intermediate and aggregated outputs; [0031], the base models can be used to produce domain-specific outputs (it is noted that the examples described in this paragraph are not limiting; they are only examples of how the ensemble learning system can be used, see also [0051]); [0029]; [0076]; [0077], the base models (and associated learners) are trained on different data to be specialized to specific data (“…trained using the training data for which the other base learners are weaker…”, “…choose the training samples for S.sub.i from D that minimize the mutual information between S.sub.i and the previously-generated model(s)…”); [0059], the outputs generated by the base models are not necessarily consistent and the fusion model combines them (different data outputs))
analyzing, by the one or more computer processors using the aggregate machine learning model, the plurality of intermediate outputs, comprising the plurality of domain-specific outputs, to generate a unified output comprising (i) a prediction; (Abstract; Fig. 4; [0005]; [0059]; [0063]; [0079]; Claim 1, which show the hierarchical structure of models and outputs/inputs, in which the domain-specific outputs of the base models are fed into the fusion model for combination (unified output); [0063]; Claim 2, the output can include a prediction)
Moazzami discloses the above system/method for training and using machine learning models, wherein the models are arranged as a set of intermediate models that analyze different types of data and whose results are fed into an aggregating model that combines them to produce a prediction result. Moazzami does not explicitly disclose that the training dataset comprises (i) a training set of testing, inspection, and/or certification (TIC) data associated with a test set of suppliers each being associated with a test set of products, (ii) a training set of transactional data associated with the test set of suppliers, and (iii) a training set of customer data associated with the test set of suppliers; that the analyzed information is associated with a supplier, the information comprising (i) a set of TIC data associated with a set of products offered by the supplier, and (ii) a set of transactional data associated with the set of products; or that the analyzed information comprises [a] performance prediction for the supplier.
However, Dhingra teaches the use of test data related to suppliers with related product, customer, and transactional data for training artificial intelligence models, and inputting information related to suppliers’ products and transactions for making predictions and recommendations (see at least [0050], database 402 includes supplier, customer, and transactional data; [0048]-[0050], the data in database 402 is used to plan and forecast; [0047], “…repository 210 includes an internal information database 402…”; [0042], the data in repository 210 is used for training the models; [0007]; [0038]; [0042], the data from repository 210 is used with machine learning models to make forecasts (predictions)).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Moazzami so as to have included the use of test data related to suppliers with related product, customer, and transactional data for training artificial intelligence models, and inputting information related to suppliers’ products and transactions for making predictions and recommendations, as taught by Dhingra.
Moazzami discloses a “base” method/system that uses multiple types of data for training and performing models, as shown above. Dhingra teaches a comparable method/system that uses multiple types of data for training and performing models, as shown above. Dhingra also teaches an embodiment in which test data related to suppliers with related product, customer, and transactional data is used for training artificial intelligence models, and in which information related to suppliers’ products and transactions is input for making predictions and recommendations, as shown above. One of ordinary skill in the art would have recognized that the adaptation of the use of test data related to suppliers with related product, customer, and transactional data for training artificial intelligence models, and inputting information related to suppliers’ products and transactions for making predictions and recommendations, to Moazzami could be performed with the technical expertise demonstrated in the applied references. (See KSR [127 S. Ct. at 1739]: "The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.")
Moazzami/Dhingra discloses the above system/method for training and using the particular arrangement of machine learning models (including intermediate and aggregate) and the use of test/training data related to suppliers with related product, customer, and transactional data. Moazzami/Dhingra does not explicitly disclose that information indicative of implementation of the recommendation is used to update the model(s). However, Dickens teaches:
(i) a performance prediction for the supplier and (ii) a recommendation related to a lifecycle of at least one product of the set of products offered by the supplier ([0031]; [0034]; [0042]; etc., the rankings indicate performance predictions for the different products (solutions) offered by the supplier(s)/resource(s); the various solutions provide recommendations related to the deployment (part of the lifecycle of the product))
accessing, by the one or more computer processors, information indicative of implementation of the recommendation by the supplier, the information comprising at least a set of transactional data associated with the at least one product; ([0044]; [0045], the results of the selected cloud-based storage solution (configuration, deployment, resources) including how it is implemented (“…monitoring the network deployment to learn how users are implementing the deployments and using the recommended configurations to thereby improve future recommendations…”) and used on at least one subsequent input analysis (“…monitoring the network deployment to learn how users are implementing the deployments and using the recommended configurations to thereby improve future recommendations (e.g., subsequent iterations of…”); the selected solution represents a recommendation by the supplier regarding a product (the storage solution configuration, deployment, resources, etc., representing a product provided by the supplier), see also [0017]-[0019]; [0029]-[0031])
updating, by the one or more computer processors, at least one machine learning model using the information, to reflect how the recommendation was employed, wherein the at least one machine learning model that was updated is used in at least one subsequent input data analysis ([0044]; [0045], the results of the selected cloud-based storage solution (configuration, deployment, resources) including how it is implemented (“…monitoring the network deployment to learn how users are implementing the deployments and using the recommended configurations to thereby improve future recommendations (e.g., subsequent iterations of…”) and used on at least one subsequent input analysis (“…monitoring the network deployment to learn how users are implementing the deployments and using the recommended configurations to thereby improve future recommendations…”); the selected solution represents a recommendation by the supplier regarding a product (the storage solution configuration, deployment, resources, etc., representing a product provided by the supplier), and the selecting and implementing of the solution represents a transaction with the supplier, see also [0017]-[0019]; [0029]-[0031])
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Moazzami/Dhingra so as to have included (i) a performance prediction for the supplier and (ii) a recommendation related to a lifecycle of at least one product of the set of products offered by the supplier; accessing, by the one or more computer processors, information indicative of implementation of the recommendation by the supplier; and updating, by the one or more computer processors, at least one machine learning model using the information, to reflect how the recommendation was employed, wherein the at least one machine learning model that was updated is used in at least one subsequent input data analysis, as taught by Dickens, in order to ensure that the best recommendations are made by ensuring that the models are optimized and improved by real-world results (Dickens, [0014]; [0030]; [0045], “…to thereby improve future recommendations…”).
In regards to Claims 3, 10, and 17, Moazzami discloses the above system/method for training and using machine learning models, including intermediate outputs, as described above. Moazzami does not explicitly disclose, but Dhingra teaches:
outputting, by [a] machine learning model, information indicating a set of changes related to a development of the at least one product or a supply chain associated with the supplier ([0038], forecast includes positive or negative predictions regarding multiple factors, including demand, inventory, production, materials, price, sales, revenue; [0055], recommendations are provided for mitigating forecast parameters, including improving negative forecasts (which can include parameters related to inventory, production, materials which would be related to supply chain or product development))
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have further modified the system of Moazzami so as to have included outputting, by [a] machine learning model, information indicating a set of changes related to a development of the product or a supply chain associated with the supplier, as taught by Dhingra.
Moazzami discloses a “base” method/system that uses multiple types of data for training and performing models to make predictions, as shown above. Dhingra teaches a comparable method/system that uses multiple types of data for training and performing models to make predictions, as shown above. Dhingra also teaches an embodiment in which information indicating a set of changes related to a development of the product or a supply chain associated with the supplier is output by the machine learning model, as shown above. One of ordinary skill in the art would have recognized that the adaptation of outputting, by [a] machine learning model, information indicating a set of changes related to a development of the product or a supply chain associated with the supplier, to Moazzami could be performed with the technical expertise demonstrated in the applied references. (See KSR [127 S. Ct. at 1739]: "The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.")
In regards to Claims 4, 11, and 18, Moazzami/Dhingra discloses all of the above limitations. While Moazzami/Dhingra discloses a method for analyzing supplier, product, customer, and transaction data, Moazzami/Dhingra does not disclose that the training data includes a set of testing reports, a set of inspection reports, a set of certification reports, or regulatory data.
However, the Examiner asserts that identifying the training data as including a set of testing reports, a set of inspection reports, a set of certification reports, or regulatory data is simply a label for the data and adds little, if anything, to the claimed acts or steps, and thus does not serve to distinguish over the prior art. Any difference relating merely to the meaning and information conveyed through labels (i.e., the specific type of training data sources), which does not explicitly alter or impact the steps of the method, does not patentably distinguish the claimed invention from the prior art.
Therefore, it would have been obvious to a person of ordinary skill in the art at the time the invention was made to have the cited types of training data be included in the training data of Moazzami/Dhingra because the type of data or data source does not functionally alter or relate to the steps of the method, and merely labeling the information differently from that in the prior art does not patentably distinguish the claimed invention.
In regards to Claims 5, 12, and 19, Moazzami/Dhingra discloses all of the above limitations. While Moazzami/Dhingra discloses a method for analyzing supplier, product, customer, and transaction data, Moazzami/Dhingra does not disclose that the transactional data includes sales amounts, sales quantities, returns amounts, or returns quantities.
However, the Examiner asserts that identifying the transactional data as including sales amounts, sales quantities, returns amounts, or returns quantities is simply a label for the data and adds little, if anything, to the claimed acts or steps, and thus does not serve to distinguish over the prior art. Any difference relating merely to the meaning and information conveyed through labels (i.e., the specific type of transactional data), which does not explicitly alter or impact the steps of the method, does not patentably distinguish the claimed invention from the prior art.
Therefore, it would have been obvious to a person of ordinary skill in the art at the time the invention was made to have the cited types of transactional data be included in the transactional data of Moazzami/Dhingra because the type of data or data source does not functionally alter or relate to the steps of the method, and merely labeling the information differently from that in the prior art does not patentably distinguish the claimed invention.
In regards to Claims 6, 13, and 20, Moazzami/Dhingra discloses all of the above limitations. While Moazzami/Dhingra discloses a method for analyzing supplier, product, customer, and transaction data, Moazzami/Dhingra does not disclose that the customer data includes reviews information, complaints information, or ratings information.
However, the Examiner asserts that identifying the customer data as including reviews information, complaints information, or ratings information is simply a label for the data and adds little, if anything, to the claimed acts or steps, and thus does not serve to distinguish over the prior art. Any difference relating merely to the meaning and information conveyed through labels (i.e., the specific type of customer data), which does not explicitly alter or impact the steps of the method, does not patentably distinguish the claimed invention from the prior art.
Therefore, it would have been obvious to a person of ordinary skill in the art at the time the invention was made to have the cited types of customer data be included in the customer data of Moazzami/Dhingra because the type of data or data source does not functionally alter or relate to the steps of the method, and merely labeling the information differently from that in the prior art does not patentably distinguish the claimed invention.
Additional Relevant Prior Art Identified but not Relied Upon
Abbo et al. (WO 2016118979 A2). Discloses machine learning techniques for analyzing supplier data to make performance predictions (see at least [0448]; [0482]).
Capelo et al. (Pub. No. US 2022/0108223 A1). Discloses prediction models with multiple architectures and arrangements of machine learning techniques, including aggregating sub-models (see at least [0039]).
Lah (Patent No. US 11,494,721 B1). Discloses a recommendation system that includes updating the model(s) based on feedback pertaining to results usage (feedback loop) (see at least Abstract; column 1, SUMMARY OF THE INVENTION, paragraph 1; column 3, paragraph 4; column 17, paragraph 3).
Mayr et al. (Pub. No. US 20200090314 A1). Discloses machine learning techniques for analyzing TIC (see at least [0002]; [0040]; [0057]).
Maurice (Pub. No. US 2023/0179507 A1). Discloses a unified recommendation engine including aggregating sub-models (see at least [0001]; [0014]-[0016]).
Medina et al. (Pub. No. US 2019/0087772 A1). Discloses machine learning techniques for analyzing data (such as supplier, customer, product, transaction, etc.) to make predictions regarding sales and recommendations (see at least [0128]; [0152]).
Raghu et al. (Pub. No. US 2021/0081895 A1). Discloses machine learning techniques for analyzing TIC (see at least [0005]; [0023]).
Response to Arguments
Applicant’s arguments filed 12/18/2025 have been fully considered but they are not persuasive.
I. Rejection of Claims under 35 U.S.C. §101:
Applicant argues that the claimed invention provides an improvement to the technological field and that it addresses a problem in the field. Applicant asserts that the claims address a technological problem because “training and use of the machine learning models enables the systems and methods to process large datasets that the existing systems are unable to analyze as a whole" and states that this results in “improved processing time by the systems…reduce the overall amount of data retrieval and communication necessary for the analyses of marketplace data, reducing traffic bandwidth…”. However, the specification (including the cited paragraphs) only provides bare assertions of these alleged improvements or benefits. Applicant fails to provide evidence demonstrating how the alleged improvement is achieved in a meaningful manner. For example, Applicant fails to provide any background or evidence demonstrating that existing systems are unable to analyze/process large datasets as a whole, such as how/why they are deficient and/or how the particular hierarchy of models recited would address this specific deficiency. Nor does the specification demonstrate how/why the particular hierarchy of models would improve the functioning of the computer, such as improving processing time, reducing the overall amount of data retrieval and communication necessary for the analyses of marketplace data, and/or reducing traffic bandwidth, including with respect to existing systems. Applicant merely asserts that these alleged problems exist and that the claimed invention provides the alleged solutions and benefits.
See MPEP 2106.05(a), Improvements to the Functioning of a Computer or To Any Other Technology or Technical Field (“If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology.”).
Applicant argues that the claimed invention amounts to an inventive concept because the machine learning models provide a specific technical architecture and are not used in their expected functions. However, merely because the models use different data types and are arranged in a specific manner does not demonstrate that they are not used in their expected manner. In the instant case, the models are merely trained on data and used to receive input and produce output, as is expected of a model. The fact that the claims recite specific data does not change that the models are merely a tool for processing that data in the way a model is expected to. The models, and the recitation of their being trained and used on different data types, are recited at a high level of generality; the claims merely state that the models are used on different data types without providing anything significant to demonstrate that the models represent anything beyond a generic tool. Additionally, the specific hierarchy/architecture also does not represent anything beyond a generic tool because it is merely a series of models feeding output into another model that performs modeling activities, regardless of what data types are input/output.
It is also noted that the rejections do not (and have not) stated that this specific combination of elements was well-understood, routine, or conventional, and have not relied upon this rationale; therefore, no Berkheimer evidence has been provided.
The machine learning models are tools used in their expected function, such as intaking data and producing an output, demonstrating that they can be performed using generic or general-purpose technology. The fact that they perform expected functions does not mean that the office action is indicating that the combination of elements was well-understood, routine, or conventional.
Related remarks from the previous office action are provided here for reference:
Additionally…the fact that each ML is performed on a different type of data does not significantly affect the functioning or activities performed. It is not clear that the use of different ML models on different data types is significantly different than using one ML (or another arrangement) on the data. For example, there is no evidence to demonstrate that the particular arrangement would provide any significant improvement such as increased efficiency, increased speed, improved accuracy, or any other benefit. Additionally, the type of data used in the claim merely applies the claimed invention to a particular environment and there is no evidence that the ML recommendation system would perform in a significantly different manner if used in a different environment with different input data.
II. Rejection of Claims under 35 U.S.C. §103:
Applicant argues that Moazzami does not train the intermediate models on different data types (specifically the data types recited in the claims) and that Moazzami trains the models on the same training data. However, Moazzami uses subsets of the training data to train respective base (intermediate) models with different specializations for different aspects of a process (for example, minimizing the mutual information between the models). Even if the models are trained on data from the same data set, that does not mean the data set does not contain different data types. Nor does training/performing the models on a single problem space indicate that all data types are homogeneous or that it uses a different hierarchy.
Regarding the specified data categories recited in the claims and referenced in Applicant’s remarks, Moazzami is not applied to demonstrate specific data types, as they are addressed through the combination of references. Likewise, Dhingra is not relied upon to disclose the particular models or arrangement of models.
In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAUN D SENSENIG whose telephone number is (571)270-5393. The examiner can normally be reached M-F: 10:00am-4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynda Jasmin can be reached on 571-272-6872. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.D.S/Examiner, Art Unit 3629 February 7, 2026
/NATHAN C UBER/Supervisory Patent Examiner, Art Unit 3626