Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Remarks
This Office Action is responsive to Applicant's Amendment filed on December 22, 2025, in which claims 11-15 and 17-25 were amended. Claims 11-25 are currently pending.
Response to Arguments
The previous rejections of claims 12-25 under 35 U.S.C. § 112(b) are hereby withdrawn in view of Applicant's amendments and remarks. Similarly, the previous rejections of claim 11 under 35 U.S.C. § 112(b) are hereby withdrawn in view of Applicant's amendments and remarks, with the exception of the rejection addressed by the amendment of “the physical implementation” to “a physical implementation”. “A generation method comprising: […] a physical implementation” remains ambiguous, and the amended claim limitation introduces additional ambiguity, as discussed further below. For these reasons, Examiner asserts that the rejection of claims 11-22 under 35 U.S.C. § 112(b) in view of “physical implementation of a function determined to obtain the decision support system” should be maintained.
Applicant’s arguments with respect to the rejection of claims 11-25 under 35 U.S.C. 101, presented in view of the amendment, have been considered but are not persuasive.
With respect to Applicant's arguments on p. 9 of the Remarks submitted 12/22/2025 that "The inclusion of the term "physical" with the term "implementation" makes it indisputable that the present claims cannot be carried by merely using mental steps", Examiner notes that the MPEP explicitly states that mental processes performed with the assistance of tools such as pencil and paper are not eligible subject matter (MPEP 2106.04(a)(2)(III)). Pencil and paper are themselves physical implementations, and the instant claims do not limit the "physical implementation" in a way that would make it unreasonable to interpret the scope of the claims as inclusive of pencil and paper. The MPEP also contemplates the use of generic computer systems as tools to perform mental processes and concludes that such use does not integrate the judicial exception into a practical application; thus, even if the "physical implementation" in the claims were narrowly construed as a computer system, it would amount to mere instructions to apply the judicial exception using a generic computer (MPEP 2106.05(f)).
With respect to Applicant's arguments on p. 9 of the Remarks submitted 12/22/2025 that the instant claims are limited to "evaluation of the performance of a physical system", Examiner respectfully disagrees. Nothing in the instant claims limits the evaluation to a physical system.
With respect to Applicant's arguments on p. 10 of the Remarks submitted 12/22/2025 that the perceived improvement is "an improvement over prior processes" where the process is "a claimed process for manufacturing a system", Examiner respectfully disagrees. First, the claim is wholly directed towards a mental process which can be performed entirely in the mind. Mere instructions for generic computer components to perform the mental process do not integrate the judicial exception into a practical application (MPEP 2106.05(f)) and do not improve the generic computer components (MPEP 2106.05(a): "It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements."; MPEP 2106.07(a)(II): "employing well-known computer functions to execute an abstract idea, even when limiting the use of the idea to one particular environment, does not integrate the exception into a practical application"). Second, Applicant's arguments on p. 10 of the Remarks do not provide objective evidence of an improvement but rather claim the idea of an outcome (MPEP 2106.05(a): "An important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome."). Finally, there is no clear structure for the manufactured system, such that the claimed system of claim 22 under the broadest reasonable interpretation appears to be inclusive of transitory propagating signals, which is explicitly reinforced by the instant specification at ¶0035. Examiner notes that signals per se are not patent eligible. For at least these reasons, and those further described below, Examiner asserts that it is reasonable and appropriate to maintain the rejection of claims 11-25 under 35 U.S.C. 101.
Applicant’s arguments with respect to the rejection of claims 11-25 under 35 U.S.C. 103, presented in view of the amendment, have been considered but are not persuasive.
With respect to Applicant's arguments on p. 11 of the Remarks submitted 12/22/2025 that "Pedro nowhere disclose or suggest any set of constraints to be respected by the neural network within the scope of the claimed invention", Examiner respectfully disagrees. As Applicant has not tied this argument to a particular claim, Examiner assumes the argument is directed to claim 15, which provides a Markush group of optional constraints. One of the optional constraints is simply a range constraint, "the output of the neural sub-network being comprised between a minimum value and a maximum value, the output of the neural sub-network being equal to the minimum value when all inputs of the neural sub-network are equal to the minimum value, and the output of the neural sub-network being equal to the maximum value when all the inputs of the neural sub-network are equal to the maximum value", which is anticipated by Pedro as highlighted in the Non-Final Office Action mailed 8/28/2025 ([Abstract] "An artificial neural network is constructed to approximate the decision-maker preferences"; [p. 550] "The canonical multiattribute utility theory (MAUT) assumes that there exists a function U, denoted utility function, which represents the decision-maker preferences"; [p. 554 §2.1] "The domain for Û is inferred from the domain of the instance i of the decision-making problem, i.e., from the available alternatives Ai. The domain is defined as the box constructed considering the minimum and maximum values for the available alternatives Ai in each problem dimension"; [p. 556] "each alternative as output to train the ANN Û which approximates U").
With respect to Applicant's arguments on p. 11 of the Remarks submitted 12/22/2025 that "Pedro nowhere discloses or suggests determining and physically implementing the determined function", Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Applicant's arguments do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections.
Examiner has interpreted the claim limitation as “a physical implementation of a function that is determined to obtain a decision support system”; that is, the claim is interpreted as stating that the physical implementation is determined, the determination being comprised by the method. This is clearly disclosed by Pedro, which is explicitly directed towards decision support ([p. 562] "With this approximation, no more queries to the decision-maker are necessary in further instances of the same decision problem"), the “function” being “determined” from decision-maker-provided ordinal information. Pedro explicitly states that the ANN is constructed (interpreted as physically implemented) ([p. 561] “This paper proposed a methodology for the construction of a function that approximates the decision-maker preferences using a partial ranking procedure and an artificial neural network. This function approximates the decision-maker preferences in a specified domain, in which the ANN is trained”). For at least these reasons, and those further detailed below, Examiner asserts that it is reasonable and appropriate to maintain the prior art rejections in view of Pedro.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 11-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding claim 11, “the generation method comprising: […] a physical implementation of a function determined to obtain the decision support system” is indefinite. First, it is unclear how a method could comprise a physical implementation: one of ordinary skill in the art could not reasonably determine what constitutes a physical implementation, how it could be performed as a method step, or when the function is “determined”. Second, the limitation itself is syntactically ambiguous. It could be read as “a physical implementation of a function, where the physical implementation is determined to obtain a decision support system” or as “a physical implementation of a function, where the function is determined to obtain a decision support system”. Both readings are grammatically plausible and materially different in scope. Because the scope of the claim cannot be reasonably determined, the claim is indefinite. In the interest of further examination, the claim is interpreted as stating that the physical implementation is determined, the determination being comprised by the method.
Regarding claim 11, "the group consisting of:" lacks antecedent basis. "A group consisting of" is recommended.
Regarding claim 15, "the group consisting of" lacks antecedent basis. Examiner notes that there is a "group consisting of" introduced in claim 11, however, this group contains completely different elements such that it does not provide antecedent basis for a "group consisting of: a monotonicity of [...]" in claim 15. "A group consisting of" is recommended.
Regarding claim 20, "the group consisting of batch gradient descent, [...]" lacks antecedent basis. "A group consisting of batch gradient descent, [...]" is recommended.
Regarding claim 23, "the group consisting of:" lacks antecedent basis. "A group consisting of" is recommended.
The remaining claims are rejected with respect to their dependence on the rejected claims.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 11-25 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter.
Regarding Claim 11: Claim 11 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 11 is directed to a method, which is a process, one of the statutory categories.
Step 2A Prong One Analysis: Claim 11, under its broadest reasonable interpretation, recites a series of mental processes. But for the recitation of generic computer components, the limitations of this claim encompass mental processes, including the following:
generating a multiple-criteria decision support system (observation, evaluation, and judgement)
a provision of an initial problem and training data solving the initial problem for particular cases, the initial problem being a problem of evaluating the quality of an existing system or of a system to be created, where the initial problem is a problem chosen from: a choice of a best alternative among a set of alternatives, a distribution of alternatives among preference classes, a storage of alternatives in order of preference, and the provision of an evaluation score of an alternative (observation, evaluation, and judgement),
a transcription of the initial problem in a form of a neural network and of a set of constraints to be satisfied by the neural network, so as to obtain a transcribed neural network (observation, evaluation and judgement)
determination of the function performed by the trained neural network (observation, evaluation, and judgement)
a physical implementation of the function determined to obtain a decision support system (observation, evaluation, and judgement; the claim is interpreted as stating that the physical implementation is determined, the determination being comprised by the method)
Therefore, claim 11 recites an abstract idea which is a judicial exception.
Step 2A Prong Two Analysis: Claim 11 recites additional elements “training of the transcribed neural network using training data, so as to obtain a trained neural network solving the initial problem”. However, these additional features are computer components recited at a high-level of generality, such that they amount to no more than mere instructions to apply the judicial exception using a generic computer component. An additional element that merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, does not integrate the judicial exception into a practical application (See MPEP 2106.05(f)). Therefore, claim 11 is directed to a judicial exception.
Step 2B Analysis: Claim 11 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the lack of integration of the abstract idea into a practical application, the additional elements recited in claim 11 amount to no more than mere instructions to apply the judicial exception using a generic computer component.
For the reasons above, claim 11 is rejected as being directed to non-patentable subject matter under §101. This rejection applies equally to dependent claims 12-22. The additional limitations of the dependent claims are addressed briefly below:
Dependent claim 12 recites additional observation, evaluation, and judgement “wherein the transcribed neural network includes a set of neural sub-networks, the transcribing including a formulation of the set of constraints to be satisfied by the neural network in the form of sub-constraints to be satisfied by each neural sub-network.”
Dependent claim 13 recites additional observation, evaluation, and judgement “wherein each neural sub-network includes hidden layers, a number of hidden layers being less than or equal to 5.”
Dependent claim 14 recites additional observation, evaluation, and judgement “the number of hidden layers is less than or equal to 3”.
Dependent claim 15 recites additional observation, evaluation, and judgement “wherein the sub-constraints to be satisfied by a neural sub-network are selected from the group consisting of: a monotonicity of a variation of output of the neural sub-network as a function of inputs of the neural sub-network, the output of the neural sub-network being comprised between a minimum value and a maximum value, the output of the neural sub-network being equal to the minimum value when all inputs of the neural sub-network are equal to the minimum value, and the output of the neural sub-network being equal to the maximum value when all the inputs of the neural sub-network are equal to the maximum value, and each sub-network being suitable for implementing weights, one constraint being that the weights are positive and that a sum of the weights is equal to 1”.
Dependent claim 16 recites additional observation, evaluation, and judgement “wherein the transcribed neural network includes a set of neural sub-networks arranged in a tree structure, each neural sub-network being a first neural sub-network or a second neural sub-network, each first neural sub-network performing a respective aggregation function, and each second neural sub-network performing a respective marginal utility function”.
Dependent claim 17 recites additional observation, evaluation, and judgement and mathematical calculations and relationships “wherein the respective aggregation function is a variable aggregation function selected from the group consisting of: a weighted sum of variables, a Choquet integral, a 2-additive Choquet integral, a weighted sum of combinations of min and max functions between k variables, for k at least equal to 2, a multi-linear model, a generalized additive independence function, and ordered weighted average”
Dependent claim 18 recites additional observation, evaluation, and judgement “wherein the respective marginal utility function is a monotone function or a function having three parts, a monotone first part, a constant second part and a monotone third part, the monotonicity of the first part being different from the monotonicity of the third part”
Dependent claim 19 recites additional instructions to apply the judicial exception using generic computer components, “wherein the training includes: a first training with the set of constraints of the transcription making the training of an intermediate neural network possible, a second training of the set of constraints by setting the neural network to the intermediate neural network, so as to obtain a trained set of constraints, and an adjustment of the trained neural network according to the difference between the set of constraints of the transcription and the trained set of constraints, so as to obtain an adjusted neural network, the trained neural network being the adjusted neural network”, which recites generic neural network training.
Dependent claim 20 recites additional mathematical calculations and relationships “wherein the training comprises employing at least one technique selected from the list consisting of batch gradient descent, stochastic gradient descent and mini-batch gradient descent”
Dependent claim 21 recites additional mathematical calculations and relationships “the training comprises the use of a weighted sum of sigmoids”
Dependent claim 22 recites additional observation, evaluation, and judgement “A decision support system generated by implementing a generation method according to claim 11”
Regarding claim 22, the decision support system under the broadest reasonable interpretation is interpreted as encompassing signals per se. This interpretation is supported by the instant specification at ¶0035 ("the computer 14 is an electronic computer suitable for handling and/or transforming data represented as electronic or physical quantities in registers of the computer 10 and/or memories into other similar data corresponding to physical data in the register memories or other types of displays, transmission devices or storage devices."). There is no indication that the decision support system cannot be a transitory signal; the claim is therefore rejected as being directed to a signal per se.
Regarding Claim 23: Claim 23 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 23 is directed to a method, which is a process, one of the statutory categories.
Step 2A Prong One Analysis: Claim 23, under its broadest reasonable interpretation, recites a series of mental processes. But for the recitation of generic computer components, the limitations of this claim encompass mental processes, including the following:
each first neural sub-network performing a respective aggregation function, the respective aggregation function being a variable aggregation function selected from the group consisting of: a weighted sum of the variables, a Choquet integral, a 2-additive Choquet integral, a weighted sum of combinations of min and max functions between k variables, for k at least equal to 2, a multi-linear model, a generalized additive independence function, and the ordered weighted average (observation, evaluation, and judgement and mathematical calculations and relationships)
each second neural sub-network performing a respective marginal utility function, the utility function preferentially being a monotone function or a function having three parts, a monotone first part, a constant second part and a monotone third part, the monotonicity of the first part being different from the monotonicity of the third part (observation, evaluation, and judgement),
Therefore, claim 23 recites an abstract idea which is a judicial exception.
Step 2A Prong Two Analysis: Claim 23 recites additional elements “a physical implementation of a neural network comprising a set of neural sub-networks arranged in a tree structure, each neural sub-network being a first neural sub-network or a second neural sub-network” (describes a generic neural network). However, these additional features are computer components recited at a high-level of generality, such that they amount to no more than mere instructions to apply the judicial exception using a generic computer component. An additional element that merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, does not integrate the judicial exception into a practical application (See MPEP 2106.05(f)). Therefore, claim 23 is directed to a judicial exception.
Step 2B Analysis: Claim 23 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the lack of integration of the abstract idea into a practical application, the additional elements recited in claim 23 amount to no more than mere instructions to apply the judicial exception using a generic computer component.
For the reasons above, claim 23 is rejected as being directed to non-patentable subject matter under §101. This rejection applies equally to dependent claims 24-25. The additional limitations of the dependent claims are addressed briefly below:
Dependent claim 24 recites additional observation, evaluation, and judgement and mathematical calculations and relationships “wherein the respective aggregation function is a variable aggregation function selected from the list consisting of: a weighted sum of the variables, a Choquet integral, a 2-additive Choquet integral, a weighted sum of combinations of min and max functions between k variables, for k at least equal to 2, a multi-linear model, a generalized additive independence function, and the ordered weighted average.”
Dependent claim 25 recites additional observation, evaluation, and judgement “wherein the marginal utility function is a monotone function or a function having three parts, a monotone first part, a constant second part and a monotone third part, the monotonicity of the first part being different from the monotonicity of the third part.”
Therefore, when considering the additional elements separately and in combination, they do not amount to significantly more than the judicial exception. Accordingly, claims 11-25 are rejected under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 11-15, 19, and 22 are rejected under 35 U.S.C. § 103 as being unpatentable over Pedro (“Modeling Decision-Maker Preferences through Utility Function Level Sets”, 2011).
Regarding claim 11, Pedro teaches A method for generating a multiple-criteria decision support system, the generation method comprising:([Abstract] "In this paper, we present a method based on the multi attribute utility theory to approximate the decision-maker preference function")
a provision of an initial problem ([p. 554 §2.1 Step 1] "In this domain, a fictitious decision-making problem is built, in which the alternatives are located as a grid. The queries to the DM are presented over this grid. The grid is constructed to find a representation for the utility function U in the desired domain. The number of alternatives in the grid is related to the quality of the approximation U" decision making problem interpreted as synonymous with initial problem.)
and training data solving the initial problem for particular cases, ([p. 558] "Figure 4 presents the normalized surface and level sets of function Û, modeled by the ANN. Considering the grid of alternatives and creating 30 sets, each one with 50 random alternatives, the KTD average value in 30 runs was 0.02. These sets were created to find the KTD value in different sets, beyond the training set" [p. 556] "We use a multi-objective optimization approach to balance the training data error")
the initial problem being a problem of evaluating the quality of an existing system or of a system to be created, ([p. 554 §2.1 Step 1] "In this domain, a fictitious decision-making problem is built, in which the alternatives are located as a grid. The queries to the DM are presented over this grid. The grid is constructed to find a representation for the utility function U in the desired domain. The number of alternatives in the grid is related to the quality of the approximation U")
where the initial problem is a problem selected from the group consisting of: the choice of the best alternative among a set of alternatives, the distribution of alternatives among preference classes, the storage of alternatives in order of preference, and the provision of an evaluation score of an alternative,([p. 551 §1.1] "given a problem, with its set of possible solutions (the alternatives)" [p. 552 §1] "As a consequence, a regression of utility function values may be meaningful, and may help to guide a search for the preferred alternative from a set of alternatives" [p. 553 §1.1] "The value of each alternative is assigned by a decision-maker, that formally corresponds to a preference function P. The best alternative x∗ ∈ A is the one which has attributes f(x∗) ∈ B that maximize the function P(f(x)) = P(x) in the set A")
a transcription of the initial problem in a form of a neural network and of a set of constraints to be satisfied by the neural network, so as to obtain a transcribed neural network,([p. 555 §2.2 Step 2] "This procedure creates a partition of the set A in at least p disjunct subsets. As the number of pivots is less than the number of alternatives, many alternatives will have the same ranking, providing a partial sorting. A total sorting could be obtained through a total ranking, but these results would be useless for the purpose of building a regression model using ANNs. For this reason, a partial sorting is strictly necessary" [p. 555 §2.3 Step 3] "In this paper, the regression tool chosen is an artificial neural network (ANN)" partial sorting for ANN interpreted as synonymous with transcription in the form of a neural network.)
training of the transcribed neural network using the training data, so as to obtain a trained neural network solving the initial problem,([p. 556 §2.3 Step 3] "We use the alternatives within the grid as input and the ranking level of each alternative as output to train the ANN Û which approximates U")
determination of the function performed by the trained neural network, and([p. 562] "Given a decision making problem, a partial ranking is built, leading to a partial sorting for the alternatives; - With a set of alternatives and the partial ranking process, the resulting function is obtained by the ANN")
a physical implementation of a function determined to obtain the decision support system.([p. 562] "With this approximation, no more queries to the decision-maker are necessary in further instances of the same decision problem").
Regarding claim 12, Pedro teaches The generation method according to claim 11, wherein the transcribed neural network includes a set of neural sub-networks, (Pedro [p. 556] "The ANN architecture chosen for the application was a multilayer perceptron (MLP), with one hidden layer with 30 neurons" Neurons interpreted as neural sub-networks. Layer interpreted as a set of neural sub-networks.)
the transcribing including a formulation of the set of constraints to be satisfied by the neural network in a form of sub-constraints to be satisfied by each neural sub-network.(Pedro [pp. 555-556 §2.3] "The key element of this paradigm is the structure of the information processing system, which is composed of a large number of highly interconnected processing elements (neurons) working together to solve specific problems").
Regarding claim 13, Pedro teaches The generation method according to claim 12, wherein each neural sub-network includes hidden layers, a number of hidden layers being less than or equal to 5.(Pedro [p. 556] "The ANN architecture chosen for the application was a multilayer perceptron (MLP), with one hidden layer with 30 neurons" One hidden layer is a number of hidden layers less than or equal to 5.).
Regarding claim 14, Pedro teaches The generation method according to claim 13, wherein the number of hidden layers is less than or equal to 3.(Pedro [p. 556] "The ANN architecture chosen for the application was a multilayer perceptron (MLP), with one hidden layer with 30 neurons").
Regarding claim 15, Pedro teaches The generation method according to claim 12, wherein the sub-constraints to be satisfied by a neural sub-network are selected from the group consisting of: a monotonicity of a variation of output of the neural sub-network as a function of inputs of the neural sub-network, the output of the neural sub-network being comprised between a minimum value and a maximum value, the output of the neural sub-network being equal to the minimum value when all inputs of the neural sub-network are equal to the minimum value, and the output of the neural sub-network being equal to the maximum value when all the inputs of the neural sub-network are equal to the maximum value, and each sub-network being suitable for implementing weights, one constraint being that the weights are positive and that the sum of the weights is equal to 1.(Pedro [p. 554 §2.1] "The domain for Û is inferred from the domain of the instance i of the decision-making problem, i.e., from the available alternatives Ai. The domain is defined as the box constructed considering the minimum and maximum values for the available alternatives Ai in each problem dimension" Output layer neuron output interpreted as an alternative, which is explicitly constrained by a minimum and maximum.).
Regarding claim 19, Pedro teaches The generation method according to claim 11, wherein the training includes: a first training with the set of constraints of the transcription making training of an intermediate neural network possible, a second training of the set of constraints by setting the neural network to the intermediate neural network, so as to obtain a trained set of constraints, and an adjustment of the trained neural network according to a difference between the set of constraints of the transcription and the trained set of constraints, so as to obtain an adjusted neural network, the trained neural network being the adjusted neural network.(Pedro [p. 556 §2.3] "The ANN architecture chosen for the application was a multilayer perceptron (MLP), with one hidden layer with 30 neurons. We use a multi-objective optimization approach to balance the training data error and the weight vector norm, to avoid underfitting and overfitting. The selection of the most appropriate solution within the Pareto-optimal set is performed by minimum validation error" Error interpreted as difference between set of constraints of the transcription and the trained set of constraints.).
Regarding claim 22, Pedro teaches A decision support system generated by implementing a generation method according to claim 11.(Pedro [Abstract] "In this paper, we present a method based on the multi attribute utility theory to approximate the decision-maker preference function [...] Multicriteria decision analysis").
Claims 16-18 and 23-25 are rejected under 35 U.S.C. § 103 as being unpatentable over the combination of Pedro and Chen (“Integration of genetic algorithms and neural networks for the formation of the classifier of the hierarchical Choquet integral”, 2020).
Regarding claim 16, Pedro teaches The generation method according to claim 11, wherein the transcribed neural network includes a set of neural sub-networks arranged in a tree structure, each neural sub-network being a first neural sub-network or a second neural sub-network, (Pedro: see the FIG. 1 architecture, which shows each neuron arranged in a "tree" structure consistent with the instant specification ([¶0169] "The tree structure is a tree structure when there is a single path between a particular vertex to all other vertices, and each non-leaf vertex has at least two child nodes.")).
However, Pedro doesn't explicitly teach each first neural sub-network performing a respective aggregation function, and
each second neural sub-network performing a respective marginal utility function.
Chen, in the same field of endeavor, teaches each first neural sub-network performing a respective aggregation function, and([p. 55 §4.6] "The network structure of the NN used is depicted in Fig. 5. The input layer is composed of sub-Choquet integrals, which are determined by GAs and HLMS. Then, we add neurons (e.g., hierarchical sub-Choquet integrals) and rectified linear unit (ReLU) activation functions in the hidden layers" [p. 47] "For all A, B ∈ N, if A ⊆ B then m(A) ≤ m(B) (monotonicity)" Neuron sub-Choquet integral interpreted as a sub-network performing a respective aggregation function.)
each second neural sub-network performing a respective marginal utility function.([p. 55 §4.6] "The network structure of the NN used is depicted in Fig. 5. The input layer is composed of sub-Choquet integrals, which are determined by GAs and HLMS. Then, we add neurons (e.g., hierarchical sub-Choquet integrals) and rectified linear unit (ReLU) activation functions in the hidden layers" [p. 47] "For all A, B ∈ N, if A ⊆ B then m(A) ≤ m(B) (monotonicity)" ReLU is monotonic.).
Pedro and Chen are both directed towards using neural networks for multi-criteria decision making; therefore, Pedro and Chen are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Pedro with the teachings of Chen by using neurons for sub-Choquet integrals and applying a ReLU activation function. ReLU is a well-known and common activation function in the art, which one of ordinary skill in the art would have found obvious to use before the effective filing date of the claimed invention. This is explicitly reinforced by Chen, who provides additional motivation for the combination ([p. 59] "the selected features are transformed to form different sub-Choquet integrals (which form the input of NN). Next, compared to the NN, the proposed model is explainable, as the neurons are viewed as the black box in NN. In contrast, the neurons here are represented by sub-Choquet integrals. Furthermore, the accuracy criterion also indicates that the proposed model was significantly better than the naïve Bayes and decision tree approaches. Finally, the objective of the proposed method is accuracy, which is more intuitive than others in the classification problem"). This motivation for combination also applies to the remaining claims which depend on this combination.
Regarding claim 17, the combination of Pedro and Chen teaches The generation method according to claim 16, wherein the respective aggregation function is a variable aggregation function selected from the group consisting of: a weighted sum of variables, a Choquet integral, a 2-additive Choquet integral, a weighted sum of combinations of min and max functions between k variables, for k at least equal to 2, a multi-linear model, a generalized additive independence function, and an ordered weighted average.(Pedro [p. 551] "an aggregation of the multiple criteria is performed using some pre-defined aggregation function [...] It is worthy to notice that an important existing approach for the problem of representing non-linear dependencies between different criteria in decision problems is based on Choquet integrals [...] Amongst MAUT-based methods, we can cite: Smarts and Smarter [6], Weighted Sum Model [7], Weighted Product Model [12]").
Regarding claim 18, the combination of Pedro and Chen teaches The generation method according to claim 16, wherein the respective marginal utility function is a monotone function or a function having three parts, a monotone first part, a constant second part and a monotone third part, the monotonicity of the first part being different from the monotonicity of the third part.(Chen [p. 55 §4.6] "The network structure of the NN used is depicted in Fig. 5. The input layer is composed of sub-Choquet integrals, which are determined by GAs and HLMS. Then, we add neurons (e.g., hierarchical sub-Choquet integrals) and rectified linear unit (ReLU) activation functions in the hidden layers" [p. 47] "For all A, B ∈ N, if A ⊆ B then m(A) ≤ m(B) (monotonicity)" ReLU is monotonic.).
Regarding claim 23, Pedro teaches comprising a set of neural sub-networks arranged in a tree structure, each neural sub-network being a first neural sub-network or a second neural sub-network, (See the FIG. 1 architecture, which shows each neuron arranged in a "tree" structure consistent with the instant specification ([¶0169] "The tree structure is a tree structure when there is a single path between a particular vertex to all other vertices, and each non-leaf vertex has at least two child nodes.")),
each first neural sub-network performing a respective aggregation function, the respective aggregation function being a variable aggregation function selected from the group consisting of: a weighted sum of variables, a Choquet integral, a 2-additive Choquet integral, a weighted sum of combinations of min and max functions between k variables, for k at least equal to 2, a multi-linear model, a generalized additive independence function, and an ordered weighted average, and([p. 551] "an aggregation of the multiple criteria is performed using some pre-defined aggregation function [...] It is worthy to notice that an important existing approach for the problem of representing non-linear dependencies between different criteria in decision problems is based on Choquet integrals [...] Amongst MAUT-based methods, we can cite: Smarts and Smarter [6], Weighted Sum Model [7], Weighted Product Model [12]").
However, Pedro does not explicitly teach A multiple-criteria decision support system comprising a physical implementation of a neural network, and
each second neural sub-network performing a respective marginal utility function, the utility function being a monotone function or a function having three parts, a monotone first part, a constant second part and a monotone third part, the monotonicity of the first part being different from the monotonicity of the third part.
Chen, in the same field of endeavor, teaches A multiple-criteria decision support system comprising a physical implementation of a neural network ([p. 57] "The experimental environment was an Intel eight-core desktop processor with an i7-7700 CPU at 3.60 GHz and with 32 GB memory")
each second neural sub-network performing a respective marginal utility function, the utility function being a monotone function or a function having three parts, a monotone first part, a constant second part and a monotone third part, the monotonicity of the first part being different from the monotonicity of the third part.([p. 55 §4.6] "The network structure of the NN used is depicted in Fig. 5. The input layer is composed of sub-Choquet integrals, which are determined by GAs and HLMS. Then, we add neurons (e.g., hierarchical sub-Choquet integrals) and rectified linear unit (ReLU) activation functions in the hidden layers" [p. 47] "For all A, B ∈ N, if A ⊆ B then m(A) ≤ m(B) (monotonicity)" ReLU is monotonic.).
Pedro and Chen are both directed towards using neural networks for multi-criteria decision making; therefore, Pedro and Chen are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Pedro with the teachings of Chen by using neurons for sub-Choquet integrals and applying a ReLU activation function. ReLU is a well-known and common activation function in the art, which one of ordinary skill in the art would have found obvious to use before the effective filing date of the claimed invention. This is explicitly reinforced by Chen, who provides additional motivation for the combination ([p. 59] "the selected features are transformed to form different sub-Choquet integrals (which form the input of NN). Next, compared to the NN, the proposed model is explainable, as the neurons are viewed as the black box in NN. In contrast, the neurons here are represented by sub-Choquet integrals. Furthermore, the accuracy criterion also indicates that the proposed model was significantly better than the naïve Bayes and decision tree approaches. Finally, the objective of the proposed method is accuracy, which is more intuitive than others in the classification problem"). This motivation for combination also applies to the remaining claims which depend on this combination.
Regarding claim 24, the combination of Pedro and Chen teaches The multiple-criteria decision support system according to claim 23, wherein the respective aggregation function is a variable aggregation function selected from the group consisting of: a weighted sum of the variables, a Choquet integral, a 2-additive Choquet integral, a weighted sum of combinations of min and max functions between k variables, for k at least equal to 2, a multi-linear model, a generalized additive independence function, and the ordered weighted average.(Pedro [p. 551] "an aggregation of the multiple criteria is performed using some pre-defined aggregation function [...] It is worthy to notice that an important existing approach for the problem of representing non-linear dependencies between different criteria in decision problems is based on Choquet integrals [...] Amongst MAUT-based methods, we can cite: Smarts and Smarter [6], Weighted Sum Model [7], Weighted Product Model [12]").
Regarding claim 25, the combination of Pedro and Chen teaches The multiple-criteria decision support system according to claim 23, wherein the respective marginal utility function is a monotone function or a function having three parts, a monotone first part, a constant second part and a monotone third part, the monotonicity of the first part being different from the monotonicity of the third part.(Chen [p. 55 §4.6] "The network structure of the NN used is depicted in Fig. 5. The input layer is composed of sub-Choquet integrals, which are determined by GAs and HLMS. Then, we add neurons (e.g., hierarchical sub-Choquet integrals) and rectified linear unit (ReLU) activation functions in the hidden layers" [p. 47] "For all A, B ∈ N, if A ⊆ B then m(A) ≤ m(B) (monotonicity)" ReLU is monotonic.).
Claims 20 and 21 are rejected under 35 U.S.C. § 103 as being unpatentable over the combination of Pedro and Guo (“An interpretable machine learning framework for modelling human decision behavior”, 2019).
Regarding claim 20, Pedro teaches The generation method according to claim 11.
However, Pedro doesn't explicitly teach wherein the training comprises employing at least one technique selected from the group consisting of batch gradient descent, stochastic gradient descent and mini-batch gradient descent.
Guo, in the same field of endeavor, teaches the training comprises employing at least one technique selected from the group consisting of batch gradient descent, stochastic gradient descent and mini-batch gradient descent. ([p. 13 §3.2] "We can adopt a variety of optimization methods to minimize Eq.(8), such as Stochastic Gradient Descent (SGD)").
Pedro and Guo are both directed towards using neural networks for multi-criteria decision making; therefore, Pedro and Guo are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Pedro with the teachings of Guo by using stochastic gradient descent for training the neural network. Stochastic gradient descent is a common training algorithm which one of ordinary skill in the art would have found obvious to use before the effective filing date of the claimed invention. This is explicitly reinforced by Guo, who provides additional motivation for the combination ([p. 13 §3.2] "With the proposed model, the DM can know what attributes are more important for the prediction, what values of an attribute are positively or negatively associated to the prediction, and where the convexity and concavity of the function are changed"). This motivation for combination also applies to the remaining claims which depend on this combination.
Regarding claim 21, Pedro teaches The generation method according to claim 11.
However, Pedro doesn't explicitly teach wherein the training comprises using a weighted sum of sigmoids.
Guo, in the same field of endeavor, teaches wherein the training comprises using a weighted sum of sigmoids. ([p. 13 §3.2] "where σ(·) is a sigmoid function. To estimate the parameters, we minimize the mean square error (MSE)" See Eqn. 7 and 8).
Pedro and Guo are both directed towards using neural networks for multi-criteria decision making; therefore, Pedro and Guo are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Pedro with the teachings of Guo by using a weighted sum of sigmoids in training the neural network. The sigmoid is a well-known and common function in the art which one of ordinary skill in the art would have found obvious to use before the effective filing date of the claimed invention. This is explicitly reinforced by Guo, who provides additional motivation for the combination ([p. 13 §3.2] "With the proposed model, the DM can know what attributes are more important for the prediction, what values of an attribute are positively or negatively associated to the prediction, and where the convexity and concavity of the function are changed"). This motivation for combination also applies to the remaining claims which depend on this combination.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIDNEY VINCENT BOSTWICK whose telephone number is (571)272-4720. The examiner can normally be reached M-F 7:30am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Miranda Huang can be reached on (571)270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SIDNEY VINCENT BOSTWICK/Examiner, Art Unit 2124
/MIRANDA M HUANG/Supervisory Patent Examiner, Art Unit 2124