DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-20 are pending and are examined herein.
Claims 1-20 are rejected under 35 U.S.C. 101 as being directed to an abstract idea without significantly more.
Claims 1-6, 9, 11-14, and 16-19 are rejected under 35 U.S.C. 103.
Claims 1-7, 9, 11-14, and 16-19 are rejected on the ground of nonstatutory double patenting.
Examiner Remarks
This Office action includes a Non-statutory Double Patenting rejection. Please note that MPEP § 804 states:
A complete response to a nonstatutory double patenting (NSDP) rejection is either a reply by applicant showing that the claims subject to the rejection are patentably distinct from the reference claims or the filing of a terminal disclaimer in accordance with 37 CFR 1.321 in the pending application(s) with a reply to the Office action (see MPEP § 1490 for a discussion of terminal disclaimers). Such a response is required even when the nonstatutory double patenting rejection is provisional. As filing a terminal disclaimer, or filing a showing that the claims subject to the rejection are patentably distinct from the reference application’s claims, is necessary for further consideration of the rejection of the claims, such a filing should not be held in abeyance. Only objections or requirements as to form not necessary for further consideration of the claims may be held in abeyance until allowable subject matter is indicated. Replies with an omission should be treated as provided in MPEP § 714.03.
Information Disclosure Statement
The attached information disclosure statement(s) (IDS) is/are in compliance with the provisions of 37 CFR 1.97. Accordingly, the attached information disclosure statement(s) is/are being considered by the examiner.
Claim Rejections - 35 USC § 101 – Abstract Idea
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis
Each of the claims falls within one of the four statutory categories (i.e., a process, machine, manufacture, or composition of matter).
Step 2 Analysis
Claim 1 includes the following recitation of an abstract idea:
determining, ..., a risk indicator for a target entity from predictor variables associated with the target entity, wherein the risk indicator indicates a level of risk associated with the target entity, (This is practical to perform in the human mind under its broadest reasonable interpretation. This is a recitation of a mental process.)
...performing iterative adjustments of parameters of the neural network model to minimize a loss function of the neural network model subject to a path constraint, the path constraint requiring monotonicity in a relationship between (i) values of each common factor of the predictor variables from the training vectors and (ii) the training outputs of the training vectors, the relationship defined by the loading coefficients and the parameters of the neural network model; and (This is a recitation of a mathematical concept.)
generating, for the target entity, explanatory data indicating relationships between changes in the risk indicator and changes in at least some of the common factors; and (This is practical to perform in the human mind under its broadest reasonable interpretation. This is a recitation of a mental process.)
Claim 1 recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea:
A method that includes one or more processing devices performing operations comprising: (This is a high level recitation of generic computer components for performing the abstract idea. This does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(f).)
... using a neural network model trained using a training process... wherein the training process includes operations comprising: (This is a high level recitation of generic computer components for performing the abstract idea. This does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(f).)
accessing training vectors having elements representing training predictor variables and training outputs, wherein a particular training vector comprises particular values for the predictor variables, respectively, and a particular training output corresponding to the particular values, obtaining loading coefficients of common factors of the training predictor variables in the training vectors, and (This is a recitation of using data of a particular type or source to perform the abstract idea. This is an attempt to limit the abstract idea to a particular field of use or technological environment. See MPEP 2106.05(h).)
...transmitting, to a remote computing device, a responsive message including at least the risk indicator for use in controlling access of the target entity to one or more interactive computing environments. (This is insignificant extra-solution activity. See MPEP 2106.05(g). Moreover, sending or receiving data is well-understood, routine, conventional as evidenced by the court cases cited at MPEP 2106.05(d), example i. Receiving or transmitting data.)
Claim 1 does not reflect an improvement to computer technology or any other technology.
Claim 2 recites at least the abstract idea identified above in the claim upon which it depends.
Claim 2 recites the following additional elements which, considered individually and as an ordered combination with the additional elements from the claim upon which it depends, do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea:
wherein the neural network model comprises at least an input layer, one or more hidden layers, and an output layer, and wherein the parameters for the neural network model comprise weights of connections among the input layer, the one or more hidden layers, and the output layer. (This is a high level recitation of generic computer components for performing the abstract idea. This does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(f).)
Claim 2 does not reflect an improvement to computer technology or any other technology.
Claim 3 recites at least the abstract idea identified above in the claim upon which it depends, and further recites
prior to performing the iterative adjustments of the parameters of the neural network model: calculating a transform matrix by decomposing a loading matrix formed by the loading coefficients of the common factors of the training predictor variables; and transforming the training predictor variables by applying the transform matrix to the training predictor variables. (This is a recitation of a mathematical concept.)
Claim 3 does not recite further additional elements which might integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
Claim 3 does not reflect an improvement to computer technology or any other technology.
Claim 4 recites at least the abstract idea identified above in the claim upon which it depends, and further recites
wherein an iterative adjustment comprises setting the weights of connections among the one or more hidden layers and the output layer that are negative to zero. (This is a recitation of a mathematical concept.)
Claim 4 does not recite further additional elements which might integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
Claim 4 does not reflect an improvement to computer technology or any other technology.
Claim 5 recites at least the abstract idea identified above in the claim upon which it depends, and further recites
identifying a subset of the weights of connections between the input layer and a first hidden layer of the one or more hidden layers; and (This is practical to perform in the human mind under its broadest reasonable interpretation. This is a recitation of a mental process.)
setting a negative weight in the subset of the weights of connections to zero. (This is a recitation of a mathematical concept.)
Claim 5 does not recite further additional elements which might integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
Claim 5 does not reflect an improvement to computer technology or any other technology.
Claim 6 recites at least the abstract idea identified above in the claim upon which it depends, and further recites
wherein an iterative adjustment comprises adjusting the parameters of the neural network model so that a value of a modified loss function in a current iteration is smaller than the value of the modified loss function in another iteration, and wherein the modified loss function comprises the loss function of the neural network model and the path constraint. (This is a recitation of a mathematical concept.)
Claim 6 does not recite further additional elements which might integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
Claim 6 does not reflect an improvement to computer technology or any other technology.
Claim 7 recites at least the abstract idea identified above in the claim upon which it depends, and further recites
wherein the path constraint is added into the modified loss function through a hyperparameter, and (This is a recitation of a mathematical concept.)
...setting the hyperparameter to a random initial value prior to performing the iterative adjustments; (This is a recitation of a mathematical concept.)
in the iterative adjustment, determining a value of the loss function of the neural network model and (This is a recitation of a mathematical concept.)
a number of paths violating the path constraint based on a particular set of parameter values associated with the random initial value of the hyperparameter; (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components. This is a recitation of a mental process. This is also a recitation of a mathematical concept.)
determining that the value of the loss function is greater than a threshold loss function value and that the number of paths violating the path constraint is zero; (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components. This is a recitation of a mental process. This is also a recitation of a mathematical concept.)
updating the hyperparameter by decrementing the value of the hyperparameter; and (This is a recitation of a mathematical concept.)
determining an additional set of parameter values for the neural network model based on the updated hyperparameter. (This is practical to perform in the human mind under its broadest reasonable interpretation. This is a recitation of a mental process.)
Claim 7 recites the following additional elements which, considered individually and as an ordered combination with the additional elements from the claim upon which it depends, do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea:
... wherein training the neural network model further comprises: (This is a high level recitation of generic computer components for performing the abstract idea. This does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(f).)
Claim 7 does not reflect an improvement to computer technology or any other technology.
Claim 8 recites at least the abstract idea identified above in the claim upon which it depends, and further recites
in the iterative adjustment, determining a value of the loss function of the neural network model and (This is a recitation of a mathematical concept.)
a number of paths violating the path constraint based on the particular set of parameter values associated with the random initial value of the hyperparameter; (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components. This is a recitation of a mental process. This is also a recitation of a mathematical concept.)
determining that the value of the loss function is lower than a threshold loss function value and that the number of paths violating the path constraint is non-zero; (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components. This is a recitation of a mental process. This is also a recitation of a mathematical concept.)
updating the hyperparameter by incrementing the value of the hyperparameter; and (This is a recitation of a mathematical concept.)
determining a second additional set of parameter values for the neural network model based on the updated hyperparameter. (This is practical to perform in the human mind under its broadest reasonable interpretation. This is a recitation of a mental process.)
Claim 8 does not recite further additional elements which might integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
Claim 8 does not reflect an improvement to computer technology or any other technology.
Claim 9 recites at least the abstract idea identified above in the claim upon which it depends, and further recites
performing factor analysis on the training predictor variables to obtain the loading coefficients of the common factors of the training predictor variables, or (This is a recitation of a mathematical concept.)
Claim 9 recites the following additional elements which, considered individually and as an ordered combination with the additional elements from the claim upon which it depends, do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea:
receiving the loading coefficients of the common factors of the training predictor variables. (This is insignificant extra-solution activity. See MPEP 2106.05(g). Moreover, sending or receiving data is well-understood, routine, conventional as evidenced by the court cases cited at MPEP 2106.05(d), example i. Receiving or transmitting data.)
Claim 9 does not reflect an improvement to computer technology or any other technology.
Claim 10 recites at least the abstract idea identified above in the claim upon which it depends, and further recites
wherein performing the factor analysis on the training predictor variables comprises applying an expectation-maximization (EM) algorithm, where a maximization step of the EM algorithm is performed by applying a least absolute shrinkage and selection operator (LASSO) regression on the training predictor variables and the common factors by introducing an L1 norm of a loading matrix formed by the loading coefficients of the common factors to a loss function of the maximization step; and solving the maximization step by applying a closed-form solution of the LASSO regression. (This is a recitation of a mathematical concept.)
Claim 10 does not recite further additional elements which might integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
Claim 10 does not reflect an improvement to computer technology or any other technology.
Claim 11 recites substantially similar subject matter to claim 1 including substantially the same abstract idea.
Claim 11 recites the following additional elements which, considered individually and as an ordered combination with the additional elements addressed above with respect to claim 1, do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea:
A system comprising: a processing device; and a memory device in which instructions executable by the processing device are stored for causing the processing device to: (This is a high level recitation of generic computer components for performing the abstract idea. This does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(f).)
Claim 11 does not reflect an improvement to computer technology or any other technology.
Regarding claims 12-15, the rejection of claim 11 is incorporated herein. Claims 12-15 recite substantially similar subject matter to claims 2, 4 (including its parent claim 3), 6, and 10, respectively, and are rejected with the same rationale.
Claim 16 recites substantially similar subject matter to claim 1 including substantially the same abstract idea.
Claim 16 recites the following additional elements which, considered individually and as an ordered combination with the additional elements addressed above with respect to claim 1, do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea:
A non-transitory computer-readable storage medium having program code that is executable by a processor device to cause a computing device to perform operations, the operations comprising: (This is a high level recitation of generic computer components for performing the abstract idea. This does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(f).)
Claim 16 does not reflect an improvement to computer technology or any other technology.
Regarding claims 17-20, the rejection of claim 16 is incorporated herein. Claims 17-20 recite substantially similar subject matter to claims 2, 4 (including its parent claim 3), 6, and 10, respectively, and are rejected with the same rationale.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-5, 9, 11-13, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over “Turner” (WO 2018/084867 A1) in view of “Koottayi” (US 2018/0288063 A1).
Regarding claim 1, Turner teaches
A method that includes one or more processing devices performing operations comprising: (Turner, Abstract, [0032])
determining, using a neural network model trained using a training process, a risk indicator for a target entity from predictor variables associated with the target entity, wherein the risk indicator indicates a level of risk associated with the target entity, (Turner, [0014] describes outputting a risk indicator using an optimized/trained neural network. See also [0015-0018] for an overview of the model training. Also, [0018, 0042] indicates that the risk scores are for an entity such as an individual or business.)
wherein the training process includes operations comprising: accessing training vectors having elements representing training predictor variables and training outputs, wherein a particular training vector comprises particular values for the predictor variables, respectively, and a particular training output corresponding to the particular values, (Turner, [0002, 0027] describes the training data as including values of both predictor variables and response variables (i.e., output variables).)
obtaining loading coefficients of common factors of the training predictor variables in the training vectors, and (Turner, [0118-0119, 0129]. The loadings (also called factor loadings) correspond to the loading coefficients.)
performing iterative adjustments of parameters of the neural network model to minimize a loss function of the neural network model subject to a path constraint, the path constraint requiring monotonicity in a relationship between (i) values of each common factor of the predictor variables from the training vectors and (ii) the training outputs of the training vectors, the relationship defined by the loading coefficients and the parameters of the neural network model; and (Turner, [0014, 0016-0017, 0020-0021] describe iteratively adjusting the weights to increase an accuracy (mathematically equivalent to decreasing an inaccuracy) to satisfy monotonicity constraints representing a relationship between the predictor variables and the modeled outputs. The loading coefficients are described at [0118-0119, 0129]. [0131] describes use of the common factors and the relationship between the factors and the modeled score Y (which depends on the weights of the neural network as shown in equation (7) in [0129]) in constraining the optimization of the model.)
generating, for the target entity, explanatory data indicating relationships between changes in the risk indicator and changes in at least some of the common factors; and (Turner, Abstract, [0005, 0079, 0085])
Turner does not appear to explicitly teach
transmitting, to a remote computing device, a responsive message including at least the risk indicator for use in controlling access of the target entity to one or more interactive computing environments.
However, Koottayi, which is directed to analogous art, teaches
transmitting, to a remote computing device, a responsive message including at least the risk indicator for use in controlling access of the target entity to one or more interactive computing environments. (Koottayi, [0006] describes transmitting a threat perception score based on a computed risk for a user to control access to a resource.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Turner by Koottayi because doing so allows for monitoring user access and detecting threats in real-time as described by Koottayi at [0006].
Regarding claim 2, the rejection of claim 1 is incorporated herein. Furthermore, Turner teaches
wherein the neural network model comprises at least an input layer, one or more hidden layers, and an output layer, and wherein the parameters for the neural network model comprise weights of connections among the input layer, the one or more hidden layers, and the output layer. (Turner, [0014, 0053], Figure 5)
Regarding claim 3, the rejection of claim 2 is incorporated herein. Furthermore, Turner teaches
wherein the training process includes further operations comprising, prior to performing the iterative adjustments of the parameters of the neural network model: calculating a transform matrix by decomposing a loading matrix formed by the loading coefficients of the common factors of the training predictor variables; and transforming the training predictor variables by applying the transform matrix to the training predictor variables. (Turner, [0118-0119, 0129] describes transforming the predictor variables into intermediate values using the coefficients β11 through βnm. The coefficients β11 through βnm make up the components of an n×m matrix.)
Regarding claim 4, the rejection of claim 3 is incorporated herein. Furthermore, Turner teaches
wherein an iterative adjustment comprises setting the weights of connections among the one or more hidden layers and the output layer that are negative to zero. (Turner, [0078], Figure 3, step 314. The weights of factors identified as requiring a monotonicity constraint may be constrained to zero when they violate the non-negativity requirement.)
Regarding claim 5, the rejection of claim 4 is incorporated herein. Furthermore, Turner teaches
wherein an iterative adjustment further comprises: identifying a subset of the weights of connections between the input layer and a first hidden layer of the one or more hidden layers; and setting a negative weight in the subset of the weights of connections to zero. (Turner, [0078], Figure 3, step 314. The weights of factors identified as requiring a monotonicity constraint may be constrained to zero when they violate the non-negativity requirement. A determination that a weight violates the non-negativity requirement is an identification of that weight.)
Regarding claim 9, the rejection of claim 1 is incorporated herein. Furthermore, Turner teaches
wherein obtaining loading coefficients of common factors of the training predictor variables in the training vectors comprises one or more of:
performing factor analysis on the training predictor variables to obtain the loading coefficients of the common factors of the training predictor variables, or (Turner, [0005, 0015, 0034])
receiving the loading coefficients of the common factors of the training predictor variables. (Turner, [0005, 0015, 0034]. [0034-0036] indicate that the data may also be received from other devices.)
Regarding claim 11, Turner teaches
A system comprising: a processing device; and a memory device in which instructions executable by the processing device are stored for causing the processing device to: (Figure 1, [0030-0032].)
The remainder of claim 11 is substantially similar to claim 1; claim 11 is rejected with the same rationale.
Regarding claims 12-13, the rejection of claim 11 is incorporated herein. Claims 12-13 recite substantially similar subject matter to claims 2 and 4 (including parent claim 3), respectively, and are rejected with the same rationale.
Regarding claim 16, Turner teaches
A non-transitory computer-readable storage medium having program code that is executable by a processor device to cause a computing device to perform operations, the operations comprising: (Figure 1, [0030-0032].)
The remainder of claim 16 is substantially similar to claim 1; claim 16 is rejected with the same rationale.
Regarding claims 17-18, the rejection of claim 16 is incorporated herein. Claims 17-18 recite substantially similar subject matter to claims 2 and 4 (including parent claim 3), respectively, and are rejected with the same rationale.
Claims 6, 14, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over “Turner” (WO 2018/084867 A1) in view of “Koottayi” (US 2018/0288063 A1), further in view of “Gupta” (“How to Incorporate Monotonicity in Deep Networks While Preserving Flexibility?”, arXiv:1909.10662v3).
Regarding claim 6, the rejection of claim 2 is incorporated herein. Furthermore, Turner teaches
wherein an iterative adjustment comprises adjusting the parameters of the neural network model so that a value of a ... loss function in a current iteration is smaller than the value of the ... loss function in another iteration (Turner, [0014, 0016-0017, 0020-0021] describe iteratively adjusting the weights to increase an accuracy (mathematically equivalent to decreasing an inaccuracy) to satisfy monotonicity constraints representing a relationship between the predictor variables and the modeled outputs.)
The combination of Turner and Koottayi does not appear to explicitly teach
a modified loss function ...wherein the modified loss function comprises the loss function of the neural network model and the path constraint.
However, Gupta, which is directed to analogous art, teaches
a modified loss function ...wherein the modified loss function comprises the loss function of the neural network model and the path constraint. (Gupta, Section 3 describes a loss function in equation (1) which could be used to train a neural network that incorporates monotonic knowledge. The path constraint is represented by the summation term and the loss is represented by LNN.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Turner and Koottayi by Gupta because it allows for learning “differentiated individual trends and produces smoother conditional curves which are important for personalized decisions, while preserving the flexibility of deep networks” (Gupta, Abstract).
Regarding claims 14 and 19, the rejections of claims 12 and 17, respectively, are incorporated herein. Claims 14 and 19 recite substantially similar subject matter to claim 6 and are rejected with the same rationale.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-7, 9, 11-14, and 16-19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 2, 2, 2, 4, 7, 1, 9, 10, 10, 13, 16, 20, 20, and 20, respectively, of U.S. Patent No. 11,468,315 in view of “Turner” (WO 2018/084867 A1).
The instant claims recite claim language substantially overlapping with that of the respective patented claims. The only differences are taught by Turner as applied in the rejection under 35 USC 103 presented above. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the cited patented claims in view of Turner as indicated because the techniques taught by Turner provide a performance improvement, as described by Turner at [0019].
Claims 1-6, 9, 11-14, and 16-19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 16, 20, 20, 20, 20, 17, 16, 10, 11, 11, 14, 1, 2, 2, and 4, respectively, of U.S. Patent No. 11,010,669 in view of “Turner” (WO 2018/084867 A1).
The instant claims recite claim language substantially overlapping with that of the respective patented claims. The only differences are taught by Turner as applied in the rejection under 35 USC 103 presented above. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the cited patented claims in view of Turner as indicated because the techniques taught by Turner provide a performance improvement, as described by Turner at [0019].
Claims 1-7, 9, 11-14, and 16-19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 2, 2, 2, 4, 7, 1, 9, 10, 10, 13, 16, 16, 16, and 17, respectively, of U.S. Patent No. 10,558,913 in view of “Turner” (WO 2018/084867 A1).
The instant claims recite claim language substantially overlapping with that of the respective patented claims. The only differences are taught by Turner as applied in the rejection under 35 USC 103 presented above. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the cited patented claims in view of Turner as indicated because the techniques taught by Turner provide a performance improvement, as described by Turner at [0019].
Claims 1-6, 9, 11-14, and 16-19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 16, 16, 16, 16, 16, 16, 16, 1, 7, 12, 12, 9, 15, 15, and 15, respectively, of U.S. Patent No. 11,868,891 in view of “Turner” (WO 2018/084867 A1).
The instant claims recite claim language substantially overlapping with that of the respective patented claims. The only differences are taught by Turner as applied in the rejection under 35 USC 103 presented above. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the cited patented claims in view of Turner as indicated because the techniques taught by Turner provide a performance improvement, as described by Turner at [0019].
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Kennel (US 2021/0295175 A1) – [0084] describes using neural networks trained with monotonicity constraints to improve explainability.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Markus A. Vasquez, whose telephone number is (303) 297-4432. The examiner can normally be reached Monday through Friday, 9 AM to 4 PM PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARKUS A. VASQUEZ/ Primary Examiner, Art Unit 2121