Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
2. Claims 1-20 are pending in the instant application.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 07/24/2023 is in compliance with the
provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Priority
The examiner acknowledges that, pursuant to 35 U.S.C. § 119(e), this application is entitled to and claims the benefit of the filing date of U.S. Provisional App. No. 63/397,716, filed August 12, 2022, entitled “TECHNIQUES FOR RUNNING RELIABLE AND ROBUST CAUSAL INFERENCES AT SCALE”, the content of which is incorporated herein by reference in its entirety for all purposes.
Examiner note: claim 10 recites “a first method is 1/number of methods” and claim 14 recites “reliability metric is determined using 1/confidence score”. The examiner is not sure what “1/number of methods” and “1/confidence score” mean, and the specification makes no mention of them. They are assumed to be typographical errors; however, if they are intended to carry meaning, the applicant’s representative is requested to clarify.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-10 and 15-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US 2023/0274132 to Cai et al. (Cai).
As per claim 1, Cai teaches a system comprising: a plurality of nodes configured to receive input data to calculate an effect of a variable on a group for a plurality of methods, wherein methods in the plurality of methods calculate the effect of the variable for the input data using different logic (Cai: ¶ 0065 teaches that the model includes internal parameters that guide how input data is transformed into output data, such as through a series of nodes and connections within the model; ¶ 0013 teaches that machine learning (ML), deep learning (DL), and/or other artificial machine-driven logic enables machines (e.g., computers, logic circuits, etc.) to use a model to process input data and generate an output based on patterns and/or associations previously learned by the model via a training process. For instance, the model may be trained with data to recognize patterns and/or associations and follow such patterns and/or associations when processing input data such that other input(s) result in output(s) consistent with the recognized patterns and/or associations); an intermediate weight computation system configured to: generate a plurality of sub-weights for methods in the plurality of methods (Cai: ¶ 0027 - the input data is also analyzed in a soft weight generation process to generate a plurality of soft weights), wherein the sub-weights are generated based on a balance metric (Cai: ¶ 0017 - different normalization techniques have been developed, including batch normalization (BN), where BN techniques standardize the data using statistics estimated batch-wise (e.g., the mean and standard deviation are calculated across different batches in a subset (e.g., a mini-batch) of the available training dataset); thus, it determines a balance group (as defined in specification ¶ 0017)), a dissimilarity metric (Cai: ¶ 0017 - different normalization techniques have been developed, including group normalization (GN), where GN techniques divide the channels into groups and estimate the statistics for standardization of data within each group, thereby alleviating the sensitivity to batch size; thus, it determines the similarity and dissimilarity between the groups), and a reliability metric (Cai: ¶ 0017 - different normalization techniques have been developed, including instance normalization (IN), where IN techniques standardize the data using statistics estimated channel-wise (e.g., the mean and standard deviation are calculated across different channels in the mini-batch); such standardization makes the batch data reliable); combine the plurality of sub-weights for methods in the plurality of methods to generate a final weight for the methods; and apply the respective final weight to an intermediate result from a respective method in the plurality of methods to generate a weighted intermediate result for the method (Cai: ¶ 0027 - each one of the soft weights is associated with a corresponding normalization technique and defines the contribution of the corresponding normalization technique to a final normalized output. That is, the output of each normalization technique corresponds to one of multiple different alternate normalized outputs for the input data, each of which is to be used in calculating a final normalized output); and an integration system configured to combine weighted intermediate results for the plurality of methods to generate a final result for the effect of the variable (Cai: ¶ 0027 - the output of each different normalization technique is multiplied by its respective soft weight and then the outcomes are summed in a sum-of-products operation to produce a final normalized output of the DSN process flow).
As per claim 2, Cai teaches the system of claim 1, wherein the input data is analyzed to determine intermediate metrics, wherein the intermediate metrics are used to generate at least one of the sub-weights (Cai: ¶ 0073 - initial input data provided to the model executor).
As per claim 3, Cai teaches the system of claim 1, wherein the intermediate weight computation system is configured to: analyze logic of one or more methods to generate at least one of the sub-weights (Cai: ¶ 0027 - the input data is also analyzed in a soft weight generation process to generate a plurality of soft weights).
As per claim 4, Cai teaches the system of claim 1, wherein the balance metric is based on a balance of a difference of a characteristic in the input data (Cai: ¶ 0085 - normalization engine that is generally applicable to different circumstances by incorporating multiple different normalization techniques that can be dynamically combined in different ways as determined from the unique feature characteristics of the input data being analyzed).
As per claim 5, Cai teaches the system of claim 1, wherein the balance metric is based on a balance between a first group and a second group in the input data, wherein the first group has the variable applied (Cai: ¶ 0018 - existing normalizers typically implement a single normalization technique that is applied at every layer in a neural network such that the different types of normalization methodologies cannot be taken advantage of when appropriate for different layers within a single network architecture).
As per claim 6, Cai teaches the system of claim 5, wherein: the balance metric is based on a first number of members in the first group and a second number of members in the second group, and methods in the plurality of methods are ranked based on a performance associated with different balances (Cai: ¶ 0016 - different input samples (e.g., the underlying data to be analyzed), for training and/or inference, carry distinctive features for which different statistics may be appropriate to standardize the data for improved performance (e.g., more accurate inferences)).
As per claim 7, Cai teaches the system of claim 1, wherein the dissimilarity metric is based on a dissimilarity of a first method to a second method (Cai: ¶ 0088 - weights due to distinctions between the first input data and the second input data).
As per claim 8, Cai teaches the system of claim 1, wherein: the dissimilarity metric for a first method is based on a number of methods that are indicated as having similar logic (Cai: ¶ 0017 - different normalization techniques have been developed in the past including group normalization (GN) where GN techniques divide the channels into groups and estimate the statistics for standardization of data within each group (each group has similar standard data or logic)).
As per claim 9, Cai teaches the system of claim 8, wherein: a sub-weight for the dissimilarity metric for a first method is decreased based on the first method having logic being similar to methods in the number of methods (Cai: ¶ 0053 - soft weighting engine 502 includes an example spatial aggregation analyzer 504 that aggregates the input data to reduce the data to a vector (e.g., the C-dimensional feature vector 308 of FIG. 3)).
As per claim 10, Cai teaches the system of claim 8, wherein: a sub-weight for the dissimilarity metric for a first method is 1/number of methods (Cai: ¶ 0020 - Example normalization engines also include a soft weighting engine to dynamically generate weights indicative of the contribution of the outputs of the multiple different normalizers).
As per claim 15, Cai teaches the system of claim 1, wherein combine the plurality of sub-weights comprises: generate an average of the plurality of sub-weights for a method (Cai: Fig. 7 - multiply ones of the alternate normalized outputs by respective ones of the soft weights and calculate final normalization output as sum of the weighted alternate normalized outputs (an average can easily be generated once the sum of the weighted outputs is known)).
As per claim 16, Cai teaches the system of claim 1, wherein combine the plurality of sub-weights comprises: determine variable values for the plurality of sub-weights, wherein the variable values are generated based on an optimal combination of the sub-weights, and generate the final weight based on the variable values and the plurality of sub-weights for a method (Cai: ¶ 0064 - a training algorithm is used to train a model to operate in accordance with patterns and/or associations based on, for example, training data. In general, the model includes internal parameters that guide how input data is transformed into output data, such as through a series of nodes and connections within the model to transform input data into output data. Among other things, these internal model parameters may define the particular normalization techniques implemented by the DSN engine 414 as well as the process to generate the soft weights 212 that are multiplied with the outputs of the different normalization techniques).
As per claim 17, the claim resembles claim 1 and is rejected under the same rationale.
As per claim 18, the claim resembles claim 2 and is rejected under the same rationale.
As per claim 19, the claim resembles claim 16 and is rejected under the same rationale.
As per claim 20, the claim resembles claim 1 and is rejected under the same rationale while Cai also teaches a non-transitory computer-readable storage medium comprising instructions for controlling the one or more computer processors to be operable (Cai: ¶ 0096 - one non-transitory computer readable medium comprising instructions that, when executed, cause at least one processor to at least generate a plurality of alternate normalized outputs).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0274132 to Cai et al. (Cai) in view of US 2018/0225720 to Singh.
As per claim 11, Cai teaches the system of claim 1; however, Cai does not explicitly teach wherein the reliability metric is based on a confidence score that is associated with the intermediate result for a method.
Singh, however, explicitly teaches wherein the reliability metric is based on a confidence score that is associated with the intermediate result for a method (Singh: ¶ 0057 - the weights may indicate, for example, a reliability of or a confidence in the confidence scores).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Cai in view of Singh to teach wherein the reliability metric is based on a confidence score that is associated with the intermediate result for a method. One would be motivated to do so as the weights may indicate, for example, a reliability of or a confidence in the confidence scores (Singh: ¶ 0057).
As per claim 12, the modified teaching of Cai teaches the system of claim 11, wherein the confidence score is output by the method based on generating the intermediate result (Singh: ¶ 0019 - generate derivative target information associated with the classifications (intermediate result) and then generate a confidence score associated with the target information).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Cai in view of Singh to teach wherein the confidence score is output by the method based on generating the intermediate result. One would be motivated to do so because derivative target information associated with the classifications (the intermediate result) is generated first, and a confidence score associated with that target information is then generated (Singh: ¶ 0019).
As per claim 13, the modified teaching of Cai teaches the system of claim 11, wherein the confidence score is determined by predicting a confidence of generating the intermediate result (Singh: ¶ 0057 - the weights may indicate, for example, a reliability of or a confidence in the confidence scores 362. If data used to generate the target information 226 and/or the confidence score 362 is consistent with a larger quantity of other data (e.g., data used to generate other target information 226 and/or confidence scores 362), the metric component 360 may have more confidence in the confidence score).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Cai in view of Singh to teach wherein the confidence score is determined by predicting a confidence of generating the intermediate result. One would be motivated to do so as the weights may indicate, for example, a reliability of or a confidence in the confidence scores. If data used to generate the target information and/or the confidence score is consistent with a larger quantity of other data (e.g., data used to generate other target information and/or confidence scores), the metric component may have more confidence in the confidence score (Singh: ¶ 0057).
As per claim 14, the modified teaching of Cai teaches the system of claim 11, wherein the sub-weight based on the reliability metric is determined using 1/confidence score (Singh: ¶ 0057 - the metric component calculates or generates one or more weights (sub-weights) for adjusting the confidence scores wherein the weights may indicate, for example, a reliability of or a confidence in the confidence scores).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Cai in view of Singh to teach wherein the sub-weight based on the reliability metric is determined using 1/confidence score. One would be motivated to do so as the metric component calculates or generates one or more weights (sub-weights) for adjusting the confidence scores wherein the weights may indicate, for example, a reliability of or a confidence in the confidence scores (Singh: ¶ 0057).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SM AZIZUR RAHMAN, whose telephone number is (571) 270-7360. The examiner can normally be reached Monday-Friday (telework).
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ali Shayanfar, can be reached at 571-270-1050. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only.
For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SM A RAHMAN/Primary Examiner, Art Unit 2434