DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to the amendment filed on November 25, 2025. The amendment applies to the original application filed on December 27, 2021.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on November 25, 2025 has been entered.
Response to Amendment
The Examiner thanks the applicant for the remarks, amendments, and arguments.
Regarding Claim Rejections – 35 U.S.C. 112(b)
Applicant Remarks:
The applicant has amended claims 11 and 20. The applicant believes that the amendments clarify the claims such that they no longer recite indefinite terms. For this reason, the applicant believes that the 112(b) rejection should be withdrawn.
Examiner Response:
The examiner has reviewed the amended claims and finds that they no longer recite indefinite terms. Therefore, the rejection under 35 U.S.C. 112(b) has been withdrawn.
Regarding Claim Rejections – 35 U.S.C. 101
Applicant Remarks:
The applicant has made amendments to the claims so that they no longer recite mental concepts or abstract ideas. The applicant argues that the amended claims recite a process that uses a proxy or compressed machine learning model to perform the actions of the method. Therefore, a human would not be able to perform the actions mentally because a proxy network is used in this system. Further, according to the applicant, a human would not be able to perform the actions in the amended claims mentally or with the assistance of pen and paper.
The applicant argues that even if the amended claims still recite an abstract idea or mental concept, the claims as a whole integrate the invention into a practical application. As an example, the applicant states that the claimed invention is able to compress a very large machine learning model to have fewer parameters while still generating similar results. For these reasons, the applicant believes that the rejection under 35 U.S.C. 101 should be withdrawn.
Examiner Response:
After each amendment the examiner is required to reevaluate the claims to ensure they recite patent eligible subject matter. The amendments in this application have been reviewed under the Alice/Mayo test set forth in the MPEP. After evaluation of the independent claims, the examiner still finds that they recite abstract ideas. For example, the claim recites, “selecting, by a computer system, a trained machine learning model and a proxy compressed version of the machine learning model”. This process is considered an abstract idea by the examiner because it involves the concepts of evaluating data, i.e., the models, and arbitrarily selecting the models. A human, with the assistance of pen and paper, or a generic computer, would be able to review and evaluate machine learning models. A human is also able to use the evaluation of the models to formulate an opinion. For these reasons the examiner believes this limitation recites an abstract idea. There are other examples of claim limitations that recite abstract ideas; see the 101 rejection below.
If any of the claims recite an abstract idea, or other patent ineligible subject matter, further analysis is required to determine whether the claims recite significantly more than the abstract idea. In this step the examiner must review all of the remaining limitations as a whole to determine whether they integrate the abstract idea into a practical application and whether the claims recite an improvement to a technical field or device. While evaluating the remaining limitations the examiner has found some limitations that fail to provide sufficient evidence of an improvement to a technical field. As stated in the 101 rejection, some of the remaining limitations recite subject matter which is considered well-understood, routine, and conventional, or merely steps or instructions to apply the abstract idea. Examples from the independent claims include, “executing the trained machine learning model on a first dataset to generate first model metadata …” and “executing the compressed machine learning model on a second dataset …”. These are two examples of steps or instructions which are also well known. Many machine learning systems execute the model after training, and these are merely steps to implement an abstract idea.
Finally, for the reasons stated above and after reviewing the amended claims, the specification, and the applicant's remarks, the examiner believes that the currently amended claims still recite patent ineligible subject matter. Therefore, the rejection under 35 U.S.C. 101 is maintained; see the 101 rejection below.
Regarding Claim Rejections – 35 U.S.C. 103
Applicant Remarks:
The applicant has amended the claims to clarify and distinguish the claims from the cited art. The applicant has amended the claims to better clarify the unit tests and notes that the cited art fails to teach unit tests or similar testing concepts. For these reasons, and in view of the amendments made, the applicant argues that the current claims are novel and distinguishable from the cited art; therefore, the rejection under 35 U.S.C. 103 should be withdrawn.
Examiner Response:
After each amendment the examiner must perform a thorough and complete search to determine whether the amended claims recite previously disclosed subject matter. After completing this search, the examiner has taken the amendments into consideration and has found art which discloses the claimed subject matter. It is also of note that the previously presented art is still applicable and is used in the 103 rejection below. Therefore, the rejection under 35 U.S.C. 103 is maintained; see the 103 rejection below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (“2019 PEG”).
Claim 1
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
Claim 1 recites “A method, comprising:” and therefore is directed to the statutory category of a process.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites, inter alia:
“selecting, by a computer system, a trained machine learning model and a proxy compressed version of the machine learning model” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to use a generic computer to observe and select a machine learning model and corresponding compressed model. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).
“executing, by the computing system, the plurality of unit tests to compare the first model metadata to the second model metadata; and” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate a model using tests and other evaluation techniques. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).
“determining, based on results of the unit tests, whether a second deviation between the first model metadata and the second model metadata exceeds a predefined threshold associated with at least one of the metric categories.” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate values and from that evaluation make a judgment or form an opinion. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “executing the trained machine learning model on a first dataset to generate first model metadata comprising at least one of training loss values, validation loss values, or inference outputs;” This limitation amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“executing the compressed machine learning model on a second dataset, the second dataset comprising at least a subset of the first dataset or being substantially equivalent, to generate second model metadata comprising corresponding values to the first model metadata;” This limitation also amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“automatically generating a plurality of unit tests based on one or more types of metric categories comprising at least one of inner model metrics, output metrics, or evolution metrics, wherein a first unit test that is included among the plurality of unit tests is an inner model metric unit test, and wherein the inner model metric unit test attempts to measure a first deviation from an established inner model metric involving at least one of: a single hidden layer, two or more hidden layers, or interactions between the two or more hidden layers;” This limitation also amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “executing the trained machine learning model on a first dataset to generate first model metadata comprising at least one of training loss values, validation loss values, or inference outputs;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“executing the compressed machine learning model on a second dataset, the second dataset comprising at least a subset of the first dataset or being substantially equivalent, to generate second model metadata comprising corresponding values to the first model metadata;” This limitation also amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“automatically generating a plurality of unit tests based on one or more types of metric categories comprising at least one of inner model metrics, output metrics, or evolution metrics, wherein a first unit test that is included among the plurality of unit tests is an inner model metric unit test, and wherein the inner model metric unit test attempts to measure a first deviation from an established inner model metric involving at least one of: a single hidden layer, two or more hidden layers, or interactions between the two or more hidden layers;” This limitation also amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 2
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A process, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites, inter alia:
“further comprising automatically generating unit tests, wherein comparing the metadata from the machine learning model and the metadata from the proxy compressed version of the machine learning model comprises performing the unit tests on the metadata from the machine learning model and the metadata from the proxy compressed version of the machine learning model.” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to compare and evaluate test data from machine learning models. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
This claim does not recite any additional limitations which integrate the abstract idea into a practical application.
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea and thus the claim is subject-matter ineligible.
Claim 3
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A process, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein the unit tests include output metric unit tests or evolution metric unit tests.” This limitation amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein the unit tests include output metric unit tests or evolution metric unit tests,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 4
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A process, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “further comprising generating metadata from the proxy compressed version of the machine learning model upon detection of a change and determining whether a behavior of the model still falls within a given threshold using the metadata from the proxy compressed version of the machine learning model.” This limitation amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “further comprising generating metadata from the proxy compressed version of the machine learning model upon detection of a change and determining whether a behavior of the model still falls within a given threshold using the metadata from the proxy compressed version of the machine learning model,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 5
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A process, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein the change is at least one of a data ETL (Extract-Transform-Load) change, a library update, a library rollback, a codebase change, a hardware change, a pipeline change, a dataset change or combination thereof.” This limitation amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein the change is at least one of a data ETL (Extract-Transform-Load) change, a library update, a library rollback, a codebase change, a hardware change, a pipeline change, a dataset change or combination thereof,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 6
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A process, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “further comprising generating metadata for a second machine learning model and metadata for a second compressed machine learning model corresponding to the second model.” This limitation amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “further comprising generating metadata for a second machine learning model and metadata for a second compressed machine learning model corresponding to the second model,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 7 (Cancelled)
Claim 8
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A process, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “further comprising recommending additional unit tests and presenting a user interface that allows more additional unit tests to be created.” This limitation amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “further comprising recommending additional unit tests and presenting a user interface that allows more additional unit tests to be created,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 9
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A process, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites, inter alia:
“wherein determining whether a behavior of the proxy compressed version of the machine learning model is within a threshold value further comprises determining whether a behavior of the model is within the threshold.” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate data and determine whether it meets specified criteria. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
This claim does not recite any additional limitations which integrate the abstract idea into a practical application.
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea and thus the claim is subject-matter ineligible.
Claim 10
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A process, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein each of the unit tests is a soft unit test or a hard unit test, wherein the unit tests are configured to detect a deviation in behavior.” This limitation amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein each of the unit tests is a soft unit test or a hard unit test, wherein the unit tests are configured to detect a deviation in behavior,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 11
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A process, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein the machine learning model comprises at least a trillion parameters.” This limitation amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein the machine learning model comprises at least a trillion parameters,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 12
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
Claim 12 recites “A non-transitory storage medium having stored therein instructions that are executable by one or more hardware processors to perform operations comprising:” and therefore is directed to the statutory category of an article of manufacture.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites, inter alia:
“selecting, by a computer system, a trained machine learning model and a proxy compressed version of the machine learning model that is generated using a neural network compression technique, wherein the trained machine learning model is provided as an input model to the neural network compression technique to generate the proxy compressed version, and wherein application of the neural network compression technique to the input model results in the proxy compressed version having a reduced number of model parameters relative to the trained machine learning model and further results in behavior metrics of the proxy compressed version being comparable to the machine learning model;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to use a generic computer to observe and select a machine learning model and corresponding compressed model. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).
“executing, by the computing system, the plurality of unit tests to compare the first model metadata to the second model metadata; and” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate a model using tests and other evaluation techniques. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).
“determining, based on results of the unit tests, whether a second deviation between the first model metadata and the second model metadata exceeds a predefined threshold associated with at least one of the metric categories.” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate values and from that evaluation make a judgment or form an opinion. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “executing the trained machine learning model on a first dataset to generate first model metadata comprising at least one of training loss values, validation loss values, or inference outputs;” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“executing the compressed machine learning model on a second dataset, the second dataset comprising at least a subset of the first dataset or being substantially equivalent, to generate second model metadata comprising corresponding values to the first model metadata;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“automatically generating a plurality of unit tests based on one or more types of metric categories comprising at least one of inner model metrics, output metrics, or evolution metrics, wherein a first unit test that is included among the plurality of unit tests is an inner model metric unit test, and wherein the inner model metric unit test attempts to measure a first deviation from an established inner model metric involving at least one of: a single hidden layer, two or more hidden layers, or interactions between the two or more hidden layers;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “executing the trained machine learning model on a first dataset to generate first model metadata comprising at least one of training loss values, validation loss values, or inference outputs;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“executing the compressed machine learning model on a second dataset, the second dataset comprising at least a subset of the first dataset or being substantially equivalent, to generate second model metadata comprising corresponding values to the first model metadata;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“automatically generating a plurality of unit tests based on one or more types of metric categories comprising at least one of inner model metrics, output metrics, or evolution metrics, wherein a first unit test that is included among the plurality of unit tests is an inner model metric unit test, and wherein the inner model metric unit test attempts to measure a first deviation from an established inner model metric involving at least one of: a single hidden layer, two or more hidden layers, or interactions between the two or more hidden layers;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 13
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A machine, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites, inter alia:
“further comprising automatically generating unit tests, wherein comparing the metadata from the machine learning model and the metadata from the proxy compressed version of the machine learning model comprises performing the unit tests on the metadata from the machine learning model and the metadata from the proxy compressed version of the machine learning model.” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to compare and evaluate test data from machine learning models. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
This claim does not recite any additional limitations which integrate the abstract idea into a practical application.
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea and thus the claim is subject-matter ineligible.
Claim 14
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A machine, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein the unit tests include output metric unit tests and evolution metric unit tests,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein the unit tests include output metric unit tests and evolution metric unit tests,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 15
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A machine, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “further comprising generating metadata from the proxy compressed version of the machine learning model upon detection of a change and determining whether a behavior of the model still falls within a given threshold using the metadata from the proxy compressed version of the machine learning model,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “further comprising generating metadata from the proxy compressed version of the machine learning model upon detection of a change and determining whether a behavior of the model still falls within a given threshold using the metadata from the proxy compressed version of the machine learning model,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 16
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A machine, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein the change is at least one of a data ETL change, a library update, a library rollback, a codebase change, a hardware change, a pipeline change, a dataset change or combination thereof,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein the change is at least one of a data ETL change, a library update, a library rollback, a codebase change, a hardware change, a pipeline change, a dataset change or combination thereof,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 17
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A machine, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “further comprising generating metadata for a second machine learning model and metadata for a second compressed machine learning model corresponding to the second model,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “further comprising generating metadata for a second machine learning model and metadata for a second compressed machine learning model corresponding to the second model,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 18
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A machine, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “further comprising recommending additional unit tests and presenting a user interface that allows more additional unit tests to be created,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “further comprising recommending additional unit tests and presenting a user interface that allows more additional unit tests to be created,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 19
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A machine, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites, inter alia:
“wherein determining whether a behavior of the proxy compressed version of the machine learning model is within a threshold value further comprises determining whether a behavior of the model is within the threshold.” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate data and determine if it meets specified criteria. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
This claim does not recite any additional limitations which integrate the abstract idea into a practical application.
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea and thus the claim is subject-matter ineligible.
Claim 20
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A machine, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein each of the unit tests is a soft unit test or a hard unit test, wherein the unit tests are configured to detect a deviation in behavior,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“wherein the machine learning model comprises at least a trillion parameters.” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein each of the unit tests is a soft unit test or a hard unit test, wherein the unit tests are configured to detect a deviation in behavior,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“wherein the machine learning model comprises at least a trillion parameters.” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 6, 9, 12, 13, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Asif et al. (“MOBILE AI”, US 2022/0101184 A1, filed Sept. 2020, hereinafter “Asif”) in view of Fraser et al. (“EvoSuite: Automatic Test Suite Generation for Object-Oriented Software”, Sept. 2011, hereinafter “Fraser”).
Regarding claim 1, Asif discloses, “A method, comprising: selecting, by a computer system, a trained machine learning model and a proxy compressed version of the machine learning model that is generated through a neural network compression technique, wherein the trained machine learning model is provided as an input model to the neural network compression technique to generate the proxy compressed version, and wherein application of the neural network compression technique to the input model results in the proxy compressed version having a reduced number of model parameters relative to the trained machine learning model and further results in behavior metrics of the proxy compressed version being comparable to the trained machine learning model;” (Detailed Description, pp. 3, [0036]; “Method 100 further comprises acquiring a model at operation 104. Operation 104 may include, for example, downloading a machine learning model from a repository of models that are known to be accurate. The model acquired at operation 104 may be one of multiple different types of models (e.g., CNNs, RNNs, etc.).” and Detailed Description, pp. 4, [0041]; “Method 100 further comprises training the student model at operation 108. Operation 108 may include, for example, inputting known training data into the student model, receiving an output from the student model, comparing the output to the known result, and making adjustments to one or more layers of the student model based on the comparison.” Initially, a model is selected to be compressed. The system in this application is able to select a trained model from a set of models. After the model is selected, it begins the compression process. Later in the method, the compressed model is used, or selected, as a student model and is tested against the original uncompressed model.)
“executing the trained machine learning model on a first dataset to generate first model metadata comprising at least one of training loss values, validation loss values, or inference outputs;” (Detailed Description, pp. 4, [0042]; “In some embodiments, the adjustments made to the layers of the student model as part of operation 108 may be further based on comparisons between the outputs from the teacher models to the output of the student model. In general, an output of the student model (a "student output") and outputs of the teacher models ("teacher outputs") are compared to minimize a difference between them during the training process.” and Detailed Description, pp. 3, [0036]; “The model acquired at operation 104 may be one of multiple different types of models (e.g., CNNs, RNNs, etc.).” The student model in this article is interpreted to be the compressed or proxy model. This student model executes after it is compressed to produce an output. This output is then compared to the original model, interpreted as the teacher model. The output in this article is interpreted to be a model’s metadata. This article discloses that different types of models can be input into the system. This would include CNN models, which typically use forward and backward propagation and loss functions to improve model accuracy.)
“executing the compressed machine learning model on a second dataset, the second dataset comprising at least a subset of the first dataset or being substantially equivalent, to generate second model metadata comprising corresponding values to the first model metadata;” (Detailed Description, pp. 4, [0042]; “In some embodiments, the adjustments made to the layers of the student model as part of operation 108 may be further based on comparisons between the outputs from the teacher models to the output of the student model. In general, an output of the student model (a "student output") and outputs of the teacher models ("teacher outputs") are compared to minimize a difference between them during the training process.” The student model in this article is interpreted to be the compressed or proxy model. This student model executes after it is compressed to produce an output. This output is then compared to the original model, interpreted as the teacher model. The output in this article is interpreted to be a model’s metadata.)
Asif fails to explicitly disclose, “automatically generating a plurality of unit tests based on one or more types of metric categories comprising at least one of inner model metrics, output metrics, or evolution metrics, wherein a first unit test that is included among the plurality of unit tests is an inner model metric unit test, and wherein the inner model metric unit test attempts to measure a first deviation from an established inner model metric involving at least one of: a single hidden layer, two or more hidden layers, or interactions between the two or more hidden layers;”, “executing, by the computing system, the plurality of unit tests to compare the first model metadata to the second model metadata; and”, and “determining, based on results of the unit tests, whether a second deviation between the first model metadata and the second model metadata exceeds a predefined threshold associated with at least one of the metric categories.”
However, Fraser discloses, “automatically generating a plurality of unit tests based on one or more types of metric categories comprising at least one of inner model metrics, output metrics, or evolution metrics, wherein a first unit test that is included among the plurality of unit tests is an inner model metric unit test, and wherein the inner model metric unit test attempts to measure a first deviation from an established inner model metric involving at least one of: a single hidden layer, two or more hidden layers, or interactions between the two or more hidden layers;” (Introduction, pp. 417; "EvoSuite fully automatically produces these test suites for individual classes or entire projects, without requiring complicated manual steps. The tool is freely available, and can be used on the command line, as a plugin to the Eclipse development platform, or through a web interface." This article discloses a known unit testing software called EvoSuite. This software will automatically generate unit tests for many forms of software, including machine learning models. It can take in many forms of data depending on the unit test, which can include output data of a model.)
“executing, by the computing system, the plurality of unit tests to compare the first model metadata to the second model metadata; and” (Mutation-Based Assertion Generation, pp. 418; "After the test case generation process EvoSuite runs each test case on the unmodified software as well as all mutants that are covered by the test case, while recording information on which mutants are detected by which assertions. Then, EvoSuite calculates a reduced set of assertions that is sufficient in order to detect all the mutants of the UUT that the given test case may reveal." After the test cases are generated, they can be executed by this software. The unit tests will run at a time specified by the developer, or automatically, depending on where in the software code the test command is placed.)
“determining, based on results of the unit tests, whether a second deviation between the first model metadata and the second model metadata exceeds a predefined threshold associated with at least one of the metric categories.” (Mutation-Based Assertion Generation, pp. 417; "The aim of producing a coverage test suite is to provide the developer with a representative set of test cases. This allows the software engineer to check and capture the functional correctness of the UUT that cannot be captured with automated oracles. In order to do so, test cases need some kind of manual oracles, which in the case of unit tests typically are assertions. The test oracle problem is one of the main problems in traditional white-box test generation. Given an automatically generated unit test, there is only a finite number of things one can assert - the choice of assertions is defined by the possible observations one can make on the public API of the UUT and its dependent classes" The unit tests generated with this software can include assertion tests, which allow a developer to set testing metrics that make certain assertions about the tested code. These include, but are not limited to, pass/fail assertions, output value equality assertions, values within a threshold, etc.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Asif and Fraser. Asif teaches a machine learning model compression technique that generates a compressed version of an initial machine learning model and compares the original and compressed models for similar performance. Fraser teaches automatic unit test generation for partial or complete machine learning models. One of ordinary skill would have been motivated to combine a system that is able to compress and test machine learning models with a system that is able to automatically generate unit tests for different machine learning models, in order to improve the accuracy of stored models: “EvoSuite is a tool that automatically produces test suites for Java programs that achieve high code coverage and provide assertions. EvoSuite implements several novel techniques, leading to higher structural coverage and an efficient selection of assertions based on seeded defects, which is a critical feature that other Java tools miss. EvoSuite currently supports branch coverage and mutation testing as test objectives, but we are also working on adding further criteria, such as criteria based on data flow, and a further focus of our research is to produce more readable test cases [6].” (Fraser, Conclusions, pp. 419).
Regarding claim 2, Fraser discloses, “automatically generating unit tests, wherein comparing the metadata from the machine learning model and the metadata from the proxy compressed version of the machine learning model comprises performing the unit tests on the metadata from the machine learning model and the metadata from the proxy compressed version of the machine learning model.” (Abstract, pp. 416; "This paper presents EvoSuite, a tool that automatically generates test cases with assertions for classes written in Java code. To achieve this, EvoSuite applies a novel hybrid approach that generates and optimizes whole test suites towards satisfying a coverage criterion. For the produced test suites, EvoSuite suggests possible oracles by adding small and effective sets of assertions that concisely summarize the current behavior; these assertions allow the developer to detect deviations from expected behavior, and to capture the current behavior in order to protect against future defects breaking this behavior." This article discloses an automated unit testing software. This software contains an API that allows the developer to generate and use unit testing on any form of software, including, but not limited to, testing outputs of machine learning models and comparing them within given thresholds. This software will also allow for alerting the user of changes in the test code which resulted in failed assertion tests.)
Regarding claim 6, Asif discloses, “generating metadata for a second machine learning model and metadata for a second compressed machine learning model corresponding to the second model.” (Detailed Description, pp. 3, [0037]; “Method 100 further comprises pruning the selected machine learning model based on the performance requirements at operation 106. Operation 106 may include, for example, deleting one or more layers from the selected model.” The system in this application is able to select an already pretrained model and compress it, designating a teacher, or original, model and a student, or compressed, model. The compressed model is a representation of the original model with fewer parameters. This is achieved using pruning techniques.)
Regarding claim 9, Asif discloses, “wherein determining whether a behavior of the proxy compressed version of the machine learning model is within a threshold value further comprises determining whether a behavior of the model is within the threshold.” (Detailed Description, pp. 5, [0045]; “Method 100 further comprises determining whether the trained student model meets performance requirements at operation 112. Operation 112 may include, for example, comparing the performance of the student model determined at operation 110 to the performance requirements gathered at operation 102. As an example, operation 112 may include determining whether the student model's memory usage is below a maximum threshold indicated by device hardware, whether the student model's inference speed is above a minimum set by a user, and so on.” After the model is compressed, this system will evaluate the model to ensure it meets performance requirements. This will evaluate both the student and the teacher model to ensure that the student model meets the specified criteria.)
Regarding claim 12, Asif discloses, “A non-transitory storage medium having stored therein instructions that are executable by one or more hardware processors to perform operations comprising:” (Detailed Description, pp. 12, [0104]; “The computer system 800 may contain one or more general-purpose programmable central processing units (CPUs) 802, some or all of which may include one or more cores 804A, 804B, 804C, and 804D, herein generically referred to as the CPU 802. In some embodiments, the computer system 800 may contain multiple processors typical of a relatively large system; however, in other embodiments the computer system 800 may alternatively be a single CPU system. Each CPU 802 may execute instructions stored in the memory subsystem 808 on a CPU core 804 and may comprise one or more levels of on-board cache.” Asif discloses the use of a computing system containing processors coupled to memory which contains computer instructions to execute the disclosed process.)
“selecting, by a computer system, a trained machine learning model and a proxy compressed version of the machine learning model that is generated using a neural network compression technique, wherein the trained machine learning model is provided as an input model to the neural network compression technique to generate the proxy compressed version, and wherein application of the neural network compression technique to the input model results in the proxy compressed version having a reduced number of model parameters relative to the trained machine learning model and further results in behavior metrics of the proxy compressed version being comparable to the machine learning model;” (Detailed description, pp. 3, [0036]; “Method 100 further comprises acquiring a model at operation 104. Operation 104 may include, for example, downloading a machine learning model from a repository of models that are known to be accurate. The model acquired at operation 104 may be one of multiple different types of models (e.g., CNNs, RNNs, etc.).” And (Detailed Description, pp. 4, [0041]; “Method 100 further comprises training the student model at operation 108. Operation 108 may include, for example, inputting known training data into the student model, receiving an output from the student model, comparing the output to the known result, and making adjustments to one or more layers of the student model based on the comparison.” Initially, a model is selected to be compressed. The model in this application is able to select a trained model from a set of models. After the model is selected, the compression process begins. Later in the method, the compressed model is used, or selected, to be the student model and is tested against the original, uncompressed model.)
“executing the trained machine learning model on a first dataset to generate first model metadata comprising at least one of training loss values, validation loss values, or inference outputs;” (Detailed Description, pp. 4, [0042]; “In some embodiments, the adjustments made to the layers of the student model as part of operation 108 may be further based on comparisons between the outputs from the teacher models to the output of the student model. In general, an output of the student model (a "student output") and outputs of the teacher models ("teacher outputs") are compared to minimize a difference between them during the training process.” And (Detailed Description, pp. 3, [0036]; “The model acquired at operation 104 may be one of multiple different types of models (e.g., CNNs, RNNs, etc.).” The student model in this article is interpreted to be the compressed or proxy model. This student model executes after it is compressed to produce an output. This output is then compared to the original model, interpreted as the teacher model. The output in this article is interpreted to be a model’s metadata. This article discloses that different types of models can be input into the system. This would include CNN models, which typically use forward and backward propagation and loss functions to improve model accuracy.)
“executing the compressed machine learning model on a second dataset, the second dataset comprising at least a subset of the first dataset or being substantially equivalent, to generate second model metadata comprising corresponding values to the first model metadata;” (Detailed Description, pp. 4, [0042]; “In some embodiments, the adjustments made to the layers of the student model as part of operation 108 may be further based on comparisons between the outputs from the teacher models to the output of the student model. In general, an output of the student model (a "student output") and outputs of the teacher models ("teacher outputs") are compared to minimize a difference between them during the training process.” The student model in this article is interpreted to be the compressed or proxy model. This student model executes after it is compressed to produce an output. This output is then compared to the original model, interpreted as the teacher model. The output in this article is interpreted to be a model’s metadata.)
Asif fails to explicitly disclose, “automatically generating a plurality of unit tests based on one or more types of metric categories comprising at least one of inner model metrics, output metrics, or evolution metrics, wherein a first unit test that is included among the plurality of unit tests is an inner model metric unit test, and wherein the inner model metric unit test attempts to measure a first deviation from an established inner model metric involving at least one of: a single hidden layer, two or more hidden layers, or interactions between the two or more hidden;”, “executing, by the computing system, the plurality of unit tests to compare the first model metadata to the second model metadata; and”, and “determining, based on results of the unit tests, whether a second deviation between the first model metadata and the second model metadata exceeds a predefined threshold associated with at least one of the metric categories.”.
However, Fraser discloses, “automatically generating a plurality of unit tests based on one or more types of metric categories comprising at least one of inner model metrics, output metrics, or evolution metrics, wherein a first unit test that is included among the plurality of unit tests is an inner model metric unit test, and wherein the inner model metric unit test attempts to measure a first deviation from an established inner model metric involving at least one of: a single hidden layer, two or more hidden layers, or interactions between the two or more hidden;” (Introduction, pp. 417; "EvoSuite fully automatically produces these test suites for individual classes or entire projects, without requiring complicated manual steps. The tool is freely available, and can be used on the command line, as a plugin to the Eclipse development platform, or through a web interface." This article discloses a known unit testing software called EvoSuite. This will automatically generate unit tests for many forms of software, including machine learning models. This can take in many forms of data depending on the unit test, which can include output data of a model.)
“executing, by the computing system, the plurality of unit tests to compare the first model metadata to the second model metadata; and” (Mutation-Based Assertion Generation, pp. 418; "After the test case generation process EvoSuite runs each test case on the unmodified software as well as all mutants that are covered by the test case, while recording information on which mutants are detected by which assertions. Then, EvoSuite calculates a reduced set of assertions that is sufficient in order to detect all the mutants of the UUT that the given test case may reveal." After the test cases are generated, they can be executed by this software. The unit tests will run at a time specified by the developer, or automatically, depending on where in the software code the test command is placed.)
“determining, based on results of the unit tests, whether a second deviation between the first model metadata and the second model metadata exceeds a predefined threshold associated with at least one of the metric categories.” (Mutation-Based Assertion Generation, pp. 417; "The aim of producing a coverage test suite is to provide the developer with a representative set of test cases. This allows the software engineer to check and capture the functional correctness of the UUT that cannot be captured with automated oracles. In order to do so, test cases need some kind of manual oracles, which in the case of unit tests typically are assertions. The test oracle problem is one of the main problems in traditional white-box test generation. Given an automatically generated unit test, there is only a finite number of things one can assert - the choice of assertions is defined by the possible observations one can make on the public API of the UUT and its dependent classes" The unit tests that can be generated with this software can include assertion tests. A developer can set the testing metrics to make certain assertions about the tested code. This includes, but is not limited to, pass/fail assertions, output value equality assertions, values within a threshold, etc.)
Regarding claim 13, Fraser discloses, “automatically generating unit tests, wherein comparing the metadata from the machine learning model and the metadata from the proxy compressed version of the machine learning model comprises performing the unit tests on the metadata from the machine learning model and the metadata from the proxy compressed version of the machine learning model.” (Abstract, pp. 416; "This paper presents EvoSuite, a tool that automatically generates test cases with assertions for classes written in Java code. To achieve this, EvoSuite applies a novel hybrid approach that generates and optimizes whole test suites towards satisfying a coverage criterion. For the produced test suites, EvoSuite suggests possible oracles by adding small and effective sets of assertions that concisely summarize the current behavior; these assertions allow the developer to detect deviations from expected behavior, and to capture the current behavior in order to protect against future defects breaking this behavior." This article discloses an automated unit testing software. This software contains an API that allows the developer to generate and use unit tests on any form of software, which can include, but is not limited to, testing outputs of machine learning models and comparing them within given thresholds. This software will also alert the user to changes in the tested code that result in failed assertion tests.)
Regarding claim 17, Asif discloses, “further comprising generating metadata for a second machine learning model and metadata for a second compressed machine learning model corresponding to the second model.” (Detailed Description, pp. 3, [0036]; “Method 100 further comprises acquiring a model at operation 104. Operation 104 may include, for example, downloading a machine learning model from a repository of models that are known to be accurate. The model acquired at operation 104 may be one of multiple different types of models (e.g., CNNs, RNNs, etc.).” And (Detailed Description, pp. 5, [0044]; “Method 100 further comprises evaluating performance of the pruned/trained student model at operation 110. Operation 110 may include, for example, inputting different known data (e.g., a second set of known data different than the training data) into the student model and analyzing performance of the student model in determining an output.” The model in Asif is compatible with many different machine learning models. This article is able to take in a machine learning model from a set of models and compress it. The compressed model will be a version of the original, and tests are required to ensure compressed model accuracy. If this is not met, the system will train a new compressed model.)
Regarding claim 19, Asif discloses, “wherein determining whether a behavior of the proxy compressed version of the machine learning model is within a threshold value further comprises determining whether a behavior of the model is within the threshold.” (Detailed Description, pp. 5, [0045]; “Method 100 further comprises determining whether the trained student model meets performance requirements at operation 112. Operation 112 may include, for example, comparing the performance of the student model determined at operation 110 to the performance requirements gathered at operation 102. As an example, operation 112 may include determining whether the student model's memory usage is below a maximum threshold indicated by device hardware, whether the student model's inference speed is above a minimum set by a user, and so on.” After the compressed model is generated and trained, it is tested against the original model. The compressed model must meet performance requirements to ensure accuracy, or the system will generate a new student model and train it. This model is able to determine if the compressed model meets a performance threshold.)
Claims 3, 8, 10, 14, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Asif and Fraser in view of Lui et al., (Lui et al., “SYSTEM AND METHOD FOR TESTING MACHINE LEARNING”, Filed Apr. 2021, hereinafter “Lui”).
Regarding claim 3, Lui discloses, “wherein the unit tests include output metric unit tests or evolution metric unit tests.” (Detailed Description, pp. 16, [0084], lines 17-23; "Combining the three approaches above, it looks like there is a reasonable way to "unit test" least squares learning algorithms, by testing whether the code is written properly, the algorithm trains and the algorithm generalizes, using the language of statistical hypothesis testing. Applicants recall that this can be viewed as a generalization of unit test, since the expectation of an ML is that it learns from training, generalizes to validation and is robust to test time distributional shifts. As ML software involves randomness, this is a reasonable expected behavior one can hope for." In this invention the applicant discloses how the unit tests will be executed. These tests will include looking at the input of the code and how the code changes over time.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Asif, Fraser and Lui. Asif teaches a machine learning model compression technique where they are able to generate a compressed version of an initial machine learning model and compare the original and compressed models for similar performance. Fraser teaches automatic unit test generation for partial or complete machine learning models. Lui teaches a machine learning model which is able to monitor and evaluate other machine learning models using different testing techniques. One of ordinary skill would have motivation to combine a system that is able to compress and test machine learning models and a system that is able to automatically generate unit tests for different machine learning models to improve accuracy of stored models with a machine learning system which is able to detect drifting machine learning models using different testing techniques, “An objective of the proposed technical approach described herein in various embodiments is to distinguish between different failure modes that plague machine learning systems: (1) there is little or no signal in the data; (2) signal is present, but the model class is unable to learn such information; (3) there is too much change between training and test distributions. Another potential issue is simply that whether the ML system code is written properly (the system can learn from simulated datasets where the pattern is known).” (Lui, Summary, pp. 3, [0011], In. 13-18.).
Regarding claim 8, Lui discloses, “further comprising recommending additional unit tests and presenting a user interface that allows more additional unit tests to be created.” (Summary, pp. 5, [0019], lines 3-6; "In some embodiments, the system described herein is configured to automatically generate suggestions of remedial steps and may provide interface buttons or other interactive controls where a user may be able to automatically instantiate a suggested remedial step, such as obtaining a different training or test distribution, among others.", This system can be used to generate a user interface to look at test results or alter the unit tests. This system also gives the user additional help to correct the code being tested.)
Regarding claim 10, Lui discloses, “wherein each of the unit tests is a soft unit test or a hard unit test, wherein the unit tests are configured to detect a deviation in behavior.” (Detailed Description, pp. 12, [0059], lines 20-26; "Perhaps one of the simplest ways to illustrate testing ML systems is through least square linear regression. The purpose of this section is to prepare describe the unit testing ML problem. Applicants begin by looking at how this can be done for linear least squares. When a linear model doesn't learn from the training dataset, it is typically due to two reasons: 1 (model problem). the model learning code is not implemented correctly; 2. (data problem) the dataset's (X, Y) contains no learnable information (e.g. the dataset contains no learnable pattern or the data is improperly processed).", This system is designed to test multiple different points in the code using different granularities of unit tests. Both the model and the dataset are tested.)
Regarding claim 14, Lui discloses, “wherein the unit tests include output metric unit tests and evolution metric unit tests.” (Detailed Description, pp. 16, [0084], lines 17-23; "Combining the three approaches above, it looks like there is a reasonable way to "unit test" least squares learning algorithms, by testing whether the code is written properly, the algorithm trains and the algorithm generalizes, using the language of statistical hypothesis testing. Applicants recall that this can be viewed as a generalization of unit test, since the expectation of an ML is that it learns from training, generalizes to validation and is robust to test time distributional shifts. As ML software involves randomness, this is a reasonable expected behavior one can hope for." In this invention the applicant discloses how the unit tests will be executed. These tests will include looking at the input of the code and how the code changes over time.)
Regarding claim 18, Lui discloses, “further comprising recommending additional unit tests and presenting a user interface that allows more additional unit tests to be created.” (Summary, pp. 5, [0019], lines 3-6; "In some embodiments, the system described herein is configured to automatically generate suggestions of remedial steps and may provide interface buttons or other interactive controls where a user may be able to automatically instantiate a suggested remedial step, such as obtaining a different training or test distribution, among others.", This system can be used to generate a user interface to look at test results or alter the unit tests. This system also gives the user additional help to correct the code being tested.)
Claims 4, 5, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Asif and Fraser in view of Hasegawa, (Hasegawa, “MACHINE-LEARNING MODEL FRAUD DETECTION SYSTEM AND FRAUD DETECTION METHOD”, Filed Apr. 2018, hereinafter “Hasegawa”).
Regarding claim 4, Hasegawa discloses, “further comprising generating metadata from the proxy compressed version of the machine learning model upon detection of a change” (Means for Solving the Problems, pp. 11, Col. 4, lines 4-16; "Further, according to the present invention, either one of the above machine-learning model fraud detection systems is such that, when receiving a test data transmission request from the user apparatus using the test data-trained model, the license management unit reads the test data from the fraud detection data holding unit and transmits the test data including dummy data to the user apparatus, and the fraud determination unit of the model verification unit inputs output data obtained by executing the test data in a user model in the user apparatus, and compares the output data with the output value stored in the fraud detection data holding unit to determine whether the user model is fraudulent or not.", This invention will begin the process of validating the user model after receiving a test data transmission request.)
Fraser and Hasegawa fail to explicitly disclose, “and determining whether a behavior of the model still falls within a given threshold using the metadata from the proxy compressed version of the machine learning model.”.
However, Asif discloses, “and determining whether a behavior of the model still falls within a given threshold using the metadata from the proxy compressed version of the machine learning model.” (Detailed Description, pp. 5, [0045]; “Method 100 further comprises determining whether the trained student model meets performance requirements at operation 112. Operation 112 may include, for example, comparing the performance of the student model determined at operation 110 to the performance requirements gathered at operation 102. As an example, operation 112 may include determining whether the student model's memory usage is below a maximum threshold indicated by device hardware, whether the student model's inference speed is above a minimum set by a user, and so on.” This model is able to evaluate the compressed model and ensure it meets specified performance standards. This model will evaluate the original and compressed model using input data to produce an outcome.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Asif, Fraser and Hasegawa. Asif teaches a machine learning model compression technique where they are able to generate a compressed version of an initial machine learning model and compare the original and compressed models for similar performance. Fraser teaches automatic unit test generation for partial or complete machine learning models. Hasegawa teaches a machine learning model which is able to monitor and evaluate other machine learning models for fraud or fraudulent data. One of ordinary skill would have motivation to combine a system that is able to compress and test machine learning models and a system that is able to automatically generate unit tests for different machine learning models to improve accuracy of stored models with a machine learning system which is able to detect fraud in machine learning models and fraudulent data, "Thus, there are advantages of being able to determine whether the user model is fraudulent or not easily by the license/model management apparatus generating the test data trained model to obtain specific output with respect to the test data so as to prevent use of a fraudulent model and hence improve the reliability of the model." (Hasegawa, Advantageous Effect of the Invention, Col. 6 and 7, Ln. 64-67 and 1-3).
Regarding claim 5, Hasegawa discloses, “wherein the change is at least one of a data ETL (Extract-Transform-Load) change, a library update, a library rollback, a codebase change, a hardware change, a pipeline change, a dataset change or combination thereof.” (Means for Solving the Problems, pp. 11, Col. 4, lines 4-16; "Further, according to the present invention, either one of the above machine-learning model fraud detection systems is such that, when receiving a test data transmission request from the user apparatus using the test data-trained model, the license management unit reads the test data from the fraud detection data holding unit and transmits the test data including dummy data to the user apparatus, and the fraud determination unit of the model verification unit inputs output data obtained by executing the test data in a user model in the user apparatus, and compares the output data with the output value stored in the fraud detection data holding unit to determine whether the user model is fraudulent or not.", This invention discloses a fraud protection against unauthorized users. When a change in the test data occurs, this system will check the stored trained model with the data change to ensure the change in data is correct and warranted.)
Regarding claim 15, Hasegawa discloses, “further comprising generating metadata from the proxy compressed version of the machine learning model upon detection of a change” (Means for Solving the Problems, pp. 11, Col. 4, lines 4-16; "Further, according to the present invention, either one of the above machine-learning model fraud detection systems is such that, when receiving a test data transmission request from the user apparatus using the test data-trained model, the license management unit reads the test data from the fraud detection data holding unit and transmits the test data including dummy data to the user apparatus, and the fraud determination unit of the model verification unit inputs output data obtained by executing the test data in a user model in the user apparatus, and compares the output data with the output value stored in the fraud detection data holding unit to determine whether the user model is fraudulent or not.", This invention will begin the process of validating the user model after receiving a test data transmission request.)
Hasegawa and Fraser fail to explicitly disclose, “and determining whether a behavior of the model still falls within a given threshold using the metadata from the proxy compressed version of the machine learning model.”.
However, Asif discloses, “and determining whether a behavior of the model still falls within a given threshold using the metadata from the proxy compressed version of the machine learning model.” (Detailed Description, pp. 5, [0045]; “Method 100 further comprises determining whether the trained student model meets performance requirements at operation 112. Operation 112 may include, for example, comparing the performance of the student model determined at operation 110 to the performance requirements gathered at operation 102. As an example, operation 112 may include determining whether the student model's memory usage is below a maximum threshold indicated by device hardware, whether the student model's inference speed is above a minimum set by a user, and so on.” This model is able to evaluate the compressed model and ensure it meets specified performance standards. This model will evaluate the original and compressed model using input data to produce an outcome.)
Regarding claim 16, Hasegawa discloses, “wherein the change is at least one of a data ETL change, a library update, a library rollback, a codebase change, a hardware change, a pipeline change, a dataset change or combination thereof.” (Means for Solving the Problems, pp. 11, Col. 4, lines 4-16; "Further, according to the present invention, either one of the above machine-learning model fraud detection systems is such that, when receiving a test data transmission request from the user apparatus using the test data-trained model, the license management unit reads the test data from the fraud detection data holding unit and transmits the test data including dummy data to the user apparatus, and the fraud determination unit of the model verification unit inputs output data obtained by executing the test data in a user model in the user apparatus, and compares the output data with the output value stored in the fraud detection data holding unit to determine whether the user model is fraudulent or not.", This invention discloses a fraud protection against unauthorized users. When a change in the test data occurs, this system will check the stored trained model with the data change to ensure the change in data is correct and warranted.)
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Asif and Fraser in view of Narayanan et al., (Narayanan et al., “Efficient large-scale language model training on GPU clusters using megatron-LM”, Nov. 2021, hereinafter “Narayanan”).
Regarding claim 11, Asif and Fraser fail to explicitly disclose the limitations of this claim; however, Narayanan discloses, “wherein the machine learning model” (Introduction, pp. 3; “In our experiments, we demonstrate close to linear scaling to 3072 A100 GPUs, with an achieved end-to-end training throughput of 163 teraFLOP/s per GPU (including communication, data processing, and optimization), and an aggregate throughput of 502 petaFLOP/s, on a GPT model [11] with a trillion parameters using mixed precision.” This article discloses a machine learning model which contains at least a trillion parameters. This model can be used as an example model to be compressed by Asif. As stated in Asif, many different types of models can be input into that model for compression. The model in Narayanan can be tested using the testing suite in Fraser as well. Fraser states: “EvoSuite fully automatically produces these test suites for individual classes or entire projects, without requiring complicated manual steps. The tool is freely available, and can be used on the command line, as a plugin to the Eclipse development platform, or through a web interface.”, which means this testing suite can be applied to whole projects, which would include a model similar to the GPT model introduced in Narayanan.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Asif, Fraser and Narayanan. Asif teaches a machine learning model compression technique where they are able to generate a compressed version of an initial machine learning model and compare the original and compressed models for similar performance. Fraser teaches automatic unit test generation for partial or complete machine learning models. Narayanan teaches a complex GPT machine learning model which contains over a trillion parameters. One of ordinary skill would have motivation to combine a system that is able to compress and test machine learning models and a system that is able to automatically generate unit tests for different machine learning models to improve accuracy of stored models with an example machine learning model which contains a large number of parameters, “Method 100 further comprises acquiring a model at operation 104. Operation 104 may include, for example, downloading a machine learning model from a repository of models that are known to be accurate. The model acquired at operation 104 may be one of multiple different types of models (e.g., CNNs, RNNs, etc.).” (Asif, Detailed Description, pp. 3, [0036]) and “EvoSuite fully automatically produces these test suites for individual classes or entire projects, without requiring complicated manual steps. The tool is freely available, and can be used on the command line, as a plugin to the Eclipse development platform, or through a web interface.” (Fraser, Introduction, pp. 417).
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Asif and Fraser in view of Lui and Narayanan.
Regarding claim 20, Lui discloses, “wherein each of the unit tests is a soft unit test or a hard unit test, wherein the unit tests are configured to detect a deviation in behavior,” (Detailed Description, pp. 12, [0059], lines 20- 26; "Perhaps one of the simplest ways to illustrate testing ML systems is through least square linear regression. The purpose of this section is to prepare describe the unit testing ML problem. Applicants begin by looking at how this can be done for linear least squares. When a linear model doesn't learn from the training dataset, it is typically due to two reasons: 1 (model problem). the model learning code is not implemented correctly; 2. (data problem) the dataset's (X, Y) contains no learnable information (e.g. the dataset contains no learnable pattern or the data is improperly processed).", This system is designed to test multiple different points in the code using unit tests of different granularity. Both the model and the data set are tested.)
Asif, Fraser and Lui fail to explicitly disclose, “wherein the machine learning model comprises at least a trillion parameters.”
However, Narayanan discloses, “wherein the machine learning model comprises at least a trillion parameters.” (Introduction, pp. 3; “In our experiments, we demonstrate close to linear scaling to 3072 A100 GPUs, with an achieved end-to-end training throughput of 163 teraFLOP/s per GPU (including communication, data processing, and optimization), and an aggregate throughput of 502 petaFLOP/s, on a GPT model [11] with a trillion parameters using mixed precision.” This article discloses a machine learning model which contains at least a trillion parameters. This model can be used as an example model to be compressed by Asif. This model can be tested using the testing suite in Fraser as well. Fraser states: “EvoSuite fully automatically produces these test suites for individual classes or entire projects, without requiring complicated manual steps. The tool is freely available, and can be used on the command line, as a plugin to the Eclipse development platform, or through a web interface.” This means the testing suite can be applied to whole projects, which would include a model similar to GPT-3 introduced in Narayanan.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Asif, Fraser, Lui and Narayanan. Asif teaches a machine learning model compression technique that generates a compressed version of an initial machine learning model and compares the original and compressed models for similar performance. Fraser teaches automatic unit test generation for partial or complete machine learning models. Lui teaches a machine learning model which is able to monitor and evaluate other machine learning models using different testing techniques. Narayanan teaches a complex machine learning model, GPT-3, which contains over a trillion parameters. One of ordinary skill would have had motivation to combine a system that is able to compress and test machine learning models, a system that is able to automatically generate unit tests for different machine learning models to improve the accuracy of stored models, and a machine learning system which is able to detect drifting machine learning models using different testing techniques, with an example machine learning model which contains trillions of parameters: “Method 100 further comprises acquiring a model at operation 104. Operation 104 may include, for example, downloading a machine learning model from a repository of models that are known to be accurate. The model acquired at operation 104 may be one of multiple different types of models (e.g., CNNs, RNNs, etc.).” (Asif, Detailed Description, pp. 3, [0036]) and “EvoSuite fully automatically produces these test suites for individual classes or entire projects, without requiring complicated manual steps. The tool is freely available, and can be used on the command line, as a plugin to the Eclipse development platform, or through a web interface.” (Fraser, Introduction, pp. 417).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL MICHAEL GALVIN-SIEBENALER whose telephone number is (571)272-1257. The examiner can normally be reached Monday - Friday 8AM to 5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PAUL M GALVIN-SIEBENALER/Examiner, Art Unit 2147
/VIKER A LAMARDO/Supervisory Patent Examiner, Art Unit 2147