Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are presented for examination.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 9, 10, 19, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 9 and 19 each recite “whether a behavior of the model is within the threshold of a behavior of the machine learning model or is similar to the behavior of the compressed model prior to a change in a codebase”. The term “similar” in claims 9 and 19 is a relative term which renders the claims indefinite. The term “similar” is not defined by the claims, the specification does not provide a definite standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention, rendering the claims indefinite.
Claims 10 and 20 each recite “very large machine learning model”. The term “very large” in claims 10 and 20 is a relative term. The terms “very” and “large” in the context of a machine learning model are not defined by the claims, the specification does not provide a definite standard for ascertaining the requisite degree, and the term “very large” is subjective and could be interpreted to read on a model of any size. One of ordinary skill in the art would therefore not be reasonably apprised of the scope of the invention, rendering the claims indefinite.
For examination purposes, examiner has interpreted “very large machine learning model” to be “machine learning model”.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1
According to the first part of the analysis, in the instant case, claims 1-10 are directed to a method, and claims 11-20 are directed to a product. Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).
Independent claims:
Step 2A, Prong 1
Following the determination of whether or not the claims fall within one of the four categories (Step 1), it must be determined whether the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity) (Step 2A, Prong 1). In this case, the claims are determined to recite a judicial exception, as explained below.
Regarding claims 1 and 11, these claims recite
selecting layers from a machine learning model,
comparing the metadata from the model with the metadata from the compressed machine learning model; and
determining whether a behavior of the compressed machine learning model is within a threshold value of a behavior of the model based on the comparison.
These steps of selection, comparison, and determination appear to be practically performable in the human mind and are understood to be a recitation of a mental process.
Step 2A, Prong 2
Regarding claims 1 and 11, this judicial exception is not integrated into a practical application.
Regarding claim 11, the judicial exception is not integrated into a practical application. In particular, the claim recites a memory and a processor for determining whether a behavior of the compressed machine learning model is within a threshold value of a behavior of the model.
The memory and processor are recited at a high level of generality and so generically that they represent no more than mere instructions to apply the judicial exception on a computer (see MPEP 2106.05(f)). These limitations can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of a computer (see MPEP 2106.05(h)).
Regarding claims 1 and 11, these claims further recite
compressing the selected layers to generate a compressed model that corresponds to the machine learning model (this limitation recites compressing model layers at a high level of generality, i.e., a high-level recitation of compression of layers of a generic machine learning model; this amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)), and
generating metadata from a compressed machine learning model (this amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) - Examiner’s note: high-level application of a generic model with generic compressed layers to generate metadata).
The additional elements identified above, alone or in combination, do not integrate the judicial exception into a practical application, as they are generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea identified above.
Step 2B
Regarding claims 1 and 11, these claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Regarding claims 1 and 11, these claims further recite
compressing the selected layers to generate a compressed model that corresponds to the machine learning model (this limitation recites compressing model layers at a high level of generality, i.e., a high-level recitation of compression of layers of a generic machine learning model; this amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)), and
generating metadata from a compressed machine learning model (this amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) - Examiner’s note: high-level application of a generic model with generic compressed layers to generate metadata).
The additional elements identified above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea identified above.
Therefore the claim limitations, taken either alone or in combination, fail to provide an inventive concept. Thus the claims are not patent eligible.
Step 2A, Prong 1 Dependent Claims
Regarding claims 2 and 12, these claims recite comparing the metadata from the machine learning model and the metadata from the compressed machine learning model (these steps appear to be practically performable in the human mind and are understood to be a recitation of a mental process).
Regarding claims 5 and 15, these claims recite determining a change in a codebase of the machine learning model (these steps appear to be practically performable in the human mind and are understood to be a recitation of a mental process).
Regarding claims 7 and 17, these claims recite selecting layers from the machine learning model based on a pre-defined rule for a class of the machine learning model (these steps appear to be practically performable in the human mind and are understood to be a recitation of a mental process).
Regarding claims 9 and 19, these claims recite determining whether a behavior of the compressed model is within a threshold value further comprises determining whether a behavior of the model is within the threshold of a behavior of the machine learning model or is similar to the behavior of the compressed model prior to a change in a codebase (these steps for determining whether behavior is within a threshold appear to be practically performable in the human mind and are understood to be a recitation of a mental process).
Step 2A, Prong 2 Dependent Claims
Regarding claims 2 and 12, these claims recite
automatically performing unit tests to test the selected layers that have been compressed in the compressed model...performing the unit tests on the ...information... from the machine learning model and the ...information... from the compressed machine learning model (this limitation recites using unit tests at a high level of generality; this amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)),
metadata from the machine learning model and the metadata from the compressed machine learning model (these limitations appear to be directed to the specification of data to be used for the unit tests and are understood to be generally linking the use of the judicial exception to a particular technological environment or field of use, which is not indicative of integration into a practical application - see MPEP 2106.05(h)).
Regarding claims 3 and 13, these claims recite wherein the unit tests include inner model metric unit tests, output metric unit tests, and/or evolution metric unit tests (this limitation recites using different types of tests at a high level of generality; this amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)).
Regarding claims 4 and 14, these claims recite wherein unselected layers in the machine learning model are unchanged in the compressed model (this limitation recites compressing model layers at a high level of generality; this amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)).
Regarding claims 5 and 15, these claims recite wherein the change is at least one of a data ETL (Extract-Transform-Load) change, a library update, a library rollback, a codebase change, a hardware change, a pipeline change, a dataset change, or a combination thereof (these limitations appear to be directed to the specification of information that indicates a change and are understood to be generally linking the use of the judicial exception to a particular technological environment or field of use, which is not indicative of integration into a practical application - see MPEP 2106.05(h)).
Regarding claims 6 and 16, these claims recite training and validating the compressed model (this limitation recites training and validating a compressed model at a high level of generality; this amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)).
Regarding claims 7 and 17, these claims recite automatically generating the compressed model based on the layers selected by the rule (this limitation recites generating a compressed model based on the layer selection of the abstract idea; the compression is recited at a high level of generality, i.e., a high-level recitation of compression of layers of a generic machine learning model; this amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)).
Regarding claims 8 and 18, these claims recite wherein subsequent compression operations that select layers that were previously compressed use the previously compressed layers (this limitation recites incrementally compressing model layers at a high level of generality; this amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)).
Regarding claims 10 and 20, these claims recite wherein the machine learning model is a very large machine learning model (this limitation recites using a generic very large machine learning model at a high level of generality; this amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)).
Step 2B Dependent Claims
Regarding claims 2 and 12, these claims recite comparing the metadata from the machine learning model and the metadata from the compressed machine learning model (these steps appear to be practically performable in the human mind and are understood to be a recitation of a mental process).
Regarding claims 5 and 15, these claims recite determining a change in a codebase of the machine learning model (these steps appear to be practically performable in the human mind and are understood to be a recitation of a mental process).
Regarding claims 7 and 17, these claims recite selecting layers from the machine learning model based on a pre-defined rule for a class of the machine learning model (these steps appear to be practically performable in the human mind and are understood to be a recitation of a mental process).
Regarding claims 9 and 19, these claims recite determining whether a behavior of the compressed model is within a threshold value further comprises determining whether a behavior of the model is within the threshold of a behavior of the machine learning model or is similar to the behavior of the compressed model prior to a change in a codebase (these steps for determining whether behavior is within a threshold appear to be practically performable in the human mind and are understood to be a recitation of a mental process).
Regarding claims 2 and 12, these claims recite automatically performing unit tests to test the selected layers that have been compressed in the compressed model...performing the unit tests on the ...information... from the machine learning model and the ...information... from the compressed machine learning model (this limitation recites using unit tests at a high level of generality; this amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)),
metadata from the machine learning model and the metadata from the compressed machine learning model (these limitations appear to be directed to the specification of data to be used for the unit tests and are understood to be generally linking the use of the judicial exception to a particular technological environment or field of use, which is not indicative of integration into a practical application - see MPEP 2106.05(h)).
Regarding claims 3 and 13, these claims recite wherein the unit tests include inner model metric unit tests, output metric unit tests, and/or evolution metric unit tests (this limitation recites using different types of tests at a high level of generality; this amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)).
Regarding claims 4 and 14, these claims recite wherein unselected layers in the machine learning model are unchanged in the compressed model (this limitation recites compressing model layers at a high level of generality; this amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)).
Regarding claims 5 and 15, these claims recite wherein the change is at least one of a data ETL (Extract-Transform-Load) change, a library update, a library rollback, a codebase change, a hardware change, a pipeline change, a dataset change, or a combination thereof (these limitations appear to be directed to the specification of information that indicates a change and are understood to be generally linking the use of the judicial exception to a particular technological environment or field of use, which is not indicative of integration into a practical application - see MPEP 2106.05(h)).
Regarding claims 6 and 16, these claims recite training and validating the compressed model (this limitation recites training and validating a compressed model at a high level of generality; this amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)).
Regarding claims 7 and 17, these claims recite automatically generating the compressed model based on the layers selected by the rule (this limitation recites generating a compressed model based on the layer selection of the abstract idea; the compression is recited at a high level of generality, i.e., a high-level recitation of compression of layers of a generic machine learning model; this amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)).
Regarding claims 8 and 18, these claims recite wherein subsequent compression operations that select layers that were previously compressed use the previously compressed layers (this limitation recites incrementally compressing model layers at a high level of generality; this amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)).
Regarding claims 10 and 20, these claims recite wherein the machine learning model is a very large machine learning model (this limitation recites using a generic very large machine learning model at a high level of generality; this amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)).
Therefore, the dependent claim limitations, taken either alone or in combination, fail to provide an inventive concept, and the dependent claims are likewise not patent eligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 4, 7, 9, 10, 11, 14, 17, 19, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Fan (US 20240232686 A1).
Regarding claim 1, Fan teaches a method, comprising (Fan [4-6] method to generate compressed model):
selecting layers from a machine learning model (Fan [75] layers to be compressed are selected based on compression scheme(s), Fan [76] only layers selected based on selected compression scheme(s) are compressed in model);
compressing the selected layers to generate a compressed model that corresponds to the machine learning model; (Fan [75] selected layers are compressed based on compression scheme(s), Fan [76] only layers selected based on selected compression scheme(s) are compressed in model);
generating metadata from a compressed machine learning model; comparing the metadata from the model with the metadata from the compressed machine learning model (Fan [67, 68, 71, 79, 83] cost function compares performance metrics (metadata) of compressed vs uncompressed models); and
determining whether a behavior of the compressed machine learning model is within a threshold value of a behavior of the model based on the comparison (Fan [71] parameters specify the amount of acceptable change in performance metrics).
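For illustration only and not as part of the record, the sequence recited in claim 1 as mapped above (selecting layers, compressing them, generating and comparing metadata, and applying a threshold) can be sketched in Python. Every name below, and the use of aggregate parameter count as a stand-in for model behavior metadata, is hypothetical and is not drawn from the application or from Fan.

```python
def compress_layer(layer):
    # Hypothetical compression scheme: halve the layer's parameter count.
    return {**layer, "params": layer["params"] // 2}

def generate_metadata(layers):
    # Hypothetical "metadata": an aggregate statistic over the layers.
    return {"total_params": sum(l["params"] for l in layers)}

def behavior_within_threshold(model, selected_names, threshold):
    # Compress only the selected layers (unselected layers pass through
    # unchanged, cf. claims 4 and 14), generate metadata from both models,
    # compare, and determine whether the change is within a threshold value.
    compressed_model = [
        compress_layer(l) if l["name"] in selected_names else l
        for l in model
    ]
    before = generate_metadata(model)
    after = generate_metadata(compressed_model)
    delta = abs(before["total_params"] - after["total_params"]) / before["total_params"]
    return delta <= threshold
```

On a toy two-layer model in which only one 100-parameter layer is compressed, the relative metadata change is 0.25, so the determination passes at a 0.5 threshold and fails at a 0.1 threshold.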
Regarding claim 4, Fan teaches the invention as claimed in claim 1 above.
Fan further teaches wherein unselected layers in the machine learning model are unchanged in the compressed model (Fan [76] only layers selected based on selected compression scheme(s) are compressed).
Regarding claim 7, Fan teaches the invention as claimed in claim 1 above.
Fan further teaches selecting layers from the machine learning model based on a pre-defined rule for a class of the machine learning model and automatically generating the compressed model based on the layers selected by the rule (Fan [75] selected layers are compressed based on compression scheme(s). Fan [65] compression schemes may each be for specific model portions (class)).
Regarding claim 9, Fan teaches the invention as claimed in claim 1 above.
Fan further teaches wherein determining whether a behavior of the compressed model is within a threshold value further comprises determining whether a behavior of the model is within the threshold of a behavior of the machine learning model or is similar to the behavior of the compressed model prior to a change in a codebase (Fan [68, 71] parameters specify the amount of acceptable change in performance metrics between compressed and uncompressed models, Fan [5, 6, 28, 29, 34, 85] models may be implemented as software components and layers may be model components).
Regarding claim 10, Fan teaches the invention as claimed in claim 1 above.
Fan further teaches wherein the machine learning model is a very large machine learning model (Fan [24] model may be a large machine learning model).
Claim 11 is directed towards a medium storing instructions similar in scope to the instructions performed by the method of claim 1, and is rejected under the same rationale. Fan further teaches a non-transitory storage medium having stored therein instructions that are executable by one or more hardware processors to perform operations (Fan [4-6]).
Claims 14, 17, 19, and 20 depend from claim 11 above, are directed towards a medium storing instructions similar in scope to the instructions performed by the method of claims 4, 7, 9, and 10, respectively, and are rejected under the same rationale.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2, 3, 12, 13 are rejected under 35 U.S.C. 103 as being unpatentable over Fan (US 20240232686 A1), in view of Fraser “EvoSuite: Automatic Test Suite Generation for Object-Oriented Software” dated 2011.
Fraser was cited in the IDS dated 9/8/2025.
Regarding claim 2, Fan teaches the invention as claimed in claim 1 above.
Fan further teaches selected layers that have been compressed in the compressed model.... comparing the metadata from the machine learning model and the metadata from the compressed machine learning model...(Fan [5, 6, 28, 29, 34, 85] models may be implemented as software components (classes) and layers may be model components (classes), Fan [67, 68, 71, 79, 83] cost function compares performance metrics (metadata) of compressed vs uncompressed model, Fan [71] parameters specify the amount of acceptable change in performance metrics).
Fan does not specifically teach automatically performing unit tests to test the selected layers that have been compressed in the compressed model, wherein comparing the metadata from the machine learning model and the metadata from the compressed machine learning model comprises performing the unit tests on the metadata from the machine learning model and the metadata from the compressed machine learning model.
However, Fraser teaches automatically performing unit tests to test the selected components that have been ...changed in the modified tool..., wherein comparing the metadata from the ...tool... and the metadata from the ...modified tool... comprises performing the unit tests on the metadata from the ...tool... and the metadata from the ...modified tool... (Fraser Abstract, performing unit testing allows adding small and effective sets of assertions that concisely summarize the current behavior, detect deviations from expected behavior, and capture the current behavior in order to protect against future defects breaking this behavior; Fraser Introduction, unit tests may be performed for objects (components) of a modified tool using automation; Fraser Section 3, return values for tool objects and modified tool objects are compared to determine failure).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the concept taught by Fraser of automatically performing unit tests to test the selected components that have been ...changed in the modified tool..., wherein comparing the metadata from the ...tool... and the metadata from the ...modified tool... comprises performing the unit tests on the metadata from the ...tool... and the metadata from the ...modified tool..., into the invention suggested by Fan; since both inventions are directed towards determining the effect of component modifications on a tool, incorporating the teaching of Fraser into the invention suggested by Fan would provide the added advantage of adding small and effective sets of assertions that concisely summarize the current behavior, detect deviations from expected behavior, and capture the current behavior in order to protect against future defects breaking this behavior, and the combination would perform with a reasonable expectation of success (Fraser Abstract, Introduction, Section 3).
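For illustration only, a Fraser-style unit test over model metadata (assertions that capture current behavior and flag deviations after compression) might look like the following sketch. The metadata function, tolerance value, and per-layer output data are all hypothetical and are not drawn from Fraser, Fan, or the application.

```python
import unittest

def generate_metadata(model_outputs):
    # Hypothetical metadata: mean output per layer, rounded for stability.
    return {name: round(sum(v) / len(v), 3) for name, v in model_outputs.items()}

class CompressedLayerTests(unittest.TestCase):
    # Assertions that summarize current behavior and detect deviations,
    # in the spirit of Fraser's captured-behavior assertions.
    TOLERANCE = 0.05

    def setUp(self):
        # Hypothetical per-layer outputs before and after compression.
        self.original = {"layer1": [0.90, 0.92], "layer2": [0.80, 0.84]}
        self.compressed = {"layer1": [0.89, 0.91], "layer2": [0.79, 0.83]}

    def test_metadata_within_tolerance(self):
        before = generate_metadata(self.original)
        after = generate_metadata(self.compressed)
        for name in before:
            # Fail if any layer's metadata deviates beyond the tolerance.
            self.assertLessEqual(abs(before[name] - after[name]), self.TOLERANCE)
```

Such a test would pass on the sample data above (every per-layer deviation is 0.01) and fail if a compressed layer's metadata drifted by more than the tolerance.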
Regarding claim 3, Fan and Fraser teach the invention as claimed in claim 2 above.
Fan does not specifically teach wherein the unit tests include inner model metric unit tests, output metric unit tests, and/or evolution metric unit tests.
However Fraser teaches wherein the unit tests include inner model metric unit tests, output metric unit tests, and/or evolution metric unit tests (Fraser Introduction last para and Sections 2, 3 and 4, tests can evaluate classes, objects, changes in return values and mutation testing, whole test suites can target entire coverage criterion).
Claims 12 and 13 depend from claim 11 above, are directed towards a medium storing instructions similar in scope to the instructions performed by the method of claims 2 and 3, respectively, and are rejected under the same rationale.
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Fan (US 20240232686 A1), in view of Hasegawa (US 11544352 B2).
Hasegawa was cited in the IDS dated 3/17/2025.
Regarding claim 5, Fan teaches the invention as claimed in claim 4 above.
Fan does not specifically teach determining a change in a codebase of the machine learning model, wherein the change is at least one of a data ETL (Extract-Transform-Load) change, a library update, a library rollback, a codebase change, a hardware change, a pipeline change, a dataset change or combination thereof.
However, Hasegawa teaches determining a change in a codebase of the machine learning model, wherein the change is ... a dataset change... (Hasegawa Col 3, the invention detects changes in the model and then determines whether there has been fraud; Hasegawa Col 4, lines 4-16, the change may be a data change).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the concept taught by Hasegawa of determining a change in a codebase of the machine learning model, wherein the change is ... a dataset change, into the invention suggested by Fan; since both inventions are directed towards determining model performance after a change, incorporating the teaching of Hasegawa into the invention suggested by Fan would provide the added advantage of determining whether there is fraud by analyzing a data change, and the combination would perform with a reasonable expectation of success (Hasegawa Col 3; Col 4, lines 4-16).
Claim 15 depends from claim 11 above, is directed towards a medium storing instructions similar in scope to the instructions performed by the method of claim 5, and is rejected under the same rationale.
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Fan (US 20240232686 A1), in view of Dey (US 20220284293 A1).
Regarding claim 6, Fan teaches the invention as claimed in claim 1 above.
Fan further teaches training ... the compressed model (Fan [78-80] compressed model may be trained).
Fan does not specifically teach ... validating the compressed model
However Dey teaches ... validating the compressed model (Dey [60] compressed model may be validated with test data, resulting in betterment of accuracy and latency over ... handcrafted model)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the concept taught by Dey of ... validating the compressed model, into the invention suggested by Fan, since both inventions are directed towards generating a compressed model, and incorporating the teaching of Dey into the invention suggested by Fan would provide the added advantage of improved accuracy and latency over a ... handcrafted model, and the combination would perform with a reasonable expectation of success (Dey [60]).
Claim 16 is dependent on claim 11 above, is directed towards a medium storing instructions similar in scope to the steps performed by the method of claim 6, and is rejected under the same rationale.
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Fan (US 20240232686 A1), in view of Xie (US 20190370658 A1).
Regarding claim 8, Fan teaches the invention as claimed in claim 1 above.
Fan does not specifically teach wherein subsequent compression operations that select layers that were previously compressed use the previously compressed layers.
However, Xie teaches wherein subsequent compression operations that select layers that were previously compressed use the previously compressed layers (Xie [19], [24], [26]: the model may be compressed by incrementally compressing layers and retaining compressed layers from prior increments that satisfy performance criteria).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the concept taught by Xie of wherein subsequent compression operations that select layers that were previously compressed use the previously compressed layers, into the invention suggested by Fan, since both inventions are directed towards compressing a model by compressing its layers, and incorporating the teaching of Xie into the invention suggested by Fan would provide the added advantage of optimizing performance by retaining compression that previously satisfied performance criteria, and the combination would perform with a reasonable expectation of success (Xie [19], [24], [26]).
Claim 18 is dependent on claim 11 above, is directed towards a medium storing instructions similar in scope to the steps performed by the method of claim 8, and is rejected under the same rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANCHITA ROY, whose telephone number is (571) 272-5310. The examiner can normally be reached Monday-Friday, 12:00-8:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed can be reached at (571) 272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
SANCHITA ROY
Primary Examiner
Art Unit 2146
/SANCHITA ROY/Primary Examiner, Art Unit 2146