Prosecution Insights
Last updated: April 19, 2026
Application No. 18/466,629

METHOD AND SYSTEM FOR LIGHTENING MODEL FOR OPTIMIZING TO EQUIPMENT-FRIENDLY MODEL

Non-Final OA: §101, §103, §112
Filed: Sep 13, 2023
Examiner: LEVEL, BARBARA HENRY
Art Unit: 2142
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Nota Inc.
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 72% (236 granted / 330 resolved), above average (+16.5% vs TC avg)
Interview Lift: +26.9% in resolved cases with an interview
Typical Timeline: 2y 8m average prosecution; 16 applications currently pending
Career History: 346 total applications across all art units

Statute-Specific Performance

§101: 17.2% (-22.8% vs TC avg)
§103: 42.5% (+2.5% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 330 resolved cases.

Office Action

Rejections under §101, §103, and §112
DETAILED ACTION

This correspondence is responsive to the Application filed on September 13, 2023. Claims 1-20 are pending in the case, with claims 1, 12, 15 and 16 in independent form.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in KR (KR10-2023-0095333) on July 21, 2023. It is noted, however, that applicant has not filed a certified copy of the KR (KR10-2023-0095333) application as required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on February 5, 2026 is being considered by the examiner. The IDS filed February 5, 2026 cited NPL for the Office Action issued January 30, 2026 in corresponding Korean Application 10-2023-0095333, but the NPL document submitted by Applicant includes no English translation. However, the Examiner was able to access and consider the English translation of the Office Action issued January 30, 2026 in corresponding Korean Application 10-2023-0095333 through the Global Dossier.

Summary of Detailed Action

Claims 5 and 12 are objected to regarding informalities. Claims 12-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite. Claim 13 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite. Claim 20 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite. Claims 1-4, 7, 10, 12-13, and 15-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1, 7, 12-13 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. in view of Samek et al.
Claim Objections

Claims 5 and 12 are objected to because of the following informalities: In claim 5, line 2, change "cpmpressed" to "compressed". In claim 12, line 5, the incorrect grammar "of the second model to which unstructured pruning is already applied" should be "of the second model to which unstructured pruning has already been applied". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 12-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Independent claim 12 recites processing inference for input data using "a first model compressed by determining a filter for applying structured pruning among filters of a second model based on criteria and sparsity for each filter of the second model to which unstructured pruning is already applied and by removing the determined filter from the second model." It is not at all clear how a first model is compressed by structured pruning of a second model to which unstructured pruning has already been applied.
For examination purposes, claim 12 is interpreted as processing inference for input data using a first model (the first model to which unstructured pruning is applied becomes the second model to which unstructured pruning is already applied) compressed by structured pruning based on criteria and sparsity for each filter of the second model to which unstructured pruning is already applied, and by removing the determined filter from the second model. Applicant may cancel claim 12 or amend claim 12 to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claims 13 and 14 depend from claim 12 and are rejected for the same reasons discussed above with respect to their parent claim 12.

Claim 13 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 13 recites the limitation "the model". There is insufficient antecedent basis for this limitation in the claim. Claim 13 depends from claim 12, which only recites a first model and a second model.

Claim 20 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 20 depends from claim 16 and recites to "determine a filter for value transfer based on the criteria" among filters included in the pruning target, for the layer. It is not clear what "a filter for value transfer" means.
It is further unclear where value(s) would be transferred to, how value(s) would be transferred, when value(s) would be transferred, which value(s) would be transferred, or whether all value(s) are transferred. It is yet further unclear how value(s) for transfer are based on criteria, much less how to determine a filter for value transfer based on the criteria. Applicant may cancel claim 20 or amend claim 20 to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4, 7, 10, 12-13, and 15-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim(s) recite(s) a model compression method comprising deriving criteria and sparsity for each filter of the model, and determining a filter for applying structured pruning among filters of the model based on the criteria and the sparsity, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment or opinion, or by a human using pen and paper. MPEP 2106.04(a)(2)(III). This judicial exception is not integrated into a practical application, and the claim(s) do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Claims 1-20 recite one of the four statutory categories of patentable subject matter and belong to the statutory class(es) of a process (method claims 1-14), a machine (system/apparatus claims 16-20), and an article of manufacture (non-transitory computer-readable medium claim 15).

Claim 1 recites a method, thus a process, one of the four statutory categories of patentable subject matter. However, claim 1 further recites a model compression method comprising: deriving criteria and sparsity for each filter of the model; and determining a filter for applying structured pruning among filters of the model based on the criteria and the sparsity, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment or opinion, or by a human using pen and paper. MPEP 2106.04(a)(2)(III). The claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of: "of a computer device comprising at least one processor" (an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. See also MPEP 2106.05(f), MPEP 2106.04(d), 2019 Guidance, 84 FR 50 at 55, footnote 30.); "receiving, by the at least one processor, a model to which unstructured pruning is applied" (an additional element of extra-solution activity that courts have identified as well-understood, routine and conventional activity for receiving or transmitting data over a network, e.g., using the internet to gather data. See also MPEP 2106.05(d)(II), MPEP 2106.05(g), 2019 Guidance, 84 FR 50 at 55, footnote 31.);
"by the at least one processor" (an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. See also MPEP 2106.05(f), MPEP 2106.04(d), 2019 Guidance, 84 FR 50 at 55, footnote 30.); and "generating a compressed model by applying the structured pruning to the model based on the determined filter" (an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. See also MPEP 2106.05(f), MPEP 2106.04(d), 2019 Guidance, 84 FR 50 at 55, footnote 30.). Thus, the claim is directed to the abstract idea.

Further, the additional elements, alone or in combination, do not provide significantly more than the abstract idea itself, because implementation on a computer (MPEP 2106.05(f)) cannot provide significantly more, transmitting data over a network is well-understood, routine and conventional (MPEP 2106.05(d)), and the combination of additional elements does not provide an inventive concept. Thus, the claim is ineligible.

Claim 2, dependent on claim 1, recites only additional mental processes, wherein the determining comprises: generating a first list of filters for each layer by ordering filters included in a corresponding layer based on the criteria for each of the layers included in the model; and generating a second list of filters for each layer by ordering filters included in the corresponding layer based on the sparsity for each of the layers, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment or opinion, or by a human using pen and paper. MPEP 2106.04(a)(2)(III).
Claim 3, dependent on claim 2, recites only additional mental processes, wherein the determining comprises: excluding, from a final pruning target, a filter that is excluded from a pruning target based on both the criteria and the sparsity among the filters of the first list and the second list for the same layer of the model; and determining, as the final pruning target, a filter that is set as the pruning target based on both the criteria and the sparsity among the filters of the first list and the second list for the same layer of the model, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment or opinion, or by a human using pen and paper. MPEP 2106.04(a)(2)(III).

Claim 4, dependent on claim 3, recites only additional mental processes, wherein the determining further comprises determining, as filters for value transfer, a first filter that is excluded from the pruning target based on the criteria and determined as the pruning target based on the sparsity and a second filter that is excluded from the pruning target based on the sparsity and determined as the pruning target based on the criteria, among the filters of the first list and the second list for the same layer of the model, and an order of the first filter in the first list and an order of the second filter in the second list are the same, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment or opinion, or by a human using pen and paper. MPEP 2106.04(a)(2)(III).
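For readers tracing the claim language, the selection logic recited in claims 2-4 (two per-layer orderings whose agreement decides the final pruning target) can be sketched in Python. This is a hypothetical illustration, not the applicant's implementation: the use of L1 norm as the importance criteria, zero-fraction as the sparsity measure, and the `keep_ratio` parameter are all assumptions of this sketch.

```python
import numpy as np

def determine_pruning_targets(layer_filters, keep_ratio=0.5):
    """For one layer: select filters that both the criteria ordering and the
    sparsity ordering agree should be pruned (cf. claims 2-3)."""
    n = len(layer_filters)
    n_prune = n - int(np.ceil(n * keep_ratio))
    # Criteria (assumed): L1 norm of each filter; smaller = less important.
    criteria = [np.abs(f).sum() for f in layer_filters]
    # Sparsity (assumed): fraction of weights zeroed by prior unstructured pruning.
    sparsity = [np.mean(f == 0) for f in layer_filters]
    # First list: filters ordered by criteria (least important first).
    by_criteria = sorted(range(n), key=lambda i: criteria[i])
    # Second list: filters ordered by sparsity (sparsest first).
    by_sparsity = sorted(range(n), key=lambda i: -sparsity[i])
    target_by_criteria = set(by_criteria[:n_prune])
    target_by_sparsity = set(by_sparsity[:n_prune])
    # Both orderings agree "prune" -> final pruning target.
    final_target = target_by_criteria & target_by_sparsity
    # Both orderings agree "keep" -> excluded from the final pruning target.
    keep = set(range(n)) - (target_by_criteria | target_by_sparsity)
    # Disagreement between the two lists -> value-transfer candidates (cf. claim 4).
    value_transfer = target_by_criteria ^ target_by_sparsity
    return final_target, keep, value_transfer
```

Filters on which the two orderings disagree are the "filters for value transfer" that claim 4 singles out; everything else in the sketch follows directly from the claim text.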
Claim 7, dependent on claim 1, does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of: wherein the generating of the compressed model comprises removing the determined filter from the model (an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. See also MPEP 2106.05(f), MPEP 2106.04(d), 2019 Guidance, 84 FR 50 at 55, footnote 30.).

Claim 10, dependent on claim 1, recites only additional mental processes, wherein the determining comprises: determining a filter included in a pruning target based on the sparsity among filters included in a layer of the model; and determining a filter for value transfer based on the criteria among filters included in the pruning target, for the layer, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment or opinion, or by a human using pen and paper. MPEP 2106.04(a)(2)(III).

Claim 12 recites a method, thus a process, one of the four statutory categories of patentable subject matter. However, claim 12 further recites an inference method comprising processing inference for input data using a first model compressed by determining a filter for applying structured pruning among filters of a second model based on criteria and sparsity for each filter of the second model to which unstructured pruning is already applied, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment or opinion, or by a human using pen and paper. MPEP 2106.04(a)(2)(III).
The claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of: "of a computer device comprising at least one processor" (an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. See also MPEP 2106.05(f), MPEP 2106.04(d), 2019 Guidance, 84 FR 50 at 55, footnote 30.); "processing inference for input data using a first model" (this additional element amounts to merely the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (MPEP 2106.05(f)); it also amounts to no more than generally linking the use of the judicial exception to a particular technological environment or field of use, which does not meaningfully limit the claim (MPEP 2106.05(h))); and "by removing the determined filter from the second model" (this additional element likewise amounts to merely the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea on a computer (MPEP 2106.05(f)), and to no more than generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))). Thus, the claim is directed to the abstract idea.
Further, the additional elements, alone or in combination, do not provide significantly more than the abstract idea itself, because implementation on a computer (MPEP 2106.05(f)) cannot provide significantly more, transmitting data over a network is well-understood, routine and conventional (MPEP 2106.05(d)), generally linking the use of the judicial exception to a particular technological field of use does not meaningfully limit the claims (MPEP 2106.04(d)), and the combination of additional elements does not provide an inventive concept. Thus, the claim is ineligible.

Claim 13, dependent on claim 12, recites only additional mental processes, wherein a filter that is determined as a pruning target based on each of the criteria and the sparsity for the same layer of the model is removed from the second model, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment or opinion, or by a human using pen and paper. MPEP 2106.04(a)(2)(III).

Claim 15 recites a non-transitory computer-readable recording medium, thus an article of manufacture, one of the four statutory categories of patentable subject matter. However, claim 15 further recites to perform the method of claim 1, which recites a model compression method comprising: deriving criteria and sparsity for each filter of the model; and determining a filter for applying structured pruning among filters of the model based on the criteria and the sparsity, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment or opinion, or by a human using pen and paper. MPEP 2106.04(a)(2)(III).
The claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of: "storing instructions that when executed by a processor, cause the processor to perform" (an additional element of extra-solution activity that courts have identified as well-understood, routine and conventional activity for receiving or transmitting data over a network, e.g., using the internet to gather data (MPEP 2106.05(d)(II), MPEP 2106.05(g), 2019 Guidance, 84 FR 50 at 55, footnote 31); also an additional element that merely recites the words "apply it" (or an equivalent) with the judicial exception, merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f), MPEP 2106.04(d), 2019 Guidance, 84 FR 50 at 55, footnote 30)); "of a computer device comprising at least one processor" (an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. See also MPEP 2106.05(f), MPEP 2106.04(d), 2019 Guidance, 84 FR 50 at 55, footnote 30.); "receiving, by the at least one processor, a model to which unstructured pruning is applied" (an additional element of extra-solution activity that courts have identified as well-understood, routine and conventional activity for receiving or transmitting data over a network, e.g., using the internet to gather data. See also MPEP 2106.05(d)(II), MPEP 2106.05(g), 2019 Guidance, 84 FR 50 at 55, footnote 31.);
"by the at least one processor" (an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. See also MPEP 2106.05(f), MPEP 2106.04(d), 2019 Guidance, 84 FR 50 at 55, footnote 30.); and "generating a compressed model by applying the structured pruning to the model based on the determined filter" (an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. See also MPEP 2106.05(f), MPEP 2106.04(d), 2019 Guidance, 84 FR 50 at 55, footnote 30.). Thus, the claim is directed to the abstract idea.

Further, the additional elements, alone or in combination, do not provide significantly more than the abstract idea itself, because implementation on a computer (MPEP 2106.05(f)) cannot provide significantly more, transmitting data over a network is well-understood, routine and conventional (MPEP 2106.05(d)), and the combination of additional elements does not provide an inventive concept. Thus, the claim is ineligible.

Claim 16 recites a computer device, thus a machine, one of the four statutory categories of patentable subject matter. However, claim 16 further recites derive criteria and sparsity for each filter of the model to which unstructured pruning is already applied, and determine a filter for applying structured pruning among filters of the model based on the criteria and the sparsity, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment or opinion, or by a human using pen and paper. MPEP 2106.04(a)(2)(III).
The claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of: "a computer device comprising: at least one processor configured to execute computer-readable instructions, wherein the at least one processor is configured to" (an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. See also MPEP 2106.05(f), MPEP 2106.04(d), 2019 Guidance, 84 FR 50 at 55, footnote 30.); "receive a model to which unstructured pruning is applied" (an additional element of extra-solution activity that courts have identified as well-understood, routine and conventional activity for receiving or transmitting data over a network, e.g., using the internet to gather data. See also MPEP 2106.05(d)(II), MPEP 2106.05(g), 2019 Guidance, 84 FR 50 at 55, footnote 31.); and "generate a compressed model by applying the structured pruning to the model based on the determined filter" (an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. See also MPEP 2106.05(f), MPEP 2106.04(d), 2019 Guidance, 84 FR 50 at 55, footnote 30.). Thus, the claim is directed to the abstract idea.

Further, the additional elements, alone or in combination, do not provide significantly more than the abstract idea itself, because implementation on a computer (MPEP 2106.05(f)) cannot provide significantly more, transmitting data over a network is well-understood, routine and conventional (MPEP 2106.05(d)), and the combination of additional elements does not provide an inventive concept. Thus, the claim is ineligible.
Claim 17, dependent on claim 16, recites additional mental processes, wherein, to determine the filter for applying the structured pruning, generate a first list of filters for each layer by ordering filters included in a corresponding layer based on the criteria for each of the layers included in the model, and generate a second list of filters for each layer by ordering filters included in a corresponding layer based on the sparsity for each of the layers, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment or opinion, or by a human using pen and paper. MPEP 2106.04(a)(2)(III). The claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of: "the at least one processor is configured to" (an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. See also MPEP 2106.05(f), MPEP 2106.04(d), 2019 Guidance, 84 FR 50 at 55, footnote 30.).

Claim 18, dependent on claim 17, recites additional mental processes, wherein, to determine the filter for applying the structured pruning, exclude, from a final pruning target, a filter that is excluded from a pruning target based on both the criteria and the sparsity among the filters of the first list and the second list for the same layer of the model, and determine, as the final pruning target, a filter that is set as the pruning target based on both the criteria and the sparsity among the filters of the first list and the second list for the same layer of the model, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment or opinion, or by a human using pen and paper. MPEP 2106.04(a)(2)(III).
The claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of: "the at least one processor is configured to" (an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. See also MPEP 2106.05(f), MPEP 2106.04(d), 2019 Guidance, 84 FR 50 at 55, footnote 30.).

Claim 19, dependent on claim 18, recites additional mental processes, wherein, to determine the filter for the structured pruning, determine, as filters for value transfer, a first filter that is excluded from the pruning target based on the criteria and determined as the pruning target based on the sparsity and a second filter that is excluded from the pruning target based on the sparsity and determined as the pruning target based on the criteria, among the filters of the first list and the second list for the same layer of the model, and an order of the first filter in the first list and an order of the second filter in the second list are the same, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment or opinion, or by a human using pen and paper. MPEP 2106.04(a)(2)(III). The claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of: "the at least one processor is configured to" (an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. See also MPEP 2106.05(f), MPEP 2106.04(d), 2019 Guidance, 84 FR 50 at 55, footnote 30.).
Claim 20, dependent on claim 16, recites additional mental processes, wherein, to determine the filter for the structured pruning, determine a filter included in a pruning target based on the sparsity among filters included in a layer of the model, and determine a filter for value transfer based on the criteria among filters included in the pruning target, for the layer, which are mental processes or concepts that can be performed in the human mind, including observation, evaluation, judgment or opinion, or by a human using pen and paper. MPEP 2106.04(a)(2)(III). The claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of: "the at least one processor is configured to" (an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. See also MPEP 2106.05(f), MPEP 2106.04(d), 2019 Guidance, 84 FR 50 at 55, footnote 30.).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 7, 12-13 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (CN 117197524 A, filed July 6, 2023), hereinafter Wang, in view of Samek et al. (Pub. No. US 2022/0114455 A1, published April 14, 2022), hereinafter Samek.

Regarding claim 1, Wang teaches: A model compression method of a computer device comprising at least one processor, the model compression method comprising (i.e., "Aiming at said technical problem, the invention claims an image classification method of lightweight network structure based on pruning, which provides a new pruning rule to compress the network by combining the lightweight model MobileNetV3 with the structured pruning" (a model compression method). "The method combines the sparse value of each filter with the BN layer scaling factor so as to judge the importance of the whole channel, namely the weight W." Wang, Content of the Invention, sections 1-2, pages 2, 3-5.): receiving, by the at least one processor, a model to which unstructured pruning is applied (i.e., "An image classification method based on lightweight network structure of pruning, wherein the method comprises the following steps: step one: loading the pre-trained model (receiving a model) for image classification based on MobileNetV3;" Wang, Content of the Invention, sections 1-2,
pages 2, 3-5.); deriving, by the at least one processor, criteria and sparsity for each filter of the model (i.e., step two: calculating the importance index of the filter contained in each convolution layer by using the filter sparsity formula and the BN layer zooming coefficient (deriving criteria and sparsity for each filter (calculating and deriving importance criteria and sparsity formula for each filter) of the model); taking each convolution layer as a unit, trimming the filter contained in the convolution layer based on the importance index of the filter, and deleting the filter lower than the preset importance index in the convolution layer; trimming each winding layer in the model so as to obtain the whole model after pruning; Wang, Content of the Invention, sections 1-2, pages 2, 3-5; section 3, pages 5-6.); determining, by the at least one processor, a filter for applying structured pruning among filters of the model based on the criteria and the sparsity (i.e., the invention claims an image classification method of lightweight network structure based on pruning, which provides a new pruning rule to compress the network by combining the lightweight model MobileNetV3 with the structured pruning. Wang, Content of the Invention, sections 1-2, pages 2, 3-5; (1) The pruning in the present method evaluates the importance of the filter by combining the sparse value parameter of the convolution layer filter and the parameter γ of the BN layer, and prunes the network according to the importance (determining a filter for applying structured pruning among filters of the model based on the criteria (importance criteria) and the sparsity (sparse value parameter of the filter)). Specifically, the method uses two factors to calculate the importance of the filter and performs pruning operations based on the importance.
Analysis from the following two factors: Firstly, the sparse value parameter of the convolution layer filter represents the proportion of the non-zero element in the filter, which can be used as an index for measuring the importance of the filter. When the sparse value of a filter is low, it means that the filter extracts more useful characteristic information in the input data, so it has high importance; Next, the parameter γ of the BN layer is the scaling factor of the filter, representing the scaling adjustment of the filter to the output feature map. The larger gamma value means that the filter has a greater contribution to the output and therefore has a higher importance. By combining these two factors, the importance of the filter can be comprehensively evaluated. Then, the network is trimmed according to the importance, and the filter satisfying the pruning requirement and the corresponding scaling coefficient are trimmed so as to reduce the size of the network parameter and the characteristic image. Finally, a more compact network model is obtained, with higher operating efficiency and lower storage requirements. Wang, Content of the Invention, section 3, pages 5-6, sections 1-2, pages 2, 3-5.); and generating, by the at least one processor, a compressed model by applying the structured pruning to the model based on the determined filter (i.e., the invention claims an image classification method of lightweight network structure based on pruning, which provides a new pruning rule to compress the network by combining the lightweight model MobileNetV3 with the structured pruning. Wang, Content of the Invention, sections 1-2, pages 2, 3-5; (1) The pruning in the present method evaluates the importance of the filter by combining the sparse value parameter of the convolution layer filter and the parameter γ of the BN layer, and prunes the network according to the importance.
Specifically, the method uses two factors to calculate the importance of the filter and performs pruning operations based on the importance (generating a compressed model by applying the structured pruning to the model based on the determined filter). Analysis from the following two factors: Firstly, the sparse value parameter of the convolution layer filter represents the proportion of the non-zero element in the filter, which can be used as an index for measuring the importance of the filter. When the sparse value of a filter is low, it means that the filter extracts more useful characteristic information in the input data, so it has high importance; Next, the parameter γ of the BN layer is the scaling factor of the filter, representing the scaling adjustment of the filter to the output feature map. The larger gamma value means that the filter has a greater contribution to the output and therefore has a higher importance. By combining these two factors, the importance of the filter can be comprehensively evaluated. Then, the network is trimmed according to the importance, and the filter satisfying the pruning requirement and the corresponding scaling coefficient are trimmed so as to reduce the size of the network parameter and the characteristic image (generating a compressed model by applying the structured pruning to the model based on the determined filter). Finally, a more compact network model is obtained, with higher operating efficiency and lower storage requirements. Wang, Content of the Invention, section 3, pages 5-6, sections 1-2, pages 2, 3-5.).
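The two-factor rule quoted above (each filter's sparse value combined with its BN scaling factor, with filters falling below a preset importance index deleted) can be sketched in a few lines of numpy. The multiplicative combination below is a hypothetical stand-in, since the Office Action does not reproduce Wang's actual importance formula:

```python
import numpy as np

def filter_sparsity(w):
    # proportion of non-zero elements in one filter's weight tensor
    return np.count_nonzero(w) / w.size

def filter_importance(w, gamma):
    # Assumed combination of the two factors: BN scaling factor
    # weighted by the filter's non-zero proportion. Wang's exact
    # importance index formula is not given in the Office Action.
    return abs(gamma) * filter_sparsity(w)

def prune_layer(filters, gammas, threshold):
    # keep only filters whose importance meets the preset
    # importance index for the layer; the rest are pruning targets
    scores = [filter_importance(w, g) for w, g in zip(filters, gammas)]
    keep = [i for i, s in enumerate(scores) if s >= threshold]
    return keep, scores
```

Structured pruning would then drop the filters not in `keep`, layer by layer, together with their scaling coefficients.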
Thus, as discussed above, Wang teaches a model compression method, the model compression method comprising: receiving a model; deriving criteria and sparsity for each filter of the model; determining a filter for applying structured pruning among filters of the model based on the criteria and the sparsity; and generating a compressed model by applying the structured pruning to the model based on the determined filter. Wang does not specifically disclose "of a computer device comprising at least one processor" and a model "to which unstructured pruning is applied." However, Samek teaches in the field related to concepts for pruning and/or quantizing machine learning predictors. Samek, para 2. Samek, which is analogous to the claimed invention because Samek is directed to pruning a machine learning model, teaches a computer device comprising at least one processor in that Samek discloses that the apparatus 30 comprises … a processor 36 for performing the actual pruning and/or quantization. Samek, Fig. 2, paras 67, 93. Samek teaches that, for both the structured and unstructured pruning, a post-hoc complementary unstructured and structured pruning step has been described to further optimize/reduce the model structure without affecting its functionality, based on preceding structured and unstructured pruning steps (a model to which unstructured pruning is applied). Samek, para 132.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement the structured pruning of Wang using the computer device comprising at least one processor, and a model to which unstructured pruning is applied, of Samek, with a reasonable expectation of success, in order to have a concept at hand which renders pruning and/or quantizing machine learning predictors (or, alternatively speaking, machine learning models) more efficient, such as more efficient in terms of conservation of inference quality while reducing, concurrently, computational inference complexity and the complexity of describing or storing the parameterization of the respective machine learning predictor, or which even improves the inference quality for a certain task at hand and/or for a certain local input data statistic, and to further optimize/reduce the model structure without affecting its functionality. Samek, paras 14, 132. This would have provided the advantages of improving efficiency and accuracy of a machine learning model.

Regarding claim 7, which depends from claim 1 and recites: wherein the generating of the compressed model comprises removing the determined filter from the model. Wang in view of Samek teaches the method of claim 1, including the generating of the compressed model and the determined filter. Wang teaches generating a compressed model by applying the structured pruning to the model based on the determined filter. Wang, Content of the Invention, section 3, pages 5-6, sections 1-2, pages 2, 3-5. Wang suggests and implies removing the determined filter from the model by disclosing pruning and trimming the determined filter from the model. Wang, Content of the Invention, section 3, pages 5-6, sections 1-2, pages 2, 3-5. Wang does not explicitly disclose "removing" the determined filter from the model. However, Samek teaches that several use cases for meaningful model pruning shall be briefly presented below. 1.
Pruning for model compression: The goal of model compression, i.e. the minimization of its description length, can be obtained by pruning away non-essential elements of the model. Here, a combination of structured and unstructured pruning approaches might be sensible, for the removal of whole filters (removing the determined filter from the model) or parts of weight matrices. Samek, paras 121, 132. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement the structured pruning of Wang using the computer device comprising at least one processor, a model to which unstructured pruning is applied, and removing the determined filter from the model, of Samek, with a reasonable expectation of success, in order to have a concept at hand which renders pruning and/or quantizing machine learning predictors (or, alternatively speaking, machine learning models) more efficient, such as more efficient in terms of conservation of inference quality while reducing, concurrently, computational inference complexity and the complexity of describing or storing the parameterization of the respective machine learning predictor, or which even improves the inference quality for a certain task at hand and/or for a certain local input data statistic, and to further optimize/reduce the model structure without affecting its functionality. Samek, paras 14, 132. This would have provided the advantages of improving efficiency and accuracy of a machine learning model.
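Removing a determined filter, as in Samek's model-compression use case, amounts to deleting one output channel of a convolution layer together with its BN scaling coefficient. A minimal numpy sketch (shapes and names are illustrative, not taken from either reference):

```python
import numpy as np

def remove_filters(conv_w, bn_gamma, prune_idx):
    """Structured pruning: remove whole filters (output channels).

    conv_w: weights shaped (out_channels, in_channels, kh, kw);
    bn_gamma: per-channel BN scaling factors, shape (out_channels,).
    Removing filter i deletes row i of conv_w and entry i of bn_gamma.
    """
    drop = set(prune_idx)
    keep = [i for i in range(conv_w.shape[0]) if i not in drop]
    return conv_w[keep], bn_gamma[keep]
```

Because the filter is physically removed rather than merely zeroed (as in unstructured pruning), the compressed layer is smaller in both parameter count and output feature-map size.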
Regarding claim 12, Wang teaches: An inference method of a computer device comprising at least one processor, the inference method comprising (i.e., An image classification method (an inference (image classification inference) method) based on lightweight network structure of pruning, wherein the method comprises the following steps: step one: loading the pre-trained model for image classification based on MobileNetV3; Wang, Content of the Invention, sections 1-2, pages 1-2, section 3, pages 5-6.): processing inference for input data using a first model compressed by determining a filter for applying structured pruning among filters of a second model based on criteria and sparsity for each filter of the second model to which unstructured pruning is already applied and by removing the determined filter from the second model (i.e., Aiming at said technical problem, the invention claims an image classification method of lightweight network structure based on pruning, which provides a new pruning rule to compress the network by combining the lightweight model MobileNetV3 with the structured pruning (processing inference for input data using a first model (processing image classification inference for input data using a model) compressed by applying structured pruning) …. An image classification method based on lightweight network structure of pruning, wherein the method comprises the following steps: step one: loading the pre-trained model for image classification based on MobileNetV3; Wang, Content of the Invention, sections 1-2, pages 1-2, section 3, pages 5-6.
step two: calculating the importance index of the filter contained in each convolution layer by using the filter sparsity formula and the BN layer zooming coefficient (compressed by determining a filter for applying structured pruning among filters of a model based on criteria and sparsity for each filter of the model (compressed by determining a filter for applying structured pruning among filters of a model based on calculating importance criteria and sparsity formula for each filter of the model)); taking each convolution layer as a unit, trimming the filter contained in the convolution layer based on the importance index of the filter, and deleting the filter lower than the preset importance index in the convolution layer; trimming each winding layer in the model so as to obtain the whole model after pruning; Wang, Content of the Invention, sections 1-2, pages 2, 3-5; section 3, pages 5-6. (1) The pruning in the present method evaluates the importance of the filter by combining the sparse value parameter of the convolution layer filter and the parameter γ of the BN layer, and prunes the network according to the importance. Specifically, the method uses two factors to calculate the importance of the filter and performs pruning operations based on the importance (compressed by determining a filter for applying structured pruning among filters of a model based on criteria and sparsity for each filter of the model (compressed by determining a filter for applying structured pruning among filters of a model based on calculating importance criteria and sparsity formula for each filter of the model)). Analysis from the following two factors: Firstly, the sparse value parameter of the convolution layer filter represents the proportion of the non-zero element in the filter, which can be used as an index for measuring the importance of the filter.
When the sparse value of a filter is low, it means that the filter extracts more useful characteristic information in the input data, so it has high importance; Next, the parameter γ of the BN layer is the scaling factor of the filter, representing the scaling adjustment of the filter to the output feature map. The larger gamma value means that the filter has a greater contribution to the output and therefore has a higher importance. By combining these two factors, the importance of the filter can be comprehensively evaluated. Then, the network is trimmed according to the importance, and the filter satisfying the pruning requirement and the corresponding scaling coefficient are trimmed so as to reduce the size of the network parameter and the characteristic image (compressed by determining a filter for applying structured pruning among filters of a model based on criteria and sparsity for each filter of the model and by pruning and trimming the determined filter from the model (compressed by determining a filter for applying structured pruning among filters of a model based on calculating importance criteria and sparsity formula for each filter of the model)). Finally, a more compact network model is obtained, with higher operating efficiency and lower storage requirements. Wang, Content of the Invention, section 3, pages 5-6, sections 1-2, pages 2, 3-5.). The Examiner notes that, as discussed above with respect to the rejection of claim 12 as being indefinite, claim 12 is interpreted for purposes of examination as processing inference for input data using a first model (which becomes the second model with application of unstructured pruning) compressed by structured pruning based on criteria and sparsity for each filter of the second model to which unstructured pruning is already applied and by removing the determined filter from the second model.
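The two-stage compression that claim 12 recites (unstructured pruning first, then structured removal of low-importance filters, then inference with the compact model) can be sketched as follows; the thresholds and the importance combination are illustrative assumptions, not taken from Wang or Samek:

```python
import numpy as np

def unstructured_prune(w, thresh):
    # Stage 1: zero out individual weights below magnitude `thresh`,
    # yielding the "second model" with unstructured pruning applied.
    return np.where(np.abs(w) < thresh, 0.0, w)

def structured_prune(w, gamma, keep_frac):
    # Stage 2: score each filter (row) by its non-zero proportion
    # times its BN scaling factor (an assumed combination), then
    # remove the lowest-scoring filters to get the compact "first model".
    scores = np.abs(gamma) * (np.count_nonzero(w, axis=1) / w.shape[1])
    k = max(1, int(round(keep_frac * w.shape[0])))
    keep = np.sort(np.argsort(scores)[::-1][:k])
    return w[keep], gamma[keep]

def infer(w, x):
    # processing inference for input data using the compressed model
    return w @ x
```

Here each filter is flattened to one row of a weight matrix for brevity; a real convolutional model would prune 4-D filter tensors the same way, channel by channel.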
Thus, as discussed above, Wang teaches an inference method, the inference method comprising processing inference for input data using a model compressed by determining a filter for applying structured pruning among filters of a model based on criteria and sparsity for each filter of the model and by pruning and trimming the determined filter from the model. Wang suggests and implies removing the determined filter from the model by disclosing pruning and trimming the determined filter from the model. Wang, Content of the Invention, section 3, pages 5-6, sections 1-2, pages 2, 3-5. Wang does not explicitly disclose of a computer device comprising at least one processor, a second model to which unstructured pruning is already applied, and by "removing" the determined filter from the second model. However, Samek teaches in the field related to concepts for pruning and/or quantizing machine learning predictors. Samek, para 2. Samek, which is analogous to the claimed invention because Samek is directed to pruning a machine learning model, teaches a computer device comprising at least one processor in that Samek discloses that the apparatus 30 comprises … a processor 36 for performing the actual pruning and/or quantization. Samek, Fig. 2, paras 67, 93. Samek teaches that, for both the structured and unstructured pruning, a post-hoc complementary unstructured and structured pruning step has been described to further optimize/reduce the model structure without affecting its functionality, based on preceding structured and unstructured pruning steps (a second model to which unstructured pruning is already applied). Samek, para 132. Several use cases for meaningful model pruning shall be briefly presented below. 1. Pruning for model compression: The goal of model compression, i.e. the minimization of its description length, can be obtained by pruning away non-essential elements of the model.
Here, a combination of structured and unstructured pruning approaches might be sensible, for the removal of whole filters (removing the determined filter from the second model) or parts of weight matrices. Samek, paras 121, 132. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement the structured pruning of Wang using the computer device comprising at least one processor, a second model to which unstructured pruning is already applied, and removing the determined filter from the second model, of Samek, with a reasonable expectation of success, in order to have a concept at hand which renders pruning and/or quantizing machine learning predictors (or, alternatively speaking, machine learning models) more efficient, such as more efficient in terms of conservation of inference quality while reducing, concurrently, computational inference complexity and the complexity of describing or storing the parameterization of the respective machine learning predictor, or which even improves the inference quality for a certain task at hand and/or for a certain local input data statistic, and to further optimize/reduce the model structure without affecting its functionality. Samek, paras 14, 132.

Prosecution Timeline

Sep 13, 2023
Application Filed
Mar 20, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602907: DATA SENSITIVITY ESTIMATION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596963: Machine-Learning Based Record Processing Systems (granted Apr 07, 2026; 2y 5m to grant)
Patent 12579467: DECENTRALIZED CROSS-NODE LEARNING FOR AUDIENCE PROPENSITY PREDICTION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12567000: SYSTEMS AND METHODS FOR SUBSCRIBER-BASED ADAPTATION OF PRODUCTION-IMPLEMENTED MACHINE LEARNING MODELS OF A SERVICE PROVIDER USING A TRAINING APPLICATION (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561399: COMPUTER SYSTEM AND DATA ANALYSIS METHOD (granted Feb 24, 2026; 2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
72%
Grant Probability
98%
With Interview (+26.9%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 330 resolved cases by this examiner. Grant probability derived from career allow rate.
