Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Remarks
This Office Action is responsive to Applicants' Amendment filed on 10/30/2025, in which claims 1, 4-7, 10, 11 and 13-15 are amended. Claims 3 and 12 are newly cancelled. Claims 16-20 are newly added.
Claims 1, 2, 4-11, and 13-20 are currently pending.
Response to Arguments
With regard to the rejections of claims 14 and 15 under 35 U.S.C. 112(b) as indefinite, Applicant has amended claim 14 to resolve the previously noted source of indefiniteness, and thus the rejections are withdrawn.
With regard to the rejections of claims 1-15 under 35 U.S.C. 101 as directed towards abstract ideas, Applicant’s arguments that the claims are eligible have been considered but are not found persuasive.
Applicant first argues that at least claim 1 as amended is eligible at Step 2A, Prong One because it does not recite any abstract ideas. Applicant states on page 7 of the Remarks: “’controlling, with the processor, a machine learning model structure based on the inferencing level to control apparatus power consumption related to a processing load of the machine learning model structure,’ as recited in amended Claim 1 is not the equivalent of human mental work, and, as such, is not a mental process, as alleged by the Office”.
While Examiner acknowledges that a human cannot mentally control a machine learning model structure, Examiner considers controlling a machine learning model structure based on an inferencing level to be mere instructions to apply the mental process of determination, given the paucity of detail on how the controlling functions beyond the fact that it is based on a determined inferencing level. Claim 1 as amended still recites a mental process of evaluation within the limitation of determining, with a processor, an inferencing level based on an environmental condition related to the input, as elaborated in the rejections under 35 U.S.C. 101 below.
Applicant further argues that at least independent claims 1, 11, and 14 are eligible at Step 2A, Prong Two, by integrating any recited abstract ideas into a practical application. Applicant states on page 8 of the Remarks: “Claim 1, as amended, recites, in part, the following: ‘receiving, with a processor, an input;’ ‘determining, with a processor, an inferencing level based on an environmental condition related to the input;’ and ‘executing, with the processor, an inference function with respect to the input using the controlled machine learning model structure to provide an inferencing result.’ Applicant respectfully asserts that at least these features of amended Claim 1 integrate the alleged abstract idea into a practical application”. Applicant further states similar limitations are found in claims 11 and 14, and on page 9 of the Remarks provides several improvements offered by the invention found in the specification, such as improved power efficiency and increased battery life while maintaining inference accuracy.
Examiner respectfully disagrees that claim 1 is eligible at Step 2A, Prong Two of the Subject Matter Eligibility Test. Although Examiner does not dispute that the invention provides the described improvements, Examiner notes that MPEP 2106.04(d).III states: “Because a judicial exception alone is not eligible subject matter, if there are no additional claim elements besides the judicial exception, or if the additional claim elements merely recite another judicial exception, that is insufficient to integrate the judicial exception into a practical application”; that is, an improvement residing solely in the abstract idea itself cannot confer eligibility. Examiner further notes that determining, with a processor, an inferencing level based on an environmental condition related to the input is identified as reciting an abstract idea and thus cannot integrate the exception into a practical application for the reasons provided. The remaining limitations of claim 1, with analogs in claims 11 and 14, do not integrate the exception into a practical application because they recite mere extra-solution activity or mere instructions to apply, see MPEP 2106.04(d).I.
Applicant further argues that at least independent claims 1, 11, and 14 are eligible at Step 2B, by reciting elements that are significantly more than any recited abstract ideas. Applicant states on page 10 of the Remarks that the claims recite “(1) improvements to the functioning of a computer; and (2) improvements to any other technology or technical field” and are thus eligible according to MPEP 2106.05.
Examiner respectfully disagrees. According to MPEP 2106.05.I.A., claim elements identified as being mere instructions to apply a judicial exception or being mere extra-solution activity do not amount to significantly more than any recited abstract ideas. As detailed in the rejections under 35 U.S.C. 101 below, all claim elements that are not abstract ideas fall into one of these two ineligible categories.
With regard to the rejections of claims 1-4, 8, 9, and 11 under 35 U.S.C. 102(a)(1) as being anticipated by Lee et al. (Korean Patent No. 102029852), Applicant’s arguments that the claims as amended overcome the rejections are found partially persuasive.
Applicant first argues that claim 1 as amended overcomes the existing rejection under 35 U.S.C. 102(a)(1), and Examiner respectfully disagrees. Applicant states on page 11 of the Remarks that “Lee, including the cited portions thereof, makes no mention of determining an inferencing level and controlling a machine learning model structure based on the inferencing level…selecting a number network layers, as Lee discloses, is not the same as determining an inference level based on an environmental condition”, and later on page 11 of the Remarks that “Lee, including the cited portions thereof, make no mention of executing an inference function using a controlled machine learning model structure to provide an inferencing result”.
Examiner respectfully disagrees. Applicant’s specification states: [0010] “Inferencing is the application of a trained machine learning model”, a definition in accordance with the understanding of one of ordinary skill in the art. With this in mind, Lee’s disclosure of (Lee [0006]) “an object recognition device and method capable of efficiently reducing power consumption by selecting a neural network model corresponding to an environmental index calculated based on an image captured of the environment outside a vehicle” corresponds to controlling a machine learning model structure based on an inferencing level. Likewise, Lee’s disclosure of (Lee [0015]) “selecting a neural network model corresponding to the selected number of layers from among the plurality of neural network models; and recognizing an object located in front using the selected neural network model” with (Lee [0045]) “At this time, multiple neural network models are pre-learned” corresponds to performing an inferencing function with a controlled machine learning model, which provides an inferencing result. Although Lee does not explicitly recite “inferencing”, one of ordinary skill in the art, as well as Applicant according to their definition, understands inferencing to be application of a trained machine learning model, and the use of Lee’s trained machine learning models, which are controlled based on the environment, for object recognition corresponds to inferencing. Therefore the rejection of claim 1 under 35 U.S.C. 102(a)(1) is maintained.
Applicant further argues that claim 11 as amended overcomes the existing rejection under 35 U.S.C. 102(a)(1), in part because Lee does not recite “executing, with the processor, an inference function with respect to the input using the controlled machine learning model structure to provide an inferencing result”. Examiner acknowledges that claim 11 as amended overcomes the rejection; however, the arguments are moot in light of a new rejection under 35 U.S.C. 103, necessitated by the amendments to the claim. Nevertheless, Examiner disagrees that Lee does not recite an inference function, for the reasons described with respect to the rejection of claim 1.
With regard to the rejections of claims 14 and 15 as unpatentable over Lee et al. (Korean Patent No. 102029852) in view of Shtrom et al. (U.S. Patent Application Publication No. 2019/0375422), Applicant asserts that claim 14 as amended overcomes the rejection because (Remarks Pg. 14) “Lee, including the cited portions thereof, makes no mention of an environmental condition relating to an illumination condition and a pose of an object, as recited in amended Claim 14. None of the cited prior art cures the deficiencies of Lee”.
Examiner respectfully disagrees. Lee recites an illumination condition, although Examiner acknowledges that Lee does not recite the pose of an object. However, Shtrom, previously cited to teach other limitations in claim 14, discloses: (Shtrom [0071]) “control engine 210 may determine whether the object is two dimensional (such as a sign) or three dimensional (such as a person or an animal). Then, based on the determined environmental condition and/or information associated with the object, control engine 210 (and/or sensor 114 or sensor 116, respectively) may perform a remedial action”. According to Applicant’s specification: [0071] “Examples of environmental conditions may include…a pose condition (e.g., object position, object pose, pixel location, measured depth, distance to an object, three-dimensional (3D) object position, object rotation, camera pose, target object zone, etc.)”. Examiner considers the determination of whether an object is 2D or 3D to correspond to measured depth and, under the definition of object pose set forth in the specification, to correspond to object pose.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 2, 4-11, and 13-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to abstract ideas without significantly more.
Regarding claim 1,
Step 1 - “Is the claim to a process, machine, manufacture or composition of matter?”
Yes, the claim is directed towards a process.
Step 2A, Prong 1 - “Is the claim directed to a law of nature, a natural phenomenon (product of nature) or an abstract idea?”:
The limitation of determining, with a processor, an inferencing level based on an environmental condition related to the input; recites an evaluation of an inferencing level based on observation of environmental conditions, which is a mental process, which is an abstract idea, regardless of whether it is performed on a generic computer.
Step 2A, Prong 2 - “Does the claim recite additional elements that integrate the judicial exception into a practical application?”:
The limitation of receiving, with a processor, an input; recites the mere extra-solution activity of data gathering, which does not integrate the exception into a practical application, MPEP 2106.05(d) and 2106.05(g).
The limitation of controlling, with the processor, a machine learning model structure based on the inferencing level to control apparatus power consumption related to a processing load of the machine learning model structure; recites mere instructions to apply the determination of an inferencing level to control an apparatus and its power consumption, which does not integrate the recited judicial exceptions into a practical application, MPEP 2106.05(d) and 2106.05(f).
The limitation of executing, with the processor, an inference function with respect to the input using the controlled machine learning model structure to provide an inferencing result recites mere instructions to apply an inference function using a controlled machine learning model structure, which does not integrate the recited judicial exceptions into a practical application, MPEP 2106.05(d) and 2106.05(f).
Step 2B - “Does the claim recite additional elements that amount to significantly more than the judicial exception?”:
The limitation of receiving, with a processor, an input; recites receiving data over a network, which is well-understood, routine, and conventional, MPEP 2106.05(d).II.i.
The limitation of controlling, with the processor, a machine learning model structure based on the inferencing level to control apparatus power consumption related to a processing load of the machine learning model structure; recites mere instructions to apply the determination of an inferencing level to control an apparatus and its power consumption, which is not significantly more than the recited judicial exceptions, MPEP 2106.05(f).
The limitation of executing, with the processor, an inference function with respect to the input using the controlled machine learning model structure to provide an inferencing result recites mere instructions to apply an inference function using a controlled machine learning model structure, which is not significantly more than the recited judicial exceptions, MPEP 2106.05(f).
Therefore, claim 1 is found to be ineligible subject matter under 35 U.S.C. 101.
Regarding claim 2,
Claim 2 adds the additional limitation to claim 1:
The limitation of further comprising detecting the environmental condition, wherein the environmental condition is based on illumination or pose recites an observation of the environment, which is a mental process, which is an abstract idea, regardless of whether it is performed on a generic computer.
Therefore, claim 2 is found to be ineligible subject matter under 35 U.S.C. 101.
Regarding claim 4,
Claim 4 adds the additional limitation to claim 1:
The limitation of wherein determining the inferencing level is based on an inverse relationship between an illumination condition and the inferencing level recites an evaluation of an illumination condition and an inferencing level, which is a mental process, which is an abstract idea, regardless of whether it is performed on a generic computer.
Therefore, claim 4 is found to be ineligible subject matter under 35 U.S.C. 101.
Regarding claim 5,
Claim 5 adds the additional limitation to claim 1:
The limitation of wherein controlling the machine learning model structure comprises dropping a random selection of machine learning model components based on the inferencing level, recites mere instructions to apply the recited judicial exceptions to drop randomly selected machine learning model components, MPEP 2106.05(d) and 2106.05(f).
The limitation of wherein an amount of machine learning model components included in the random selection of machine learning model components is based on the inferencing level recites an evaluation of an amount of machine learning model components based on an inferencing level, which is a mental process, which is an abstract idea, regardless of whether it is performed on a generic computer.
Therefore, claim 5 is found to be ineligible subject matter under 35 U.S.C. 101.
Regarding claim 6,
Claim 6 adds the additional limitation to claim 1:
The limitation of wherein controlling the machine learning model structure comprises selecting a sub-network of machine learning model components based on the inferencing level, wherein the sub-network of machine learning model components reduces the processing load and provides a target accuracy recites a judgement of what sub-network of machine learning model components to select, which is a mental process, which is an abstract idea, regardless of whether it is performed on a generic computer.
Therefore, claim 6 is found to be ineligible subject matter under 35 U.S.C. 101.
Regarding claim 7,
Claim 7 adds the additional limitation to claim 1:
The limitation of wherein controlling the machine learning model structure comprises controlling quantization for at least one layer of the machine learning model structure based on a target accuracy and the inferencing level, in light of the specification ([0041] “Quantization is representing a quantity with a discrete number. For example, quantization may refer to a number of bits utilized to represent a number. In some examples, quantization may be utilized to reduce a number of bits utilized to represent a number”) recites a mathematical relationship, which is a mathematical concept, which is an abstract idea.
Therefore, claim 7 is found to be ineligible subject matter under 35 U.S.C. 101.
Regarding claim 8,
Claim 8 adds the additional limitation to claim 1:
The limitation of further comprising receiving an indication of the environmental condition recites the mere extra-solution activity of data gathering, which does not integrate the exception into a practical application, MPEP 2106.05(d) and 2106.05(g), and which recites receiving data over a network, which is well-understood, routine, and conventional, MPEP 2106.05(d).II., example (i) of WURC computer functions.
Therefore, claim 8 is found to be ineligible subject matter under 35 U.S.C. 101.
Regarding claim 9,
Claim 9 adds the additional limitation to claim 8:
The limitation of wherein controlling the machine learning model structure comprises selecting a machine learning model from a machine learning model ensemble based on the indication recites a judgement of which machine learning model to select, which is a mental process, which is an abstract idea, regardless of whether it is performed on a generic computer.
Therefore, claim 9 is found to be ineligible subject matter under 35 U.S.C. 101.
Regarding claim 10,
Claim 10 adds the additional limitations to claim 1:
The limitation of determining error feedback based on execution of the inference function; recites an evaluation of the inferencing, which is a mental process, which is an abstract idea, regardless of whether it is performed on a generic computer.
The limitation of controlling the machine learning model structure based on the error feedback recites mere instructions to apply the recited judicial exceptions to control a machine learning model structure, MPEP 2106.05(d) and 2106.05(f).
Therefore, claim 10 is found to be ineligible subject matter under 35 U.S.C. 101.
Regarding claim 11,
Step 1 - “Is the claim to a process, machine, manufacture or composition of matter?”
Yes, the claim is directed towards a machine.
Step 2A, Prong 1 - “Is the claim directed to a law of nature, a natural phenomenon (product of nature) or an abstract idea?”:
The limitation of determine an environmental condition based on an input recites an evaluation of environmental conditions, which is a mental process, which is an abstract idea, regardless of whether it is performed on a generic computer.
The limitation of determine an inferencing level based on the environmental condition and error feedback wherein the error feedback is an average error over a number of inferences; recites an evaluation of an inferencing level based on evaluation of environmental conditions and error feedback, which is a mental process, which is an abstract idea, regardless of whether it is performed on a generic computer.
Step 2A, Prong 2 - “Does the claim recite additional elements that integrate the judicial exception into a practical application?”:
The limitation of a memory; recites mere instructions to apply judicial exceptions with generic computer components, MPEP 2106.05(d) and 2106.05(f).
The limitation of a processor in electronic communication with the memory, wherein the processor is to: recites mere instructions to apply judicial exceptions with generic computer components, MPEP 2106.05(d) and 2106.05(f).
The limitation of modify a complexity of a machine learning model structure based on the inferencing level to regulate apparatus power consumption; recites mere instructions to apply the determination of an inferencing level to modify a machine learning model and its power consumption, which does not integrate the recited judicial exceptions into a practical application, MPEP 2106.05(d) and 2106.05(f).
The limitation of and execute an inference function with respect to the input using the controlled machine learning model structure to provide an inferencing result recites mere instructions to apply an inference function using a controlled machine learning model structure, which does not integrate the recited judicial exceptions into a practical application, MPEP 2106.05(d) and 2106.05(f).
Step 2B - “Does the claim recite additional elements that amount to significantly more than the judicial exception?”:
The limitation of a memory; recites mere instructions to apply judicial exceptions with generic computer components, MPEP 2106.05(f).
The limitation of a processor in electronic communication with the memory, wherein the processor is to: recites mere instructions to apply judicial exceptions with generic computer components, MPEP 2106.05(f).
The limitation of modify a complexity of a machine learning model structure based on the inferencing level to regulate apparatus power consumption; recites mere instructions to apply the determination of an inferencing level to modify a machine learning model and its power consumption, which is not significantly more than the recited judicial exceptions, MPEP 2106.05(f).
The limitation of and execute an inference function with respect to the input using the controlled machine learning model structure to provide an inferencing result recites mere instructions to apply an inference function using a controlled machine learning model structure, which is not significantly more than the recited judicial exceptions, MPEP 2106.05(f).
Therefore, claim 11 is found to be ineligible subject matter under 35 U.S.C. 101.
Regarding claim 13,
Claim 13 adds the additional limitation to claim 11:
The limitation of wherein the input is captured by a sensor after the machine learning model structure is trained recites the mere extra-solution activity of data gathering, which does not integrate the exception into a practical application, MPEP 2106.05(d) and 2106.05(g), and which recites receiving data over a network, which is well-understood, routine, and conventional, MPEP 2106.05(d).II., example (i) of WURC computer functions.
Therefore, claim 13 is found to be ineligible subject matter under 35 U.S.C. 101.
Regarding claim 14,
Step 1 - “Is the claim to a process, machine, manufacture or composition of matter?”
Yes, the claim is directed towards a manufacture.
Step 2A, Prong 1 - “Is the claim directed to a law of nature, a natural phenomenon (product of nature) or an abstract idea?”:
The limitation of code to cause a processor to determine an environmental condition indicative of a signal-to-noise ratio to be experienced by a sensor; recites an evaluation of environmental conditions, which is a mental process, which is an abstract idea, regardless of whether it is performed on a generic computer.
The limitation of code to cause the processor to map the environmental condition to an inferencing level, wherein the environmental condition relates to an illumination condition and a pose of an object; recites an evaluation of an inferencing level based on observation of environmental conditions, which is a mental process, which is an abstract idea, regardless of whether it is performed on a generic computer.
Step 2A, Prong 2 - “Does the claim recite additional elements that integrate the judicial exception into a practical application?”:
The limitation of code to cause the processor to control machine learning model components based on the inferencing level; recites mere instructions to apply the determination of an inferencing level to control machine learning model components, which does not integrate the recited judicial exceptions into a practical application, MPEP 2106.05(d) and 2106.05(f).
The limitation of and code to cause the processor to execute an inference function using the controlled machine learning model components to provide an inferencing result related to the object recites mere instructions to apply an inference function using controlled machine learning model components, which does not integrate the recited judicial exceptions into a practical application, MPEP 2106.05(d) and 2106.05(f).
Step 2B - “Does the claim recite additional elements that amount to significantly more than the judicial exception?”:
The limitation of code to cause the processor to control machine learning model components based on the inferencing level; recites mere instructions to apply the determination of an inferencing level to control machine learning model components, which is not significantly more than the recited judicial exceptions, MPEP 2106.05(f).
The limitation of and code to cause the processor to execute an inference function using the controlled machine learning model components to provide an inferencing result related to the object recites mere instructions to apply an inference function using controlled machine learning model components, which is not significantly more than the recited judicial exceptions, MPEP 2106.05(f).
Therefore, claim 14 is found to be ineligible subject matter under 35 U.S.C. 101.
Regarding claim 15,
Claim 15 adds the additional limitation to claim 14:
The limitation of wherein the code to cause the processor to control the machine learning model components comprises code to cause the processor to remove a first subset of the machine learning model components recites mere instructions to apply the recited judicial exceptions to remove machine learning model components, MPEP 2106.05(d) and 2106.05(f).
Therefore, claim 15 is found to be ineligible subject matter under 35 U.S.C. 101.
Regarding claim 16,
Claim 16 adds the additional limitation to claim 14:
The limitation of wherein the code to cause the processor to control the machine learning model components comprises code to cause the processor to select a second subset of the machine learning model components recites a judgement of what machine learning model components to select, which is a mental process, which is an abstract idea, regardless of whether it is performed on a generic computer.
Therefore, claim 16 is found to be ineligible subject matter under 35 U.S.C. 101.
Regarding claim 17,
Claim 17 adds the additional limitation to claim 14:
The limitation of wherein the code to cause the processor to control the machine learning model components comprises code to cause the processor to select a quantization for the machine learning model components based on the inferencing level recites a judgement of what quantization to select, which is a mental process, which is an abstract idea, regardless of whether it is performed on a generic computer.
Therefore, claim 17 is found to be ineligible subject matter under 35 U.S.C. 101.
Regarding claim 18,
Claim 18 adds the additional limitation to claim 11:
The limitation of wherein the inference function relates to object detection and the inference result is an object detection result recites mere instructions to apply the recited judicial exceptions for object detection, MPEP 2106.05(d) and 2106.05(f).
Therefore, claim 18 is found to be ineligible subject matter under 35 U.S.C. 101.
Regarding claim 19,
Claim 19 adds the additional limitation to claim 11:
The limitation of wherein the inference function relates to image classification and wherein the inference result is an image classification result recites mere instructions to apply the recited judicial exceptions for image classification, MPEP 2106.05(d) and 2106.05(f).
Therefore, claim 19 is found to be ineligible subject matter under 35 U.S.C. 101.
Regarding claim 20,
Claim 20 adds the additional limitations to claim 11:
The limitation of wherein the inference function relates to voice recognition and the inference result is a voice recognition result recites mere instructions to apply the recited judicial exceptions for voice recognition, MPEP 2106.05(d) and 2106.05(f).
The limitation of and wherein the processor is to perform a command based on a recognized voice of the voice recognition result recites mere instructions to apply the recited judicial exceptions to perform a command, MPEP 2106.05(d) and 2106.05(f).
Therefore, claim 20 is found to be ineligible subject matter under 35 U.S.C. 101.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 4, 8, and 9 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lee et al. (Korean Patent No. 102029852), hereinafter Lee.
Regarding claim 1,
Lee teaches A method, comprising:
receiving, with a processor, an input; ((Lee [0007]) “the present invention includes an image acquisition unit for acquiring an image from a camera that photographs an environment outside a vehicle”)
determining, with a processor, an inferencing level based on an environmental condition related to the input; ((Lee [0006]) “The technical problem to be achieved by the present invention is to provide an object recognition device and method capable of efficiently reducing power consumption by selecting a neural network model corresponding to an environmental index calculated based on an image captured of the environment outside a vehicle”, an environmental index that is used to select a neural network model corresponds to an inferencing level based on an environmental condition)
controlling, with the processor, a machine learning model structure based on the inferencing level to control apparatus power consumption related to a processing load of the machine learning model structure; ((Lee [0016]) “according to the present invention, when the weather conditions are good, a neural network model with a small number of layers can be used to reduce power consumption”, (Lee [0078]) “Here, the neural network model has different power consumption and recognition rates depending on the number of layers. That is, the fewer layers a neural network model has, the less power it consumes. As the number of layers in a neural network model increases, its complexity increases, so while the recognition rate improves, the amount of power consumed also increases”, a complexity of a neural network model based on its layers corresponds to a processing load of the machine learning model structure)
and executing, with the processor, an inference function ((Lee [0045]) “At this time, multiple neural network models are pre-learned”, a pre-learned neural network model is an inference function) with respect to the input using the controlled machine learning model structure to provide an inferencing result ((Lee [0015]) “a method for recognizing an object through a selected neural network model using an object recognition device includes the steps of:…selecting a neural network model corresponding to the selected number of layers from among the plurality of neural network models; and recognizing an object located in front using the selected neural network model”, recognizing an object with a selected machine learning model corresponds to providing an inferencing result using an input and a controlled machine learning model structure)
Regarding claim 2,
Lee teaches The method of claim 1,
Lee further teaches:
further comprising detecting the environmental condition, ((Lee [0007]) “the present invention includes an image acquisition unit for acquiring an image from a camera that photographs an environment outside a vehicle”)
wherein the environmental condition is based on illumination or pose ((Lee [0007]) “the present invention includes…a brightness value calculation unit for calculating a brightness value at a current point in time from the photographed image”, a brightness value corresponds to illumination)
Regarding claim 4,
Lee teaches The method of claim 1,
Lee further teaches:
wherein determining the inferencing level is based on an inverse relationship between an illumination condition and the inferencing level ((Lee [0027]) “drivers driving vehicles have difficulty securing a field of vision when the lighting is dazzlingly bright (when the average brightness value is 240 to 255 in Table 1) or when the illuminance is very low (when the average brightness value is 0 to 10 in Table 1), so it is given a level of 10, which represents the WORST. Additionally, when the weather conditions are very good (average brightness value is 90 to 160 in Table 1), it is indicated as level 1, which indicates BEST”, a level that decreases from 10 to 1 as the average brightness value increases from the 0-10 range to the 90-160 range corresponds to an inverse relationship between the illumination condition and the inferencing level)
Regarding claim 8,
Lee teaches The method of claim 1,
Lee further teaches:
further comprising receiving an indication of the environmental condition ((Lee [0015]) “a method for recognizing an object through a selected neural network model using an object recognition device includes the steps of: acquiring an image from a camera that captures an external environment of a car; calculating a brightness value at a current point in time from the captured image; training a classification model using images of rain, snow, and fog; classifying a current environmental state by applying the image at the current point in time to the classification model for which training has been completed”, acquiring an image that includes information such as brightness, and the presence of rain, snow, or fog corresponds to receiving an indication of the environmental condition)
Regarding claim 9,
Lee teaches The method of claim 8,
Lee further teaches:
wherein controlling the machine learning model structure comprises selecting a machine learning model from a machine learning model ensemble based on the indication ((Lee [0015]) “a method for recognizing an object through a selected neural network model using an object recognition device includes the steps of:…storing a plurality of neural network models having different numbers of layers;… selecting the number of layers corresponding to the calculated environmental index; selecting a neural network model corresponding to the selected number of layers from among the plurality of neural network models;”, a plurality of neural networks corresponds to a machine learning model ensemble)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Venkatesha et al. (U.S. Patent Application Publication No. 2020/0104716), hereinafter Venkatesha.
Regarding claim 5,
Lee teaches The method of claim 1,
Venkatesha teaches the following further limitations that Lee does not teach:
wherein controlling the machine learning model structure comprises dropping a random selection of machine learning model components ((Venkatesha [0006]) “a processor implemented method includes identifying a plurality of connections in a neural network that is pre-associated with a deep learning model, generating a plurality of pruned neural networks by pruning different sets of one or more of the plurality of connections to respectively generate each of the plurality of pruned neural networks”, (Venkatesha [0010]) “The pruning of the different sets of the one or more of the plurality of connections may include selecting, at random, respective combinations of two or more connections for pruning”, pruning connections within a neural network corresponds to dropping a random selection of machine learning model components) based on the [inferencing] level, ((Venkatesha [0049]) “respective pruning of the plurality of prunable connections may be performed based on predetermined pruning policies 107…the predetermined pruning policies 107 may include, without limiting to, a predetermined time period for which one or more of pruned and/or masked connections of one or more of the pruned neural networks 109 are maintained in the pruned/masked state, and/or a threshold number of connections which are to be pruned”, a set time period or threshold corresponds to a level, Lee teaches a level for inferencing)
wherein an amount of machine learning model components included in the random selection of machine learning model components is based on the [inferencing] level ((Venkatesha [0049]) “the threshold number of connections for pruning may be 30% of the total number of connections in the neural network 103”, Lee teaches a level for inferencing)
At the time of filing, one of ordinary skill in the art would have motivation to combine Lee and Venkatesha by taking the method for controlling a machine learning model structure to reduce apparatus power consumption based on an inferencing level based on an environmental condition, taught by Lee, and having controlling the machine learning model structure encompass dropping randomly selected machine learning model components, taught by Venkatesha, as random pruning is a well-known technique within the art for creating more efficient neural network structures, imparting the predictable benefit of increasing the accuracy-to-resource-usage ratio of the neural network. Such a combination would be obvious.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Kang et al. “DMS: Dynamic Model Scaling for Quality-Aware Deep Learning Inference in Mobile and Embedded Devices”, hereinafter Kang.
Regarding claim 6,
Lee teaches The method of claim 1,
Kang teaches the following further limitations more explicitly than Lee:
wherein controlling the machine learning model structure comprises selecting a sub-network of machine learning model components based on the inferencing level, ((Kang Pg. 5) “the computational cost at each convolution layer can be scaled simply by changing the number of active filters and feature maps…Each layer in the table specifies the DMS scaling factor si, which is the ratio of active filters when the pruning is applied to the layer. The scaling factor is determined so that each layer in the table yields equal amount of savings via pruning. In Task_Table, each task maintains its own DMS level as an index to DMS_Table. The DMS_level indicates how many convolution layers will be pruned during the task’s inference”, a convolutional neural network with fewer active filters in its layers is a sub-network of machine learning model components)
wherein the sub-network of machine learning model components reduces the processing load and provides a target accuracy ((Kang Pg. 5) “If the task wants to decrease the computation cost either for further energy saving or for reducing the latency, it might increase its DMS_level at runtime. Conversely, if the task needs full inference accuracy, its DMS_level can be set to 0, as task #2”, each DMS_level in Kang has a corresponding sub-network, a computation cost is a processing load)
At the time of filing, one of ordinary skill in the art would have motivation to combine Lee and Kang by taking the method for controlling a machine learning model structure to reduce apparatus power consumption based on an inferencing level based on an environmental condition, taught by Lee, and having controlling the machine learning model structure encompass selecting a sub-network of machine learning model components, taught by Kang, as it is well-known within the art that convolutional neural networks with more filters in their convolutional layers are more complex, requiring additional memory and computation time for inference, and thus higher power consumption, and so reducing the number of filters within the neural network by using only a sub-network imparts the predictable benefit of reducing memory usage, computation time, and power consumption. Such a combination would be obvious.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Liu et al. (U.S. Patent Application Publication No. 2025/0053485), hereinafter Liu.
Regarding claim 7,
Lee teaches The method of claim 1,
Liu teaches the following further limitation that Lee does not teach:
wherein controlling the machine learning model structure comprises controlling quantization for at least one layer of the machine learning model structure ((Liu [0153]) “The following example is that the data to be quantized is the neurons and the weights of a target layer in the neural network”) based on a target accuracy and the [inferencing] level ((Liu [0262]) “Optionally, the preset condition may be a preset threshold set by a user”, (Liu [0803]-[0804]) “the data bit width determination unit configured to determine the target data bit width corresponding to the current verify iteration according to the quantization error is specifically configured to:…increase the data bit width corresponding to the current verify iteration to obtain the target data bit width corresponding to the current verify iteration if the quantization error is greater than or equal to the first preset threshold;”, (Liu [1099]) “quantization precision refers to the size of an error between data after quantization and data before quantization. The quantization precision may affect the accuracy of the computation results of the neural network. The higher the quantization precision is, the higher the accuracy of the computation results will be”, a preset threshold for quantization error corresponds to a level with a target accuracy, Lee teaches a level for inferencing)
At the time of filing, one of ordinary skill in the art would have motivation to combine Lee and Liu by taking the method for controlling a machine learning model structure to reduce apparatus power consumption based on an inferencing level based on an environmental condition, taught by Lee, and having controlling the machine learning model structure encompass controlling quantization, taught by Liu, as Liu teaches: (Liu [0018]) “The data bit width is used by an artificial intelligence processor to quantize data involved in the process of the neural network operation and convert high-precision data into low-precision fixed-point data, which may reduce storage space of data involved in the process of neural network operation…Smaller data storage space enables neural network deployment to occupy smaller space, thus the on-chip memory of an artificial intelligence processor chip may accommodate more data, which may reduce memory access data in the artificial intelligence processor chip and improve computation performance”. Such a combination would be obvious.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Garcia Satorras et al. (U.S. Patent Application Publication No. 2020/0285962), hereinafter Garcia Satorras.
Regarding claim 10,
Lee teaches The method of claim 1, further comprising:
Garcia Satorras teaches the following further limitations that Lee does not teach:
determining error feedback based on execution of the inferencing function; ((Garcia Satorras Abstract) “The system and method may iteratively infer the state by, in an iteration, obtaining an initial inference of the state using a mathematical model representing a prior knowledge-based modelling of the state, and by applying a learned model to the initial inference of the state and the sensor measurement, wherein the learned model has been learned to minimize an error between initial inferences provided by the mathematical model and a ground truth and to provide a correction value as output for correcting the initial inference of the state of the mathematical model”)
and controlling the machine learning model structure based on the error feedback ((Garcia Satorras Abstract) “the learned model has been learned to minimize an error between initial inferences provided by the mathematical model and a ground truth and to provide a correction value as output for correcting the initial inference of the state of the mathematical model”, the model providing a correction value to correct initial inferences by a mathematical model based on error feedback corresponds to controlling a machine learning model based on error feedback)
At the time of filing, one of ordinary skill in the art would have motivation to combine Lee and Garcia Satorras by taking the method for controlling a machine learning model structure to reduce apparatus power consumption based on an environmental condition, including performing inferencing, taught by Lee, and determining error feedback based on the inferencing and further controlling the model based on the feedback, taught by Garcia Satorras, as adjusting a machine learning model in response to feedback indicating an erroneous prediction is very well-known within the art and imparts the predictable benefit of increasing the future accuracy of the machine learning model. Such a combination would be obvious.
Claims 11, 13, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Kang, further in view of Cohen et al. (U.S. Patent Application Publication No. 2019/0236447), hereinafter Cohen.
Regarding claim 11,
Lee teaches An apparatus, comprising:
a memory; ((Lee [0007]) “the present invention includes…a storage unit for storing a plurality of neural network models having different numbers of layers”, a storage unit that stores neural network models corresponds to a memory)
a processor in electronic communication with the memory, wherein the processor is to: ((Lee [0007]) “a control unit for calculating an environmental index using the grade value for the brightness at the current point in time and the grade value for the environmental state, and selecting the number of layers corresponding to the calculated environmental index, a neural network model selection unit for selecting a neural network model corresponding to the selected number of layers from among the plurality of neural network models”, a control unit and a neural network model selection unit correspond to a processor in communication with memory)
determine an environmental condition based on an input; ((Lee [0015]) “a method for recognizing an object through a selected neural network model using an object recognition device includes the steps of: acquiring an image from a camera that captures an external environment of a car; calculating a brightness value at a current point in time from the captured image; training a classification model using images of rain, snow, and fog; classifying a current environmental state by applying the image at the current point in time to the classification model for which training has been completed”, classifying an environmental state based on an acquired image that includes information such as brightness, and the presence of rain, snow, or fog, corresponds to determining an environmental condition based on an input)
determine an inferencing level based on the environmental condition… ((Lee [0007]) “a control unit for calculating an environmental index using the grade value for the brightness at the current point in time and the grade value for the environmental state”)
modify a complexity of a machine learning model structure based on the inferencing level to regulate apparatus power consumption; ((Lee [0016]) “according to the present invention, when the weather conditions are good, a neural network model with a small number of layers can be used to reduce power consumption”, (Lee [0078]) “Here, the neural network model has different power consumption and recognition rates depending on the number of layers. That is, the fewer layers a neural network model has, the less power it consumes. As the number of layers in a neural network model increases, its complexity increases, so while the recognition rate improves, the amount of power consumed also increases”, using a neural network with fewer layers and lower power consumption based on environmental conditions corresponds to modifying complexity of a machine learning model to regulate power consumption)
and execute an inference function ((Lee [0045]) “At this time, multiple neural network models are pre-learned”, a pre-learned neural network model is an inference function) with respect to the input using the controlled machine learning model structure to provide an inferencing result ((Lee [0015]) “a method for recognizing an object through a selected neural network model using an object recognition device includes the steps of:…selecting a neural network model corresponding to the selected number of layers from among the plurality of neural network models; and recognizing an object located in front using the selected neural network model”, recognizing an object with a selected machine learning model corresponds to providing an inferencing result using an input and a controlled machine learning model structure)
Kang teaches the following further limitation that Lee does not teach:
determine an inferencing level ((Kang Pg. 5) “If the task wants to decrease the computation cost either for further energy saving or for reducing the latency, it might increase its DMS_level at runtime. Conversely, if the task needs full inference accuracy, its DMS_level can be set to 0, as task #2”) based on…and error feedback wherein the error feedback is an [average] error ((Kang Pg. 5) “Figure 8 shows a feedback control loop to support a desired inference latency as a QoS goal. In the feedback control loop, the QoS manager requests the DMS manager to adapt the workload of the inference task by ΔW according to the gap between the target latency and the monitored latency. The DMS manager translates ΔW to ΔDMS_level to scale the CNN model…we use a PI (proportional integral) controller that relates the error in latency directly to ΔDMS_level”, Kang does not explicitly teach average error) over a number of inferences ((Kang Pg. 7) “Performance is monitored as the inference task runs continuously over 150 monitoring periods”)
At the time of filing, one of ordinary skill in the art would have motivation to combine Lee and Kang by taking the apparatus for controlling a machine learning model structure to reduce apparatus power consumption based on an inferencing level based on an environmental condition, taught by Lee, and having determination of the inferencing level also be based on error feedback, taught by Kang, as Kang teaches: (Kang Pg. 2) “DMS can be combined with runtime feedback control mechanisms to guarantee applications’ QoS goals. Since DMS scales the computational cost of fully capable models without actually removing pruned filters, it can support different QoS levels for concurrent inference tasks even if they share a single deep learning model”, that is, scaling the models via levels allows for several inference tasks to be performed with varying quality levels based on needs and available resources, allowing inference tasks to be performed flexibly, allowing machine learning applications to be deployed in additional circumstances. Such a combination would be obvious.
Cohen teaches the following further limitation that Lee does not teach and that Kang does not explicitly teach:
wherein the error feedback is an average error ((Cohen [0060]) “an average error equal to the difference between the vectors y, y' is calculated (212) over a plurality of time points and the average is displayed to a human user and/or is compared to a threshold. If (214) the average difference is smaller than the threshold, the predictor 58 is considered ready for operation. Otherwise, one or more parameters of the neural network are adjusted (216), such as the number of layers”)
At the time of filing, one of ordinary skill in the art would have motivation to combine Lee, Kang, and Cohen by taking the apparatus for controlling a machine learning model structure to reduce apparatus power consumption based on an inferencing level based on an environmental condition and error feedback, jointly taught by Lee and Kang, and having the error feedback be an average error, taught by Cohen, as an average error over several time points would be more informative than an error over only a single time point, as anomalous outliers in the error data would be less able to skew responses, increasing the robustness of the response to the error. Such a combination would be obvious.
Regarding claim 13,
Lee, Kang, and Cohen jointly teach The apparatus of claim 11,
Lee further teaches:
wherein the input is captured by a sensor after the machine learning model structure is trained ((Lee [0015]) “a method for recognizing an object through a selected neural network model using an object recognition device includes the steps of: acquiring an image from a camera that captures an external environment of a car;…classifying a current environmental state by applying the image at the current point in time to the classification model for which training has been completed”, a camera is a sensor)
At the time of filing, one of ordinary skill in the art would have motivation to combine the apparatus jointly taught by Lee, Kang, and Cohen for the parent claim of claim 13, claim 11. No new embodiments are introduced, so the reason to combine is the same as for the parent claim.
Regarding claim 18,
Lee, Kang, and Cohen jointly teach The apparatus of claim 11,
Lee further teaches:
wherein the inference function relates to object detection and the inference result is an object detection result ((Lee [0015]) “a method for recognizing an object through a selected neural network model using an object recognition device includes the steps of:…recognizing an object located in front using the selected neural network model”, (Lee [0014]) “The above object recognition unit can recognize at least one of a lane, a type of vehicle, a pedestrian, an animal, and a sign from the captured image”)
At the time of filing, one of ordinary skill in the art would have motivation to combine the apparatus jointly taught by Lee, Kang, and Cohen for the parent claim of claim 18, claim 11. No new embodiments are introduced, so the reason to combine is the same as for the parent claim.
Regarding claim 19,
Lee, Kang, and Cohen jointly teach The apparatus of claim 11,
Lee further teaches:
wherein the inference function relates to image classification ((Lee [0015]) “a method for recognizing an object through a selected neural network model using an object recognition device includes the steps of: acquiring an image from a camera that captures an external environment of a car,…recognizing an object located in front using the selected neural network model”, recognizing an object in front of a car using a camera that acquires an image corresponds to image classification) and wherein the inference result is an image classification result ((Lee [0014]) “The above object recognition unit can recognize at least one of a lane, a type of vehicle, a pedestrian, an animal, and a sign from the captured image”)
At the time of filing, one of ordinary skill in the art would have motivation to combine the apparatus jointly taught by Lee, Kang, and Cohen for the parent claim of claim 19, claim 11. No new embodiments are introduced, so the reason to combine is the same as for the parent claim.
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Kang, further in view of Cohen, further in view of Yae (U.S. Patent Application Publication No. 2019/0115015), hereinafter Yae.
Regarding claim 20,
Lee, Kang, and Cohen jointly teach The apparatus of claim 11,
Yae teaches the following further limitations that neither Lee, nor Kang, nor Cohen teach:
wherein the inference function relates to voice recognition and the inference result is a voice recognition result, ((Yae [0029]) “a method for controlling a vehicular voice recognition system for inferring an intention of a user includes: receiving an input instruction of the user; determining whether the input instruction is present in an instruction database; when the input instruction is not present in the instruction database, performing integrated inference…and providing a service defined in the service domain corresponding to the result of the integrated inference”)
and wherein the processor is to perform a command based on a recognized voice of the voice recognition result ((Yae [0080]) “As illustrated in FIG. 3, an instruction is input (S100). Operation S100 may include an operation of converting an instruction uttered by the user to a text”, (Yae [0085]) “Operation S150 may be performed based on the current state of the vehicle. According to embodiments of the present disclosure, because the instruction of 'Play' was recognized in a situation in which a radio is currently turned on, it may be controlled such that an I-pod or USB that may reproduce music instead of a radio may be operated”)
At the time of filing, one of ordinary skill in the art would have motivation to combine Lee, Kang, Cohen, and Yae by taking the apparatus for controlling a machine learning model structure to reduce apparatus power consumption based on an inferencing level based on an environmental condition and average error feedback, jointly taught by Lee, Kang, and Cohen, and having the inferencing relate to voice recognition for performing a command, taught by Yae, as Yae teaches: (Yae [0003]) “Humans use language as a basic means of communication. Nowadays, language is similarly used when humans communicate with devices. As such, machine recognition of natural language is an important topic”, that is, voice recognition is a well-known application of machine learning that provides users a natural and easy way to communicate with and use their devices. Such a combination would be obvious.
Claims 14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Shtrom et al. (U.S. Patent Application Publication No. 2019/0375422), hereinafter Shtrom.
Regarding claim 14,
Lee teaches A non-transitory tangible computer-readable medium storing executable code, comprising: ((Lee [0007]) “the present invention includes…a storage unit for storing a plurality of neural network models having different numbers of layers”)
code to cause the processor to map the environmental condition to an inferencing level, wherein the environmental condition relates to an illumination condition… ((Lee [0027]) “drivers driving vehicles have difficulty securing a field of vision when the lighting is dazzlingly bright (when the average brightness value is 240 to 255 in Table 1) or when the illuminance is very low (when the average brightness value is 0 to 10 in Table 1), so it is given a level of 10, which represents the WORST. Additionally, when the weather conditions are very good (average brightness value is 90 to 160 in Table 1), it is indicated as level 1, which indicates BEST”)
and code to cause the processor to control machine learning model components based on the inferencing level; ((Lee [0015]) “a method for recognizing an object through a selected neural network model using an object recognition device includes the steps of:…storing a plurality of neural network models having different numbers of layers;… selecting the number of layers corresponding to the calculated environmental index; selecting a neural network model corresponding to the selected number of layers from among the plurality of neural network models;”)
and code to cause the processor to execute an inference function ((Lee [0045]) “At this time, multiple neural network models are pre-learned”, a pre-learned neural network model is an inference function) using the controlled machine learning model components to provide an inferencing result related to the object ((Lee [0015]) “a method for recognizing an object through a selected neural network model using an object recognition device includes the steps of:…selecting a neural network model corresponding to the selected number of layers from among the plurality of neural network models; and recognizing an object located in front using the selected neural network model”, recognizing an object with a selected machine learning model with selected layers corresponds to providing an inferencing result using controlled machine learning model components)
Shtrom teaches the following further limitations that Lee does not teach or more explicitly than Lee:
code to cause a processor to determine an environmental condition indicative of a signal-to-noise ratio to be experienced by a sensor; ((Shtrom [0071]) “when performing a first measurement of the first sensor information using sensor 114 and/or a second measurement of the second sensor information using sensor 116, control engine 210 (and/or sensor 114 or sensor 116, respectively) may determine an environmental condition (such as light intensity, e.g., a luminance level…based on the determined environmental condition and/or information associated with the object, control engine 210 (and/or sensor 114 or sensor 116, respectively) may perform a remedial action)…control engine 210 may provide one or more signals or instructions…so that selective illumination is output…This constant wavelength illumination may allow the first sensor information and/or the second sensor information to be acquired when the signal-to-noise ratio is low”, the luminance level environmental condition that is changed via illumination to reduce the signal-to-noise ratio corresponds to an environmental condition indicative of a signal-to-noise ratio)
wherein the environmental condition relates to…a pose of an object; ((Shtrom [0071]) “control engine 210 may determine whether the object is two dimensional (such as a sign) or three dimensional (such as a person or an animal). Then, based on the determined environmental condition and/or information associated with the object, control engine 210 (and/or sensor 114 or sensor 116, respectively) may perform a remedial action”, determination of whether an object is 2D or 3D corresponds to measurement of depth; according to Applicant’s specification, a pose condition includes measured depth: [0071] “Examples of environmental conditions may include…a pose condition (e.g., object position, object pose, pixel location, measured depth, distance to an object, three-dimensional (3D) object position, object rotation, camera pose, target object zone, etc.)”)
At the time of filing, one of ordinary skill in the art would have motivation to combine Lee and Shtrom by taking the storage medium with instructions to map an illumination condition to an inferencing level and control machine learning model components based on the inferencing level, taught by Lee, and having the environmental condition be indicative of a signal-to-noise ratio to be experienced by a sensor and relate at least partially to a pose of an object, taught by Shtrom, as doing so imparts the predictable benefit of enabling selection of more complex machine learning models that are better able to perform accurate inference on more difficult-to-classify objects, such as those with greater depth, or on noisy data when the received data is noisy. Such a combination would be obvious.
Regarding claim 16,
Lee and Shtrom jointly teach The computer-readable medium of claim 14,
Lee further teaches:
wherein the code to cause the processor to control the machine learning model components comprises code to cause the processor to select a second subset of the machine learning model components ((Lee [0015]) “a method for recognizing an object through a selected neural network model using an object recognition device includes the steps of:…storing a plurality of neural network models having different numbers of layers;… selecting the number of layers corresponding to the calculated environmental index; selecting a neural network model corresponding to the selected number of layers from among the plurality of neural network models;”, choosing one of several neural network models from a plurality of neural network models with varying numbers of layers corresponds to selecting a subset of machine learning model components)
At the time of filing, one of ordinary skill in the art would have had motivation to combine Lee and Shtrom as set forth above for claim 14, the parent claim of claim 16. No new embodiments are introduced, so the rationale to combine is the same as for the parent claim.
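By way of illustration only, the mechanism quoted from Lee for claim 16, in which a plurality of stored neural network models with different numbers of layers is maintained and one is selected according to a calculated environmental index, may be sketched as follows. All model names, layer counts, and index thresholds are hypothetical and do not appear in the references.

```python
# Illustrative sketch (hypothetical values): models keyed by layer
# count stand in for Lee's stored neural network models; an
# environmental index computed from sensor conditions selects among
# them, i.e., selects a subset of machine learning model components.

MODELS = {4: "small_net", 8: "medium_net", 16: "large_net"}

def environmental_index(luminance: float, noise: float) -> int:
    """Map raw environmental measurements to a discrete index (0-2)."""
    # Harder conditions (low light, high noise) yield a higher score.
    score = (1.0 - min(luminance, 1.0)) + min(noise, 1.0)
    if score < 0.5:
        return 0
    if score < 1.2:
        return 1
    return 2

def select_model(luminance: float, noise: float) -> str:
    """Choose the stored model whose layer count matches the index."""
    index = environmental_index(luminance, noise)
    layer_counts = sorted(MODELS)  # [4, 8, 16]
    return MODELS[layer_counts[index]]
```
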
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Shtrom, further in view of Kang.
Regarding claim 15,
Lee and Shtrom jointly teach The computer-readable medium of claim 14,
Kang teaches the following further limitation, which Lee teaches less explicitly and which Shtrom does not teach:
wherein the code to cause the processor to control the machine learning model components comprises code to cause the processor to remove a first subset of the machine learning model components ((Kang Pg. 5) “the computational cost at each convolution layer can be scaled simply by changing the number of active filters and feature maps…Each layer in the table specifies the DMS scaling factor si, which is the ratio of active filters when the pruning is applied to the layer. The scaling factor is determined so that each layer in the table yields equal amount of savings via pruning. In Task_Table, each task maintains its own DMS level as an index to DMS_Table. The DMS_level indicates how many convolution layers will be pruned during the task’s inference”, pruning convolution layers corresponds to removing a subset of machine learning model components)
At the time of filing, one of ordinary skill in the art would have had motivation to combine Lee, Shtrom, and Kang by taking the medium for controlling a machine learning model of claim 14, jointly taught by Lee and Shtrom, and including removal of machine learning model components, taught by Kang. It is well known within the art that convolutional neural networks with more filters in their convolutional layers are more complex, requiring additional memory and computation time for inference and thus higher power consumption; reducing the number of filters within the neural network therefore imparts the predictable benefit of reduced memory usage, computation time, and power consumption. Such a combination would be obvious.
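By way of illustration only, the pruning scheme quoted from Kang for claim 15, in which a per-layer scaling factor gives the ratio of active filters and a DMS level indexes how many layers are pruned, may be sketched as follows. The layer sizes and scaling factors are hypothetical and do not appear in the references.

```python
# Illustrative sketch (hypothetical values): each convolution layer
# keeps only a fraction (its scaling factor) of its filters active,
# and only the first `dms_level` layers are pruned, i.e., a first
# subset of machine learning model components is removed.

def active_filters(filter_counts, scale_factors, dms_level):
    """Return the active filter count per layer after pruning."""
    pruned = []
    for i, n in enumerate(filter_counts):
        if i < dms_level:
            # Prune this layer: keep the fraction given by its factor.
            pruned.append(max(1, int(n * scale_factors[i])))
        else:
            pruned.append(n)  # layer left unpruned
    return pruned
```
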
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Shtrom, further in view of Liu.
Regarding claim 17,
Lee and Shtrom jointly teach The computer-readable medium of claim 14,
Liu teaches the following further limitation that neither Lee nor Shtrom teaches:
wherein the code to cause the processor to control the machine learning model components comprises code to cause the processor to select a quantization for the machine learning model components ((Liu [0153]) “The following example is that the data to be quantized is the neurons and the weights of a target layer in the neural network”) based on the [inferencing] level ((Liu [0262]) “Optionally, the preset condition may be a preset threshold set by a user”; (Liu [0803]-[0804]) “the data bit width determination unit configured to determine the target data bit width corresponding to the current verify iteration according to the quantization error is specifically configured to:…increase the data bit width corresponding to the current verify iteration to obtain the target data bit width corresponding to the current verify iteration if the quantization error is greater than or equal to the first preset threshold;”, a preset threshold selected by a user is a level, and Lee teaches a level for inferencing)
At the time of filing, one of ordinary skill in the art would have had motivation to combine Lee, Shtrom, and Liu by taking the medium for controlling a machine learning model of claim 14, jointly taught by Lee and Shtrom, and having controlling the machine learning model structure encompass selecting quantization, taught by Liu, as Liu teaches: (Liu [0018]) “The data bit width is used by an artificial intelligence processor to quantize data involved in the process of the neural network operation and convert high-precision data into low-precision fixed-point data, which may reduce storage space of data involved in the process of neural network operation…Smaller data storage space enables neural network deployment to occupy smaller space, thus the on-chip memory of an artificial intelligence processor chip may accommodate more data, which may reduce memory access data in the artificial intelligence processor chip and improve computation performance”. Such a combination would be obvious.
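By way of illustration only, the bit-width determination quoted from Liu for claim 17, in which the data bit width is increased when the quantization error at the current verify iteration meets or exceeds a first preset threshold, may be sketched as follows. The threshold, step size, and maximum width are hypothetical and do not appear in the references.

```python
# Illustrative sketch (hypothetical values): the target data bit width
# is increased when the quantization error reaches a preset threshold,
# i.e., a quantization is selected based on a user-set level.

def target_bit_width(current_width: int, quant_error: float,
                     first_threshold: float = 0.1, step: int = 2,
                     max_width: int = 16) -> int:
    """Increase the bit width when quantization error is too high."""
    if quant_error >= first_threshold:
        return min(current_width + step, max_width)
    return current_width  # error acceptable; keep current width
```
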
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Wang et al. (U.S. Patent Application Publication No. 2019/0050710) teaches a method of adaptively adjusting bit-widths of neural network parameters, i.e. quantizing the parameters.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VICTOR A NAULT whose telephone number is (703) 756-5745. The examiner can normally be reached M - F, 12 - 8.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Miranda Huang can be reached at (571) 270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/V.A.N./Examiner, Art Unit 2124
/Kevin W Figueroa/Primary Examiner, Art Unit 2124