Prosecution Insights
Last updated: April 18, 2026
Application No. 17/708,585

ELECTRONIC DEVICE FOR PROCESSING DATA BASED ON ARTIFICIAL INTELLIGENCE MODEL AND METHOD FOR OPERATING THE SAME

Status: Non-Final Office Action (§101, §103)
Filed: Mar 30, 2022
Examiner: TRAN, AMY NMN
Art Unit: 2126
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 3 (Non-Final)

Grant Probability: 36% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 5y 2m
Grant Probability With Interview: 84%

Examiner Intelligence

Career Allow Rate: 36% (10 granted / 28 resolved; -19.3% vs TC avg) — grants only 36% of cases
Interview Lift: +47.9% for resolved cases with interview (a strong lift)
Avg Prosecution: 5y 2m (typical timeline); 24 applications currently pending
Career History: 52 total applications across all art units

Statute-Specific Performance

§101: 32.5% (-7.5% vs TC avg)
§103: 44.2% (+4.2% vs TC avg)
§102: 6.0% (-34.0% vs TC avg)
§112: 15.6% (-24.4% vs TC avg)

TC averages are estimates • Based on career data from 28 resolved cases
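Assuming the dashboard's "vs TC avg" and interview-lift figures are absolute percentage-point differences (an inference about this page's conventions, not stated on it), the headline numbers can be cross-checked with a few lines of arithmetic:

```python
# Sanity-check the dashboard's examiner metrics from the underlying counts.
# Assumes "vs TC avg" and "Interview Lift" are percentage-point differences.

granted, resolved = 10, 28                 # "10 granted / 28 resolved"
allow_rate = granted / resolved            # career allow rate
print(f"allow rate: {allow_rate:.0%}")     # rounds to 36%

tc_avg = allow_rate + 0.193                # "-19.3% vs TC avg" implies the TC average
print(f"implied TC average: {tc_avg:.1%}")

with_interview, lift = 0.84, 0.479         # "84% With Interview", "+47.9% Interview Lift"
print(f"implied rate without interview: {with_interview - lift:.1%}")
```

The implied without-interview rate (~36.1%) matches the career allow rate, which suggests the lift is measured against the examiner's overall baseline.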

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/09/2026 has been entered.

Response to Amendment

The amendments filed 02/09/2026 have been entered. The status of the claims is as follows: Claims 1-20 remain pending in the application. Claims 1, 8, 12, and 16-20 are amended.

Response to Arguments

In reference to the Claim Rejection under 35 U.S.C. 103: Applicant's arguments, see Remarks pg. 10-16, filed 02/09/2026, with respect to the rejection(s) of claim(s) under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Shi et al. (US 2017/0270408 A1).

In reference to the Claim Rejection under 35 U.S.C. 101: Applicant asserts in Remarks pg. 16-19 that the 101 rejection is improper because the amended claims are not directed to a mere mental process. In particular, Applicant contends that the claims now require calculating costs output from an AI model based on accuracy and energy consumption, detecting specific events such as changes in content or device state, and then selecting values or models in response to those detected events, which Applicant says cannot practically be performed in the human mind, even with pen and paper.
Applicant further argues that, even if the claim were considered to involve a judicial exception, the claims integrate that exception into a practical application under Step 2A, Prong 2 because they improve computer functionality by increasing processing efficiency, reducing compute load, and enabling an electronic device to select an AI model, parameter set, or processor with optimal efficiency when handling data such as images or audio.

Examiner respectfully disagrees and notes that Applicant's amendments do not overcome the 101 rejection because the amended claim still recites evaluating information and making a selection based on that evaluation, which is a mental process and mathematical concept. In particular, calculating respective costs as a function of accuracy and energy consumption is directed to a mathematical calculation; detecting changes in content or device state and selecting values based on those calculated costs merely amounts to observing information, analyzing it using criteria, and choosing an option accordingly. The recited processor, electronic device, content, and artificial intelligence model are described at a high level and perform their ordinary functions, and thus do not integrate the judicial exception into a practical application. Any alleged improvement in efficiency or reduced compute load comes from the abstract decision logic itself rather than from a specific technological improvement in the computer or AI model. Applicant's arguments filed on 02/09/2026 have been fully considered but they are not persuasive.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 for containing an abstract idea without significantly more.
Regarding claim 1:

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? Yes, the claim is a process.

Step 2A – Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, the claim recites an abstract idea.

selecting first values from among a plurality of values associated with an computation capability to process the obtained at least one content – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).

calculating respective costs output from the artificial intelligence model when the values associated with the computation capability is applied to the artificial intelligence model, the costs are calculated as a function of an accuracy of result data and energy consumption of the artificial intelligence model – This limitation is directed to mathematical calculation as it is calculating the costs as a function of an accuracy and energy consumption (see MPEP 2106.04(a)(2) I. C.).

detecting an occurrence of a specific event relating to at least one of a change in the at least one content, or a change in a state of the electronic device – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).

in response to detecting the occurrence of the specific event, selecting second values from among the plurality of values based on the calculation of the respective costs; and – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).
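For readers following the Alice analysis, the claim-1 logic the Examiner characterizes as a mathematical concept plus mental process reduces to a small decision rule. A minimal sketch, with all names, the cost weighting, and the candidate values being illustrative assumptions rather than anything from the application:

```python
# Hypothetical sketch of the cost-based selection recited in claim 1, as
# characterized in this Office Action. The cost function, weighting, and
# candidate values are illustrative assumptions, not from the application.

def cost(accuracy: float, energy: float, alpha: float = 0.5) -> float:
    """Cost as a function of result accuracy and energy consumption.
    Lower is better, so accuracy enters as (1 - accuracy)."""
    return alpha * (1.0 - accuracy) + (1.0 - alpha) * energy

def select_values(candidates):
    """Select the (accuracy, energy) candidate with the lowest cost."""
    return min(candidates, key=lambda c: cost(*c))

# Candidate value sets the device could apply to its AI model.
candidates = [(0.95, 0.80), (0.90, 0.40), (0.85, 0.20)]

first = select_values(candidates)    # "selecting first values"
# ... on detecting a specific event (content or device-state change),
# the costs are recomputed and "second values" are selected:
second = select_values(candidates)
```

The rejection's point is visible in the sketch: each step is either arithmetic (`cost`) or a comparison (`min`), which is why the Examiner maps the limitations to mathematical calculation and mental-process groupings.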
Step 2A – Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? No, there are no additional elements that integrate the judicial exception into a practical application. The additional elements:

executing, [by at least one processor of the electronic device], an application and obtaining at least one content based on the executed application – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

by at least one processor of the electronic device – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

obtaining first result data by processing the at least one content, [using a first artificial intelligence model having at least one first parameter], obtained by configuring at least one parameter of an artificial intelligence model stored in the electronic device as the at least one first parameter corresponding to the first values – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

using a first artificial intelligence model having at least one first parameter – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).
obtaining second result data by processing the at least one content, [using a second artificial intelligence model having at least one second parameter], obtained by configuring the at least one parameter of the artificial intelligence model as the at least one second parameter corresponding to the second values – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

using a second artificial intelligence model having at least one second parameter – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? No, there are no additional elements that amount to significantly more than the judicial exception. The additional elements are:

executing, [by at least one processor of the electronic device], an application and obtaining at least one content based on the executed application – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

by at least one processor of the electronic device – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).
obtaining first result data by processing the at least one content, [using a first artificial intelligence model having at least one first parameter], obtained by configuring at least one parameter of an artificial intelligence model stored in the electronic device as the at least one first parameter corresponding to the first values – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

using a first artificial intelligence model having at least one first parameter – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

obtaining second result data by processing the at least one content, [using a second artificial intelligence model having at least one second parameter], obtained by configuring the at least one parameter of the artificial intelligence model as the at least one second parameter corresponding to the second values – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

using a second artificial intelligence model having at least one second parameter – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

Regarding claim 2: Claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1, which includes an abstract idea (see rejection for claim 1).
The additional limitations:

wherein the at least one content includes a plurality of images, and wherein the specific event includes a change in the content within the plurality of images, or an identification of a movement of the electronic device – This limitation merely recites a further limitation on the selecting first values from among a plurality of values associated with an computation capability to process the obtained at least one content from claim 1, which was directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).

Regarding claim 3: Claim 3 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1, which includes an abstract idea (see rejection for claim 1). The additional limitations:

wherein the artificial intelligence model is a model pre-trained to output result data in response to receiving the at least one data obtained based on execution of the application, and – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

wherein the at least one parameter of the artificial intelligence model includes at least one weight and at least one activation function obtained according to the training – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.
Regarding claim 4: Claim 4 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 3, which includes an abstract idea (see rejection for claim 3). The additional limitations:

wherein the plurality of values associated with the computation capability include combinations of values each including a value for the weight and a value for the activation function, and – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

wherein the value for the weight and the value for the activation function are associated with the computation capability – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

Regarding claim 5: Claim 5 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 4, which includes an abstract idea (see rejection for claim 4). The additional limitations:

when an event for processing the at least one content obtained based on the executed application occurs, selecting a first combination including a first value for the weight and a first value for the activation function, as the first values, from among the combinations of the values; and – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).
when the specific event occurs, selecting a second combination including a second value for the weight and a second value for the activation function, as the second values, from among the combinations of the values – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).

Regarding claim 6: Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 5, which includes an abstract idea (see rejection for claim 5). The additional limitations:

obtaining the first artificial intelligence model by setting the at least one weight of the artificial intelligence model as at least one first weight based on the first value for the weight and the at least one activation function of the artificial intelligence model as at least one first activation function based on the first value for the activation function; and – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

obtaining the second artificial intelligence model by setting the at least one weight of the artificial intelligence model as at least one second weight based on the second value for the weight and the at least one activation function of the artificial intelligence model as at least one second activation function based on the second value for the activation function.
– Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

Regarding claim 7: Claim 7 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 6, which includes an abstract idea (see rejection for claim 6). The additional limitations:

identifying a first processor corresponding to the first value for the weight and the first value for the activation function among a plurality of processors of the electronic device, and – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).

controlling the first processor to process the at least one content using the first artificial intelligence model; and – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

identifying a second processor corresponding to the second value for the weight and the second value for the activation function among the plurality of processors of the electronic device, and – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).

controlling the second processor to process the at least one content using the second artificial intelligence model.
– Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

Regarding claim 8: Claim 8 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 5, which includes an abstract idea (see rejection for claim 5). The additional limitations:

wherein calculating the respective costs output from the artificial intelligence model comprises: calculating costs for some of the combinations of the values based on a value for the weight and a value for the activation function included in each of some of the combinations of the values during a specific period based on the occurrence of the specific event – This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.).

the costs indicating an accuracy of result data obtained when the at least one data is processed based on some of the combinations of the values, and energy consumption obtained when the at least one data is processed based on some of the combinations of the values; and – This limitation merely recites a further limitation on the calculating costs for some of the combinations of the values based on a value for the weight and a value for the activation function included in each of some of the combinations of the values during a specific period based on the occurrence of the specific event, which is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.).

selecting the second combination of the values having a lowest cost among the calculated costs
– This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).

Regarding claim 9: Claim 9 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 8, which includes an abstract idea (see rejection for claim 8). The additional limitations:

maintaining the processing of the at least one content [using the first artificial intelligence model] when the lowest cost among the calculated costs is a threshold or more – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

using the first artificial intelligence model – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

Regarding claim 10: Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 8, which includes an abstract idea (see rejection for claim 8). The additional limitations:

identifying a third combination including a highest value for the weight and a highest value for the activation function among the combinations of the values – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).
obtaining a third artificial intelligence model having at least one third parameter configured based on the highest value for the weight and the highest value for the activation function, and – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

obtaining artificial intelligence models having at least one fourth parameter configured based on values for the weight and values for the activation function included in the some of the combinations of the values – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

obtaining third result data by processing the at least one data based on the third artificial intelligence model during the designated period, and – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

obtaining a plurality of result data by processing the at least one data based on the artificial intelligence models; and – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.
calculating a difference between at least respective parts of the plurality of result data and at least part of the third result data – This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.).

Regarding claim 11: Claim 11 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 10, which includes an abstract idea (see rejection for claim 10). The additional limitations:

obtaining information associated with an amount of energy consumed when the at least one data is processed based on each of the plurality of artificial intelligence models; and – This limitation is directed to receiving or transmitting data over a network. The courts have recognized receiving or transmitting data over a network as well understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (see MPEP 2106.05(d) II.).

calculating the costs based on the calculated difference and the consumed amount of energy – This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.).

Regarding claim 12:

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? Yes, the claim is a process.

Step 2A – Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, the claim recites an abstract idea.

select first values from among a plurality of values associated with an computation capability to process the obtained at least one content – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).
calculating respective costs output from the artificial intelligence model when the values associated with the computation capability is applied to the artificial intelligence model, the costs are calculated as a function of an accuracy of result data and energy consumption of the artificial intelligence model – This limitation is directed to mathematical calculation as it is calculating the costs as a function of an accuracy and energy consumption (see MPEP 2106.04(a)(2) I. C.).

detecting an occurrence of a specific event relating to at least one of a change in the at least one content, or a change in a state of the electronic device – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).

in response to detecting the occurrence of the specific event, selecting second values from among the plurality of values based on the calculation of the respective costs; and – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).

Step 2A – Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? No, there are no additional elements that integrate the judicial exception into a practical application. The additional elements:

memory storing instructions – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

wherein the instructions, when executed by at least one processor individually or collectively, cause the electronic device to: – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).
execute an application and obtaining at least one content based on the executed application – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

obtain first result data by processing the at least one content, [using a first artificial intelligence model having at least one first parameter], obtained by configuring at least one parameter of an artificial intelligence model stored in the electronic device as the at least one first parameter corresponding to the first values – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

using a first artificial intelligence model having at least one first parameter – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

obtain second result data by processing the at least one content, [using a second artificial intelligence model having at least one second parameter], obtained by configuring the at least one parameter of the artificial intelligence model as the at least one second parameter corresponding to the second values – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.
using a second artificial intelligence model having at least one second parameter – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? No, there are no additional elements that amount to significantly more than the judicial exception. The additional elements are:

memory storing instructions – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

wherein the instructions, when executed by at least one processor individually or collectively, cause the electronic device to: – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

execute an application and obtaining at least one content based on the executed application – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.

obtain first result data by processing the at least one content, [using a first artificial intelligence model having at least one first parameter], obtained by configuring at least one parameter of an artificial intelligence model stored in the electronic device as the at least one first parameter corresponding to the first values – Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application.
using a first artificial intelligence model having at least one first parameter – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)). obtain second result data by processing the at least one content, [using a second artificial intelligence model having at least one second parameter], obtained by configuring the at least one parameter of the artificial intelligence model as the at least one second parameter corresponding to the second values. Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application. using a second artificial intelligence model having at least one second parameter – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)). Regarding claim 13, Claim 13 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 12, which includes an abstract idea (see rejection for claim 12). The additional limitations: wherein the at least one content includes a plurality of images, and wherein the specific event includes a change in the content within the plurality of images, or an identification of a movement of the electronic device. – This limitation merely recites a further limitation on the selecting first values from among a plurality of values associated with an computation capability to process the obtained at least one content from Claim 12, which was directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.) 
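To make concrete the behavior recited in claims 12-13 that is at issue here — selecting computation-capability values, then switching to different values when a "specific event" (a change in content, or movement of the device) occurs — the claimed steps can be sketched in code. This is a hypothetical illustration only; every name and value below is invented for exposition and does not come from the application's specification or the cited art:

```python
# Hypothetical sketch of the selection steps of claims 12-13. The
# "computation capability" values are modeled as candidate bit widths,
# consistent with the examiner's interpretation in the art rejection.

CAPABILITY_VALUES = [32, 16, 8]  # illustrative candidate bit widths

def select_values(event_detected: bool) -> int:
    """Return the most precise value normally ("first values"), and a
    cheaper value after the specific event occurs ("second values")."""
    if event_detected:
        return CAPABILITY_VALUES[-1]  # second values: lowest-cost setting
    return CAPABILITY_VALUES[0]       # first values: most precise setting

def specific_event(prev_frame, frame, device_moved: bool) -> bool:
    """Claim 13's event: a change in the content within the images,
    or an identification of a movement of the electronic device."""
    content_changed = prev_frame is not None and prev_frame != frame
    return content_changed or device_moved
```

A caller would process content with a model configured per `select_values(False)`, and reconfigure the model's parameters once `specific_event(...)` returns true, mirroring the first-model/second-model structure of the claims.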
Regarding claim 14, Claim 14 is rejected under 35 U.S.C 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 12 which includes an abstract idea (see rejection for claim 12). The additional limitations: wherein the artificial intelligence model is a model pre-trained to output result data in response to receiving the at least one data obtained based on execution of the application, and Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application. wherein the at least one parameter of the artificial intelligence model includes at least one weight and at least one activation function obtained according to the training. Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application. Regarding claim 15, Claim 15 is rejected under 35 U.S.C 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 14 which includes an abstract idea (see rejection for claim 14). 
The additional limitations: wherein the plurality of values associated with the computation capability include combinations of values each including a value for the weight and a value for the activation function, and Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application. wherein the value for the weight and the value for the activation function are associated with the computation capability. Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application. Regarding claim 16, Claim 16 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 15, which includes an abstract idea (see rejection for claim 15). The additional limitations: wherein the instructions cause the electronic device to: This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)). when an event for processing the at least one content obtained based on the executed application occurs, select a first combination including a first value for the weight and a first value for the activation function, as the first values, from among the combinations of the values; and - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.) 
when the specific event occurs, select a second combination including a second value for the weight and a second value for the activation function, as the second values, from among the combinations of the values. - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.) Regarding claim 17, Claim 17 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 16, which includes an abstract idea (see rejection for claim 16). The additional limitations: wherein the instructions cause the electronic device to: This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)). obtain the first artificial intelligence model by setting the at least one weight of the artificial intelligence model as at least one first weight based on the first value for the weight and the at least one activation function of the artificial intelligence model as at least one first activation function based on the first value for the activation function; and Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application. obtain the second artificial intelligence model by setting the at least one weight of the artificial intelligence model as at least one second weight based on the second value for the weight and the at least one activation function of the artificial intelligence model as at least one second activation function based on the second value for the activation function. 
Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application. Regarding claim 18, Claim 18 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 17, which includes an abstract idea (see rejection for claim 17). The additional limitations: wherein the instructions cause the electronic device to: This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)). identify a first processor corresponding to the first value for the weight and the first value for the activation function among a plurality of processors of the electronic device and control the first processor to process the at least one content using the first artificial intelligence model; and - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.) identify a second processor corresponding to the second value for the weight and the second value for the activation function among the plurality of processors of the electronic device and control the second processor to process the at least one content using the second artificial intelligence model. - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.) Regarding claim 19, Claim 19 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. 
The claim is dependent on claim 16, which includes an abstract idea (see rejection for claim 16). The additional limitations: wherein the instructions cause the electronic device comprise: This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.) calculate costs for some of the combinations of the values based on a value for the weight and a value for the activation function corresponding to each of some of the combinations of the values during a designated period based on the occurrence of the designated event, - This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.) the costs indicating an accuracy of result data obtained as the at least one data is processed based on some of the combinations of the values and energy consumption obtained as the at least one data is processed based on some of the combinations of the values; and – This limitation merely recites a further limitation on the calculate costs for some of the combinations of the values based on a value for the weight and a value for the activation function included in each of some of the combinations of the values during a specific period based on the occurrence of the specific event, which is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.) select a second combination of the values having a lowest cost among the calculated costs. - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.) Regarding claim 20: Step 1 – Is the claim to a process, machine, manufacture or composition of matter? Yes, the claim is a process. 
Step 2A – Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, the claim recites an abstract idea. selecting a first processor to process the obtained at least one content using an artificial intelligence model stored in the electronic device, the first processor configured to correspond to first values among a plurality of values associated with an computation capability - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.) calculating respective costs output from the artificial intelligence model when the values associated with the computation capability is applied to the artificial intelligence model, the costs are calculated as a function of an accuracy of result data and energy consumption of the artificial intelligence model; This limitation is directed to mathematical calculation as it is calculating the costs as a function of an accuracy and energy consumption (see MPEP 2106.04(a)(2) I. C.) selecting a second processor based on an occurrence of a specific event relating to at least one of a change in the at least one content, or a change in a state of the electronic device, the second processor configured to correspond to second values among the plurality of values associated with the computation capability - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.) wherein selecting the second processor further comprises selecting second values from among the plurality of values based on the calculation of the respective costs. 
This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.) Step 2A – Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? No, there are no additional elements that integrate the judicial exception into a practical application. The additional elements: executing, [by at least one processor of the electronic device], an application and obtaining at least one content based on the executed application; Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application. by at least one processor of the electronic device – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)). controlling the first processor to process the at least one content, [using a first artificial intelligence model having at least one first parameter], obtained by configuring at least one parameter of an artificial intelligence model stored in the electronic device as the at least one first parameter corresponding to the first values; Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application. using a first artificial intelligence model having at least one first parameter – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)). 
controlling the second processor to process the at least one content, [using a second artificial intelligence model having at least one second parameter], obtained by configuring the at least one parameter of the artificial intelligence model as the at least one second parameter corresponding to the second values. - Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application. using a second artificial intelligence model having at least one second parameter – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f) (2)). Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? No, there are no additional elements that amount to significantly more than the judicial exception. The additional elements are: executing, [by at least one processor of the electronic device], an application and obtaining at least one content based on the executed application; Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application. by at least one processor of the electronic device – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f) (2)). 
controlling the first processor to process the at least one content, [using a first artificial intelligence model having at least one first parameter], obtained by configuring at least one parameter of an artificial intelligence model stored in the electronic device as the at least one first parameter corresponding to the first values; Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application. using a first artificial intelligence model having at least one first parameter– This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f) (2)). controlling the second processor to process the at least one content, [using a second artificial intelligence model having at least one second parameter], obtained by configuring the at least one parameter of the artificial intelligence model as the at least one second parameter corresponding to the second values. - Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the exception into a practical application. using a second artificial intelligence model having at least one second parameter – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f) (2)). Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 1-6, 8-9 and 12-19 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 11,263,513 B2) in view of Palkar (US 11,568,251 B1) and further in view of Shi et al. (US 2017/0270408 A1). Regarding Claim 1, Kim explicitly discloses: selecting first values from among a plurality of values associated with an computation capability to process the obtained at least one content; (Kim, Col. 2, Lines 23-28: “According to an embodiment of the present disclosure, a method for quantizing bits of an artificial neural network is provided. 
The method includes the steps of: (a) selecting at least one parameter among a plurality of parameters used in the artificial neural network;”, Col. 6, Lines 60-66: “In some embodiments, the bit quantization method and system of the present disclosure may reduce a size of a parameter used in an artificial neural network operation to a unit of bits. In general, data structures of 32-bit, 16-bit, or 8-bit units (for example, CPU, GPU, memory, cache, buffer, and the like), are used for computation of artificial neural networks.”) [Examiner’s note: “a plurality of values associated with an computation capability” is being interpreted as the unit of bits, i.e., 32-bit, 16-bit or 8-bit units] obtaining first result data by processing the at least one content, using a first artificial intelligence model having at least one first parameter, obtained by configuring at least one parameter of an artificial intelligence model stored in the electronic device as the at least one first parameter corresponding to the first values; (Kim, Col. 12, Lines 57-66: “In the bit quantization process of the convolution layer described above, the weight kernel quantization 1028 may be performed using the following equation [equation image omitted; per the surrounding text, aq = round(aj · 2^k) / 2^k]. Where, aj is the weight value to be quantized, for example, the weight of a real number and each weight in the weight kernel, k represents the number of bits to quantize, aq represents the result of aj being quantized by k bits.”, Col. 
13, Lines 1-4: “That is, according to the above formula, firstly, aj is multiplied by a predetermined binary number 2^k so that aj is incremented by k bits, hereinafter referred to as "the first value".”) obtaining the second result data by processing the at least one content, using a second artificial intelligence model having at least one second parameter, obtained by configuring the at least one parameter of the artificial intelligence model as the at least one second parameter corresponding to the second values. (Kim, Col. 13, Lines 4-14: “Next, by performing a rounding or truncation operation on the first value, the number after the decimal point of aj is removed, hereinafter referred to as "second value". The second value is divided by a binary number of 2^k and the number of bits is reduced again by k bits, so that the element value of the final quantized weight kernel can be calculated. Such weight or weight kernel quantization 1028 is repeatedly executed for all element values of the weight or weight kernel 1014 to generate quantized weight values 1018.”) However, Kim fails to disclose: executing, by at least one processor of the electronic device, an application and obtaining at least one content based on the executed application; calculating respective costs output from the artificial intelligence model when the values associated with the computation capability is applied to the artificial intelligence model, the costs are calculated as a function of an accuracy of result data and energy consumption of the artificial intelligence model; detecting an occurrence of a specific event relating to at least one of a change in the at least one content, or a change in a state of the electronic device; in response to detecting the occurrence of the specific event, selecting second values from among the plurality of values based on the calculation of the respective costs However, Palkar explicitly discloses: executing, by at least one processor of the electronic 
device, an application and obtaining at least one content based on the executed application; (Palkar, Col. 4, Lines 25-29: “In an example, edge devices may comprise security camera applications. In an example, the security camera applications may include battery-powered cameras 70, doorbell cameras 72, outdoor cameras 74, and indoor cameras 76.”, Col. 4, Lines 37-42: “an edge device utilizing a quantized neural network generated in accordance with an embodiment of the invention may take massive amounts of image data and make on-device inferences to obtain useful information with reduced bandwidth and/or reduced power consumption.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kim and Palkar. Kim teaches a method and system for bit quantization of an artificial neural network. Palkar teaches dynamic quantization for models run on edge devices. One of ordinary skill would have motivation to combine Kim and Palkar because MPEP 2143 sets forth the Supreme Court rationales for obviousness including: (D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results; (E) “Obvious to try” – choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success; (F) Known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art. 
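For orientation, the k-bit quantization that the quoted Kim passages describe — multiply a weight by 2^k, round or truncate away the fraction, then divide by 2^k — can be sketched in a few lines. This is an illustrative reading of the quoted text under those stated steps, not code from Kim, Palkar, or the application:

```python
def quantize(a_j: float, k: int) -> float:
    """Quantize a real-valued weight to k fractional bits, following the
    steps in the quoted Kim passage: multiply by 2**k ("the first value"),
    round away the fractional part ("the second value"), then divide by
    2**k to obtain the quantized weight."""
    first_value = a_j * (2 ** k)       # shift the weight up by k bits
    second_value = round(first_value)  # drop the digits after the decimal point
    return second_value / (2 ** k)     # shift back down by k bits

# A weight of 0.7 quantized to 3 bits snaps to the nearest multiple of 1/8:
print(quantize(0.7, 3))  # 0.75
```

The effect is that every weight is constrained to a grid of spacing 1/2^k, which is what lets the hardware represent it in k fractional bits.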
However, Shi explicitly discloses: calculating respective costs output from the artificial intelligence model when the values associated with the computation capability is applied to the artificial intelligence model, the costs are calculated as a function of an accuracy of result data and energy consumption of the artificial intelligence model; (Shi, ¶[0031]: “One method to evaluate the quality of the output is to compute costs. Accuracy cost generator 42 generates an accuracy cost that is a measure of how close to the expected results the current cycle's output is.”, ¶[0032]: “Another cost is the hardware cost of neural network 36. In general, accuracy can be improved by increasing the amount of hardware available in neural network 36, so there is a tradeoff between hardware cost and accuracy cost. Typical hardware cost factors that can be computed by hardware complexity cost generator 44 include a weight decay function to prevent over-fitting when adjusting weight ranges, and a sparsity function to improve the structure and regularity.”, ¶[0063]: “Curves 152, 154 show accuracy vs. weight cost using bit-depth optimization engine.”, ¶[0066-0067]: “A typical neural network training involves minimizing a cost function J(ϴ): [equation image omitted]”) detecting an occurrence of a specific event relating to at least one of a change in the at least one content, or a change in a state of the electronic device; (Shi, ¶[0048]: “FIG. 6B shows spikes in the cost gradient that occur at steps when the number of binary bits changes. Gradient curve 144 is flat for long ranges of weight values between steps. A spike in gradient curve 144 occurs just to the right of each step in quantized weight cost curve 140. 
This spike indicates that the slope or gradient of quantized weight cost curve 140 changes dramatically when the number of binary bits drops.”) in response to detecting the occurrence of the specific event, selecting second values from among the plurality of values based on the calculation of the respective costs (Shi, ¶[0049]: “Hardware complexity cost generator 44 can generate the gradient of quantized weight cost curve 140 for each weight, and bit-depth optimization engine 48 can search for weights with high cost gradients and select these for reduction during training optimization. Over many cycles of selection, weights can be reduced and traded off with accuracy until an optimal choice of weight values is obtained for low-bit-depth neural network 40.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kim and Shi. Kim teaches a method and system for bit quantization of an artificial neural network. Shi teaches evaluating an AI model based on multiple cost metrics. One of ordinary skill would have motivation to combine Kim and Shi to improve the system by enabling it to more efficiently trade off model accuracy and computation burden, thereby selecting values that better satisfy performance constraints while reducing hardware complexity and energy consumption. Regarding Claim 2, the combination of Kim, Palkar and Shi discloses all the information of claim 1 (as shown in the rejection above). Kim in view of Palkar and Shi further discloses: wherein the at least one content includes a plurality of images, and wherein the specific event includes a change in the content within the plurality of images, or an identification of a movement of the electronic device. (Kim, Col. 9, Lines 25-29: “Here, the CONV 420 may serve as a kind of template for extracting features from high-dimensional input data, for example, images or videos. 
Specifically, one convolution may be repeatedly applied several times while changing a location for a portion of the input data 410 to extract features for the entire input data 410.”) Regarding Claim 3, the combination of Kim, Palkar and Shi discloses all the information of claim 1 (as shown in the rejection above). Kim in view of Palkar and Shi further discloses: wherein the artificial intelligence model is a model pre-trained to output result data in response to receiving the at least one data obtained based on execution of the application, and (Palkar, Col. 2, Lines 42-51: “The invention concerns a method of generating a quantized neural network comprising (i) receiving a pre-trained neural network model and (ii) modifying the pre-trained neural network model to calculate one or more statistics on an output of one or more layers of the pre-trained neural network model based on a current image and set up an output data format for one or more following layers of the pre-trained neural network model for one or more of the current image and a subsequent image dynamically based on the one or more statistics.”, Col. 4, Lines 25-26: “In an example, edge devices may comprise security camera applications.”) [Examiner’s note: “the at least one data obtained based on execution of the application” is being interpreted as the current image obtained from the security camera applications ] wherein the at least one parameter of the artificial intelligence model includes at least one weight and at least one activation function obtained according to the training. (Palkar, Col. 3, Lines 42- 48: “In various embodiments, a method provides an alternate way of doing post training quantization. Instead of selecting a data format based on a calibration dataset, an activation datapath may be dynamically adjusted based on statistics of a current data set (e.g., image, etc.) 
and/or a different quantized weight kernel may be selected dynamically based on the statistics of the current data set (e.g., image, etc.).”) Regarding Claim 4, the combination of Kim, Palkar and Shi discloses all the information of claim 3 (as shown in the rejection above). Kim in view of Palkar and Shi further discloses: wherein the plurality of values associated with the computation capability include combinations of values each including a value for the weight and a value for the activation function, and wherein the value for the weight and the value for the activation function are associated with the computation capability. (Kim, Col. 13, Lines 45-50: “Through the described weight or weight kernel quantization 1028 and the feature map or activation map quantization 1030, the memory size and the amount of computation required for a convolution operation of the convolutional layer 420 of the convolutional neural network can be reduced in a unit of bits.”) [Examiner’s note: “the computation capability” is being interpreted as the memory size] Regarding Claim 5, the combination of Kim, Palkar and Shi discloses all the information of claim 4 (as shown in the rejection above). Kim in view of Palkar and Shi further discloses: when an event for processing the at least one content obtained based on the executed application occurs, selecting a first combination including a first value for the weight and a first value for the activation function, as the first values, from among the combinations of the values; and (Kim, Col. 6, Lines 45 - 48: “In the present disclosure, "parameter" may mean one or more of an artificial neural network or weight data, feature map data, and activation map data of each layer configuring the artificial neural network.”, Col. 2, Lines 23-28: “According to an embodiment of the present disclosure, a method for quantizing bits of an artificial neural network is provided. 
The method includes the steps of: (a) selecting at least one parameter among a plurality of parameters used in the artificial neural network;”, Col. 5, Lines 57-64: “an accuracy determination module that determines whether the accuracy of the artificial neural network is greater than or equal to a predetermined target value, wherein when the accuracy of the artificial neural network is greater than or equal to the target value, the accuracy determination module controls the parameter selection module and the bit quantization module to perform bit quantization”) [Examiner’s note: Kim discloses selecting parameters for the quantization process, wherein the parameters here are defined as one or more weight data and activation map data (i.e., the combination of weight and activation function values)] when the specific event occurs, selecting a second combination including a second value for the weight and a second value for the activation function, as the second values, from among the combinations of the values. (Kim, Col. 6, Lines 45-48: “In the present disclosure, "parameter" may mean one or more of an artificial neural network or weight data, feature map data, and activation map data of each layer configuring the artificial neural network.”, Col. 2, Lines 23-41: “According to an embodiment of the present disclosure, a method for quantizing bits of an artificial neural network is provided. The method includes the steps of: (a) selecting at least one parameter among a plurality of parameters used in the artificial neural network; (b) a bit quantizing to reduce the size of data required for an operation on the selected parameter to a unit of bits… if the accuracy of the artificial neural network is less than the target value, the number of bits of the parameter is restored to the number of bits when the accuracy of the artificial neural network is greater than the target value, and then repeating the steps from (a) to (d).”, Col.
5, Lines 57-64: “an accuracy determination module that determines whether the accuracy of the artificial neural network is greater than or equal to a predetermined target value, wherein when the accuracy of the artificial neural network is greater than or equal to the target value, the accuracy determination module controls the parameter selection module and the bit quantization module to perform bit quantization”) [Examiner’s note: Kim discloses selecting parameters for the quantization process, wherein the parameters here are defined as one or more weight data and activation map data (i.e., the combination of weight and activation function values)] Regarding Claim 6, the combination of Kim, Palkar and Shi discloses all the information of claim 5 (as shown in the rejection above). Kim in view of Palkar and Shi further discloses: obtaining the first artificial intelligence model by setting the at least one weight of the artificial intelligence model as at least one first weight based on the first value for the weight and the at least one activation function of the artificial intelligence model as at least one first activation function based on the first value for the activation function; and (Kim, Col. 12, Lines 57-66: “In the bit quantization process of the convolution layer described above, the weight kernel quantization 1028 may be performed using the following equation [equation image]. Where, aj is the weight value to be quantized, for example, the weight of a real number and each weight in the weight kernel, k represents the number of bits to quantize, aq represents the result of aj being quantized by k bits.”, Col. 13, Lines 1-4: “That is, according to the above formula, firstly, aj is multiplied by a predetermined binary number 2^k so that aj is incremented by k bits, hereinafter referred to as "the first value".”, Col.
13, Lines 14-34: “Meanwhile, the feature map or activation map quantization 1030 may be performed by the following equation [equation image]. In the feature map or activation map quantization 1030, the same formula as the weight or weight kernel quantization 1028 method may be used. However, in feature map or activation map quantization, a process of normalizing each element value of the feature map or the activation map 1022 to a value between 0 and 1 can be added by applying clipping before quantization is applied for each element value aj for example, a real number, of the feature map or activation map. Next, the normalized aj is multiplied by a predetermined binary number 2^k so that aj is incremented by k bits, hereinafter referred to as "the first value".”, Col. 27, Lines 51-59: “For example, after performing the first bit quantization, if the accuracy of the artificial neural network is measured to be 94%, then additional bit quantization can be performed. After performing the second bit quantization, if the accuracy of the artificial neural network is measured to be 88%, then the result of the currently executed bit quantization may be ignored and the number of data representation in bits determined by the first bit quantization can be determined as the final bit quantization result.”) obtaining the second artificial intelligence model by setting the at least one weight of the artificial intelligence model as at least one second weight based on the second value for the weight and the at least one activation function of the artificial intelligence model as at least one second activation function based on the second value for the activation function. (Kim, Col. 13, Lines 4-14: “Next, by performing a rounding or truncation operation on the first value, the number after the decimal point of, aj is removed, hereinafter referred to as "second value".
The second value is divided by a binary number of 2^k and the number of bits is reduced again by k bits, so that the element value of the final quantized weight kernel can be calculated. Such weight or weight kernel quantization 1028 is repeatedly executed for all element values of the weight or weight kernel 1014 to generate quantized weight values 1018.”, Col. 13, Lines 34-41: “Next, by performing a rounding or truncation operation on the first value, the number after the decimal point of, aj is removed, hereinafter referred to as "second value". The second value is divided by a binary number of 2^k and the number of bits is reduced again by k bits, so that the element values of the final quantized feature map or activation map 1026 may be calculated.”, Col. 27, Lines 51-59: “For example, after performing the first bit quantization, if the accuracy of the artificial neural network is measured to be 94%, then additional bit quantization can be performed. After performing the second bit quantization, if the accuracy of the artificial neural network is measured to be 88%, then the result of the currently executed bit quantization may be ignored and the number of data representation in bits determined by the first bit quantization can be determined as the final bit quantization result.”) Regarding Claim 8, the combination of Kim, Palkar and Shi discloses all the information of claim 5 (as shown in the rejection above). Kim in view of Palkar and Shi further discloses: wherein calculating the respective costs output from the artificial intelligence model comprises: calculating costs for some of the combinations of the values based on a value for the weight and a value for the activation function included in each of some of the combinations of the values during a specific period based on the occurrence of the specific event, (Palkar, Col.
10, Lines 1-12: “In various embodiments, the framework in accordance with an embodiment of the invention may add statistics calculating nodes (e.g., min, max, variance, histogram, etc.) 122 between the output of one layer (e.g., the activation function layer 114) and the output of a following layer (e.g., the second convolution layer 116). In one example, the statistics operations 122 may comprise computing minimum, maximum, variance, and/or histogram values using one or more feature outputs of the RELU operator 114 for dynamically adjusting the quantization (e.g., data format, weight kernels, etc.) of the outputs of the following convolution layer 116.”) the costs indicating an accuracy of result data obtained when the at least one data is processed based on some of the combinations of the values, and energy consumption obtained when the at least one data is processed based on some of the combinations of the values; and (Palkar, Col. 4, Lines 66-67: “the system 80 generally comprises hardware circuitry that is optimized to provide a high performance image processing and computer vision pipeline in minimal area and with minimal power consumption.”, Col. 7, Lines 13-16: “The hardware engines 92a-92n may be implemented to include dedicated hardware circuits that are optimized for high-performance and low power consumption while performing the specific processing tasks.”) selecting the second combination of the values having a lowest cost among the calculated costs. (Kim, Col. 27, Lines 51-59: “after performing the first bit quantization, if the accuracy of the artificial neural network is measured to be 94%, then additional bit quantization can be performed.
After performing the second bit quantization, if the accuracy of the artificial neural network is measured to be 88%, then the result of the currently executed bit quantization may be ignored and the number of data representation in bits determined by the first bit quantization can be determined as the final bit quantization result.”) Regarding Claim 9, the combination of Kim, Palkar and Shi discloses all the information of claim 8 (as shown in the rejection above). Kim in view of Palkar and Shi further discloses: maintaining the processing of the at least one content using the first artificial intelligence model when the lowest cost among the calculated costs is a threshold or more. (Kim, Col. 27, Lines 43-51: “In addition, the target value used in the bit quantization method described above may be expressed with a minimum accuracy to be maintained after bit quantization of the artificial neural network. For example, assuming that the threshold is 90%, additional bit quantization can be performed if the accuracy of the artificial neural network is 90% or more even after reducing the memory size for storing the parameters of the layer selected by bit quantization to a unit of bits.”) Regarding Claim 12, Kim explicitly discloses: An electronic device, comprising: memory storing instructions; and (Kim, Col. 28, Lines 42-47: “Computer storage medium includes both volatile and nonvolatile, removable and non-removable medium implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.”) at least one processor (Kim, Col. 
28, Lines 20-23: “However, 'elements' are not meant to be limited to software or hardware, and each element may be configured to be in an addressable storage medium or may be configured to play one or more processors.”) select first values from among a plurality of values associated with an computation capability to process the obtained at least one content; (Kim, Col. 2, Lines 23-28: “According to an embodiment of the present disclosure, a method for quantizing bits of an artificial neural network is provided. The method includes the steps of: (a) selecting at least one parameter among a plurality of parameters used in the artificial neural network;”, Col. 6, Lines 60-66: “In some embodiments, the bit quantization method and system of the present disclosure may reduce a size of a parameter used in an artificial neural network operation to a unit of bits. In general, data structures of 32-bit, 16-bit, or 8-bit units (for example, CPU, GPU, memory, cache, buffer, and the like), are used for computation of artificial neural networks.”) [Examiner’s note: “a plurality of values associated with an computation capability” is being interpreted as the unit of bits, i.e., 32-bit, 16-bit or 8-bit units] obtain first result data by processing the at least one content, using a first artificial intelligence model having at least one first parameter, obtained by configuring at least one parameter of an artificial intelligence model stored in the electronic device as the at least one first parameter corresponding to the first values; (Kim, Col. 12, Lines 57-66: “In the bit quantization process of the convolution layer described above, the weight kernel quantization 1028 may be performed using the following equation [equation image].
Where, aj is the weight value to be quantized, for example, the weight of a real number and each weight in the weight kernel, k represents the number of bits to quantize, aq represents the result of aj being quantized by k bits.”, Col. 13, Lines 1-4: “That is, according to the above formula, firstly, aj is multiplied by a predetermined binary number 2^k so that aj is incremented by k bits, hereinafter referred to as "the first value".”) obtain the second result data by processing the at least one content, using a second artificial intelligence model having at least one second parameter, obtained by configuring the at least one parameter of the artificial intelligence model as the at least one second parameter corresponding to the second values. (Kim, Col. 13, Lines 4-14: “Next, by performing a rounding or truncation operation on the first value, the number after the decimal point of, aj is removed, hereinafter referred to as "second value". The second value is divided by a binary number of 2^k and the number of bits is reduced again by k bits, so that the element value of the final quantized weight kernel can be calculated.
Such weight or weight kernel quantization 1028 is repeatedly executed for all element values of the weight or weight kernel 1014 to generate quantized weight values 1018.”) Kim fails to disclose: execute an application and obtaining at least one content based on the executed application; calculating respective costs output from the artificial intelligence model when the values associated with the computation capability is applied to the artificial intelligence model, the costs are calculated as a function of an accuracy of result data and energy consumption of the artificial intelligence model; detecting an occurrence of a specific event relating to at least one of a change in the at least one content, or a change in a state of the electronic device; in response to detecting the occurrence of the specific event, selecting second values from among the plurality of values based on the calculation of the respective costs. However, Palkar explicitly discloses: execute an application and obtaining at least one content based on the executed application; (Palkar, Col. 4, Lines 25-29: “In an example, edge devices may comprise security camera applications. In an example, the security camera applications may include battery-powered cameras 70, doorbell cameras 72, outdoor cameras 74, and indoor cameras 76.”, Col. 4, Lines 37-42: “an edge device utilizing a quantized neural network generated in accordance with an embodiment of the invention may take massive amounts of image data and make on-device inferences to obtain useful information with reduced bandwidth and/or reduced power consumption.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Kim and Palkar. Kim teaches a method and system for bit quantization of an artificial neural network. Palkar teaches dynamic quantization for models run on edge devices.
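For readability, the scale-round-rescale quantization that Kim's quoted passages describe (multiply aj by 2^k, round or truncate, then divide by 2^k, with activation-map values first clipped to [0, 1]) can be sketched as follows. The function name and the choice of rounding over truncation are the editor's illustrative assumptions, not language from Kim:

```python
def quantize_value(a_j, k, clip_01=False):
    """Sketch of the k-bit quantization Kim describes: scale a_j up by
    k bits, remove the fractional part, then scale back down. Per Kim,
    activation-map values are first clipped (normalized) to [0, 1]."""
    if clip_01:
        a_j = min(max(a_j, 0.0), 1.0)   # activation-map path: clip to [0, 1]
    first_value = a_j * (2 ** k)        # a_j shifted up by k bits ("the first value")
    second_value = round(first_value)   # decimal part removed ("second value")
    return second_value / (2 ** k)      # rescaled onto the k-bit grid


# Illustrative: a real-valued weight quantized to k = 4 bits
print(quantize_value(0.387, 4))              # -> 0.375 (i.e., 6/16)
print(quantize_value(1.2, 4, clip_01=True))  # -> 1.0 after clipping
```

The example is only meant to make the "first value"/"second value" terminology in the quoted passages concrete; Kim also contemplates truncation in place of rounding.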
One of ordinary skill in the art would have been motivated to combine Kim and Palkar because MPEP 2143 sets forth the Supreme Court rationales for obviousness including: (D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results; (E) “Obvious to try”: choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success; (F) Known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art. However, Shi explicitly discloses: calculating respective costs output from the artificial intelligence model when the values associated with the computation capability is applied to the artificial intelligence model, the costs are calculated as a function of an accuracy of result data and energy consumption of the artificial intelligence model; (Shi, ¶[0031]: “One method to evaluate the quality of the output is to compute costs. Accuracy cost generator 42 generates an accuracy cost that is a measure of how close to the expected results the current cycle's output is.”, ¶[0032]: “Another cost is the hardware cost of neural network 36. In general, accuracy can be improved by increasing the amount of hardware available in neural network 36, so there is a tradeoff between hardware cost and accuracy cost. Typical hardware cost factors that can be computed by hardware complexity cost generator 44 include a weight decay function to prevent over-fitting when adjusting weight ranges, and a sparsity function to improve the structure and regularity.”, ¶[0063]: “Curves 152, 154 show accuracy vs.
weight cost using bit-depth optimization engine.”, ¶[0066-0067]: “A typical neural network training involves minimizing a cost function J(ϴ): [equation image]”) detecting an occurrence of a specific event relating to at least one of a change in the at least one content, or a change in a state of the electronic device; (Shi, ¶[0048]: “FIG. 6B shows spikes in the cost gradient that occur at steps when the number of binary bits changes. Gradient curve 144 is flat for long ranges of weight values between steps. A spike in gradient curve 144 occurs just to the right of each step in quantized weight cost curve 140. This spike indicates that the slope or gradient of quantized weight cost curve 140 changes dramatically when the number of binary bits drops.”) in response to detecting the occurrence of the specific event, selecting second values from among the plurality of values based on the calculation of the respective costs (Shi, ¶[0049]: “Hardware complexity cost generator 44 can generate the gradient of quantized weight cost curve 140 for each weight, and bit-depth optimization engine 48 can search for weights with high cost gradients and select these for reduction during training optimization. Over many cycles of selection, weights can be reduced and traded off with accuracy until an optimal choice of weight values is obtained for low-bit-depth neural network 40.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Kim and Shi. Kim teaches a method and system for bit quantization of an artificial neural network. Shi teaches evaluating an AI model based on multiple cost metrics.
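For orientation only, the claimed cost-based selection (scoring each candidate weight/activation bit-width combination as a function of accuracy and energy consumption, then selecting the lowest-cost combination when the specific event occurs) might be sketched as below. The linear cost weighting and all numeric values are hypothetical and are not taken from Kim, Palkar, or Shi:

```python
def select_lowest_cost(candidates, accuracy_of, energy_of, energy_weight=0.5):
    """Sketch of cost-driven selection: each candidate's cost combines an
    accuracy term (lower accuracy -> higher cost) and an energy term, and
    the candidate with the lowest combined cost is chosen."""
    def cost(combo):
        return (1.0 - accuracy_of(combo)) + energy_weight * energy_of(combo)
    return min(candidates, key=cost)


# Hypothetical accuracy/energy figures for (weight_bits, activation_bits) pairs
accuracy = {(8, 8): 0.94, (4, 8): 0.92, (4, 4): 0.88}
energy = {(8, 8): 1.00, (4, 8): 0.60, (4, 4): 0.35}
best = select_lowest_cost(accuracy, accuracy.get, energy.get)
print(best)  # -> (4, 4): lowest combined accuracy/energy cost under these figures
```

Setting energy_weight to zero reduces the cost to accuracy alone, mirroring the accuracy/energy tradeoff the claims recite between the two cost components.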
One of ordinary skill in the art would have been motivated to combine Kim and Shi to improve the system by enabling it to more efficiently trade off model accuracy and computation burden, thereby selecting values that better satisfy performance constraints while reducing hardware complexity and energy consumption. Regarding Claim 13, the combination of Kim, Palkar and Shi discloses all the information of claim 12 (as shown in the rejection above). Kim in view of Palkar and Shi further discloses: wherein the at least one content includes a plurality of images, and wherein the specific event includes a change in the content within the plurality of images, or an identification of a movement of the electronic device. (Kim, Col. 9, Lines 25-29: “Here, the CONV 420 may serve as a kind of template for extracting features from high-dimensional input data, for example, images or videos. Specifically, one convolution may be repeatedly applied several times while changing a location for a portion of the input data 410 to extract features for the entire input data 410.”) Regarding Claim 14, the combination of Kim, Palkar and Shi discloses all the information of claim 12 (as shown in the rejection above). Kim in view of Palkar and Shi further discloses: wherein the artificial intelligence model is a model pre-trained to output result data in response to receiving the at least one data obtained based on execution of the application, and (Palkar, Col.
2, Lines 42-51: “The invention concerns a method of generating a quantized neural network comprising (i) receiving a pre-trained neural network model and (ii) modifying the pre-trained neural network model to calculate one or more statistics on an output of one or more layers of the pre-trained neural network model based on a current image and set up an output data format for one or more following layers of the pre-trained neural network model for one or more of the current image and a subsequent image dynamically based on the one or more statistics.”, Col. 4, Lines 25-26: “In an example, edge devices may comprise security camera applications.”) [Examiner’s note: “the at least one data obtained based on execution of the application” is being interpreted as the current image obtained from the security camera applications] wherein the at least one parameter of the artificial intelligence model includes at least one weight and at least one activation function obtained according to the training. (Palkar, Col. 3, Lines 42-48: “In various embodiments, a method provides an alternate way of doing post training quantization. Instead of selecting a data format based on a calibration dataset, an activation datapath may be dynamically adjusted based on statistics of a current data set (e.g., image, etc.) and/or a different quantized weight kernel may be selected dynamically based on the statistics of the current data set (e.g., image, etc.).”) Regarding Claim 15, the combination of Kim, Palkar and Shi discloses all the information of claim 14 (as shown in the rejection above). Kim in view of Palkar and Shi further discloses: wherein the plurality of values associated with the computation capability include combinations of values each including a value for the weight and a value for the activation function, and wherein the value for the weight and the value for the activation function are associated with the computation capability. (Kim, Col.
13, Lines 45-50: “Through the described weight or weight kernel quantization 1028 and the feature map or activation map quantization 1030, the memory size and the amount of computation required for a convolution operation of the convolutional layer 420 of the convolutional neural network can be reduced in a unit of bits.”) [Examiner’s note: “the computation capability” is being interpreted as the memory size] Regarding Claim 16, the combination of Kim, Palkar and Shi discloses all the information of claim 15 (as shown in the rejection above). Kim in view of Palkar and Shi further discloses: wherein the instructions cause the electronic device to: when an event for processing the at least one content obtained based on the executed application occurs, select a first combination including a first value for the weight and a first value for the activation function, as the first values, from among the combinations of the values; and (Kim, Col. 6, Lines 45-48: “In the present disclosure, "parameter" may mean one or more of an artificial neural network or weight data, feature map data, and activation map data of each layer configuring the artificial neural network.”, Col. 2, Lines 23-28: “According to an embodiment of the present disclosure, a method for quantizing bits of an artificial neural network is provided. The method includes the steps of: (a) selecting at least one parameter among a plurality of parameters used in the artificial neural network;”, Col.
5, Lines 57-64: “an accuracy determination module that determines whether the accuracy of the artificial neural network is greater than or equal to a predetermined target value, wherein when the accuracy of the artificial neural network is greater than or equal to the target value, the accuracy determination module controls the parameter selection module and the bit quantization module to perform bit quantization”) [Examiner’s note: Kim discloses selecting parameters for the quantization process, wherein the parameters here are defined as one or more weight data and activation map data (i.e., the combination of weight and activation function values)] when the specific event occurs, select a second combination including a second value for the weight and a second value for the activation function, as the second values, from among the combinations of the values. (Kim, Col. 6, Lines 45-48: “In the present disclosure, "parameter" may mean one or more of an artificial neural network or weight data, feature map data, and activation map data of each layer configuring the artificial neural network.”, Col. 2, Lines 23-41: “According to an embodiment of the present disclosure, a method for quantizing bits of an artificial neural network is provided. The method includes the steps of: (a) selecting at least one parameter among a plurality of parameters used in the artificial neural network; (b) a bit quantizing to reduce the size of data required for an operation on the selected parameter to a unit of bits… if the accuracy of the artificial neural network is less than the target value, the number of bits of the parameter is restored to the number of bits when the accuracy of the artificial neural network is greater than the target value, and then repeating the steps from (a) to (d).”, Col.
5, Lines 57-64: “an accuracy determination module that determines whether the accuracy of the artificial neural network is greater than or equal to a predetermined target value, wherein when the accuracy of the artificial neural network is greater than or equal to the target value, the accuracy determination module controls the parameter selection module and the bit quantization module to perform bit quantization”) [Examiner’s note: Kim discloses selecting parameters for the quantization process, wherein the parameters here are defined as one or more weight data and activation map data (i.e., the combination of weight and activation function values)] Regarding Claim 17, the combination of Kim, Palkar and Shi discloses all the information of claim 16 (as shown in the rejection above). Kim in view of Palkar and Shi further discloses: wherein the instructions cause the electronic device to: obtain the first artificial intelligence model by setting the at least one weight of the artificial intelligence model as at least one first weight based on the first value for the weight and the at least one activation function of the artificial intelligence model as at least one first activation function based on the first value for the activation function; and (Kim, Col. 12, Lines 57-66: “In the bit quantization process of the convolution layer described above, the weight kernel quantization 1028 may be performed using the following equation [equation image]. Where, aj is the weight value to be quantized, for example, the weight of a real number and each weight in the weight kernel, k represents the number of bits to quantize, aq represents the result of aj being quantized by k bits.”, Col. 13, Lines 1-4: “That is, according to the above formula, firstly, aj is multiplied by a predetermined binary number 2^k so that aj is incremented by k bits, hereinafter referred to as "the first value".”, Col.
13, Lines 14-34: “Meanwhile, the feature map or activation map quantization 1030 may be performed by the following equation [equation image]. In the feature map or activation map quantization 1030, the same formula as the weight or weight kernel quantization 1028 method may be used. However, in feature map or activation map quantization, a process of normalizing each element value of the feature map or the activation map 1022 to a value between 0 and 1 can be added by applying clipping before quantization is applied for each element value aj for example, a real number, of the feature map or activation map. Next, the normalized aj is multiplied by a predetermined binary number 2^k so that aj is incremented by k bits, hereinafter referred to as "the first value".”, Col. 27, Lines 51-59: “For example, after performing the first bit quantization, if the accuracy of the artificial neural network is measured to be 94%, then additional bit quantization can be performed. After performing the second bit quantization, if the accuracy of the artificial neural network is measured to be 88%, then the result of the currently executed bit quantization may be ignored and the number of data representation in bits determined by the first bit quantization can be determined as the final bit quantization result.”) obtain the second artificial intelligence model by setting the at least one weight of the artificial intelligence model as at least one second weight based on the second value for the weight and the at least one activation function of the artificial intelligence model as at least one second activation function based on the second value for the activation function. (Kim, Col. 13, Lines 4-14: “Next, by performing a rounding or truncation operation on the first value, the number after the decimal point of, aj is removed, hereinafter referred to as "second value".
The second value is divided by a binary number of 2^k and the number of bits is reduced again by k bits, so that the element value of the final quantized weight kernel can be calculated. Such weight or weight kernel quantization 1028 is repeatedly executed for all element values of the weight or weight kernel 1014 to generate quantized weight values 1018.”, Col. 13, Lines 34-41: “Next, by performing a rounding or truncation operation on the first value, the number after the decimal point of, aj is removed, hereinafter referred to as "second value". The second value is divided by a binary number of 2^k and the number of bits is reduced again by k bits, so that the element values of the final quantized feature map or activation map 1026 may be calculated.”, Col. 27, Lines 51-59: “For example, after performing the first bit quantization, if the accuracy of the artificial neural network is measured to be 94%, then additional bit quantization can be performed. After performing the second bit quantization, if the accuracy of the artificial neural network is measured to be 88%, then the result of the currently executed bit quantization may be ignored and the number of data representation in bits determined by the first bit quantization can be determined as the final bit quantization result.”) Regarding Claim 18, the combination of Kim, Palkar and Shi discloses all the information of claim 17 (as shown in the rejection above). Kim in view of Palkar and Shi further discloses: wherein the instructions cause the electronic device to: identify a first processor corresponding to the first value for the weight and the first value for the activation function among a plurality of processors of the electronic device and control the first processor to process the at least one content using the first artificial intelligence model; and (Kim, Col. 23, Lines 64-67: “At this time, kernel element values (w 1, w 2, ...
, w 9) of the weight kernel cache 2010 and a portion of the first channel of input data (x 1, x 2, ... , x 9) stored in the input activation map weight cache 2020 are input to the first convolution processing unit 2032.”) identify a second processor corresponding to the second value for the weight and the second value for the activation function among the plurality of processors of the electronic device and control the second processor to process the at least one content using the second artificial intelligence model. (Kim, Col. 24, Lines 1-5: “weight kernel element values (w 10, w 11, ... w 18) of the weight kernel cache 2010 and a portion of the second channel of input data (x 10, x 11, ... x 18) stored in the input activation map cache 2020 are input to the second convolution processing unit 2034.”) Regarding Claim 19, the combination of Kim, Palkar and Shi discloses all the information of claim 16 (as shown in the rejection above). Kim in view of Palkar and Shi further discloses: wherein the instructions cause the electronic device to: calculate costs for some of the combinations of the values based on a value for the weight and a value for the activation function corresponding to each of some of the combinations of the values during a designated period based on the occurrence of the designated event, (Palkar, Col. 10, Lines 1-12: “In various embodiments, the framework in accordance with an embodiment of the invention may add statistics calculating nodes (e.g., min, max, variance, histogram, etc.) 122 between the output of one layer (e.g., the activation function layer 114) and the output of a following layer (e.g., the second convolution layer 116). In one example, the statistics operations 122 may comprise computing minimum, maximum, variance, and/or histogram values using one or more feature outputs of the RELU operator 114 for dynamically adjusting the quantization (e.g., data format, weight kernels, etc.)
of the outputs of the following convolution layer 116.”) the costs indicating an accuracy of result data obtained as the at least one data is processed based on some of the combinations of the values and energy consumption obtained as the at least one data is processed based on some of the combinations of the values; and (Palkar, Col. 4, Lines 66-67: “the system 80 generally comprises hardware circuitry that is optimized to provide a high performance image processing and computer vision pipeline in minimal area and with minimal power consumption.”, Col. 7, Lines 13-16: “The hardware engines 92a-92n may be implemented to include dedicated hardware circuits that are optimized for high-performance and low power consumption while performing the specific processing tasks.”) select a second combination of the values having a lowest cost among the calculated costs. (Kim, Col. 27, Lines 51-59: “after performing the first bit quantization, if the accuracy of the artificial neural network is measured to be 94%, then additional bit quantization can be performed. After performing the second bit quantization, if the accuracy of the artificial neural network is measured to be 88%, then the result of the currently executed bit quantization may be ignored and the number of data representation in bits determined by the first bit quantization can be determined as the final bit quantization result.”) Claim(s) 7, 10-11 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 11,263,513 B2) in view of Palkar (US 11,568,251 B1), Liu et al. (US 11,675,676 B2), and further in view of Shi et al. (US 2017/0270408 A1). Regarding claim 7, the combination of Kim, Palkar and Shi discloses all the information of Claim 6 (as shown in the rejection above).
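The quantization arithmetic Kim describes in the passages quoted above (shift a value up by k bits via multiplication by 2^k, drop the fraction by rounding or truncation, then shift back down by dividing by 2^k) and the accuracy-gated bit reduction (continue while accuracy holds, revert when it falls, as in the 94% to 88% example) can be sketched roughly as follows. This is only an illustrative reading of the quoted text, not Kim's patented method; the function names and the 90% target are hypothetical.

```python
def quantize(value: float, k: int) -> float:
    """Sketch of the quoted bit quantization: multiply by 2^k (shift up by
    k bits), truncate to drop the fraction, then divide by 2^k to restore
    the original scale while keeping k fractional bits."""
    first = value * (2 ** k)      # "first value": a_j incremented by k bits
    second = float(int(first))    # "second value": digits after the decimal point removed
    return second / (2 ** k)      # bits reduced again by k bits

def iterative_bit_reduction(evaluate_accuracy, target=0.90, start_bits=16):
    """Keep reducing the bit width while accuracy stays at or above the
    target; once it drops below, ignore that result and keep the previous
    width (cf. the 94% -> 88% example quoted from Kim, Col. 27)."""
    bits = start_bits
    while bits > 1:
        candidate = bits - 1
        if evaluate_accuracy(candidate) >= target:
            bits = candidate      # accuracy still acceptable: quantize further
        else:
            break                 # revert to the last acceptable bit width
    return bits
```

For instance, with a toy accuracy model that holds 94% down to 8 bits and drops to 88% below that, the loop settles on 8 bits, mirroring the quoted behavior.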
Kim in view of Palkar and Shi further discloses: identifying a first processor corresponding to the first value for the weight and the first value for the activation function among a plurality of processors of the electronic device and (Kim, Col. 23, Lines 64-67: “At this time, kernel element values (w1, w2, ..., w9) of the weight kernel cache 2010 and a portion of the first channel of input data (x1, x2, ..., x9) stored in the input activation map weight cache 2020 are input to the first convolution processing unit 2032.”) identifying a second processor corresponding to the second value for the weight and the second value for the activation function among the plurality of processors of the electronic device and (Kim, Col. 24, Lines 1-5: “weight kernel element values (w10, w11, ... w18) of the weight kernel cache 2010 and a portion of the second channel of input data (x10, x11, ... x18) stored in the input activation map cache 2020 are input to the second convolution processing unit 2034.”) Kim in view of Palkar and Shi fails to disclose: controlling the first processor to process the at least one content using the first artificial intelligence model; and controlling the second processor to process the at least one content using the second artificial intelligence model. However, Liu explicitly discloses: controlling the first processor to process the at least one content using the first artificial intelligence model; and (Liu, Col. 38, Lines 37-38: “the control device is configured to monitor a state of the artificial intelligence chip.”, Col. 39, Lines 22-24: “The control device is electrically connected with the artificial intelligence chip. The control device is configured to monitor the state of the artificial intelligence chip.”, Col.
39, Lines 33-36: “The control device may be capable of regulating the working states of the multiple processing chips, the multiple processing chips, and the multiple processing circuits in the artificial intelligence chip.”, Col. 32, Lines 9-16: “In the technical scheme, the operating system of a general purpose processor (such as CPU) generates an instruction based on the present technical scheme, and then sends the generated instruction to an artificial intelligence processor chip (such as GPU). The artificial intelligence processor chip performs an instruction operation to determine a neural network quantization parameter and perform quantization.”) [Examiner’s note: “the first processor” is being interpreted as the first processing chip among the multiple processing chips] controlling the second processor to process the at least one content using the second artificial intelligence model. (Liu, Col. 38, Lines 37-38: “the control device is configured to monitor a state of the artificial intelligence chip.”, Col. 39, Lines 22-24: “The control device is electrically connected with the artificial intelligence chip. The control device is configured to monitor the state of the artificial intelligence chip.”, Col. 39, Lines 33-36: “The control device may be capable of regulating the working states of the multiple processing chips, the multiple processing chips, and the multiple processing circuits in the artificial intelligence chip.”, Col. 32, Lines 9-16: “In the technical scheme, the operating system of a general purpose processor (such as CPU) generates an instruction based on the present technical scheme, and then sends the generated instruction to an artificial intelligence processor chip (such as GPU). 
The artificial intelligence processor chip performs an instruction operation to determine a neural network quantization parameter and perform quantization.”) [Examiner’s note: “the second processor” is being interpreted as the second processing chip among the multiple processing chips] It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kim, Palkar, Shi and Liu. Kim teaches a method and system for bit quantization of an artificial neural network. Palkar teaches dynamic quantization for models run on edge devices. Shi teaches evaluating an AI model based on multiple cost metrics. Liu teaches a neural network quantization parameter determination method. One of ordinary skill would have been motivated to combine Kim, Palkar, Shi and Liu to quantify the trade-off between accuracy and power consumption, enabling smarter, context-aware decisions for optimal processing in performance- and power-sensitive systems. Regarding claim 10, the combination of Kim, Palkar and Shi discloses all the information of Claim 8 (as shown in the rejection above). Kim in view of Palkar and Shi further discloses: identifying a third combination including a highest value for the weight and a highest value for the activation function among the combinations of the values; (Kim, Col. 17, Lines 26-29: “As illustrated, the bit quantization method 1200 of an artificial neural network may be started with selecting a layer with the highest computational amount among a plurality of layers included in the artificial neural network”, Col. 17, Lines 48-51: “That is, by proceeding to step S1210, the computational amount is calculated again for all layers in the artificial neural network, and based on this, the layer with the highest computational amount is selected again.”, Col.
12, Lines 57-64: “In the bit quantization process of the convolution layer described above, the weight kernel quantization 1028 may be performed using the following equation [equation image not reproduced]”, Col. 13, Lines 1-4: “That is, according to the above formula, firstly, a_j is multiplied by a predetermined binary number 2^k so that a_j is incremented by k bits, hereinafter referred to as "the first value".”, Col. 13, Lines 14-34: “Meanwhile, the feature map or activation map quantization 1030 may be performed by the following equation [equation image not reproduced].”) [Examiner’s note: Kim discloses how to determine the highest computational amount among multiple computational amounts; “the highest computational amount” here is being interpreted as the “third combination value” because it describes the combination of the highest weight value and the highest activation value in the quantization process] obtaining a third artificial intelligence model having at least one third parameter configured based on the highest value for the weight and the highest value for the activation function and (Kim, Col. 2, Lines 23-35: “According to an embodiment of the present disclosure, a method for quantizing bits of an artificial neural network is provided. The method includes the steps of: (a) selecting at least one parameter among a plurality of parameters used in the artificial neural network; (b) a bit quantizing to reduce the size of data required for an operation on the selected parameter to a unit of bits; (c) determining whether the accuracy of the artificial neural network is greater than or equal to a predetermined target value; (d) if the accuracy of the artificial neural network is greater than or equal to the target value, steps (b) to (c) are repeatedly executed for the parameter to further reduce the number of bits in the data representation of the parameter.”, Col.
17, Lines 26-38: “As illustrated, the bit quantization method 1200 of an artificial neural network may be started with selecting a layer with the highest computational amount among a plurality of layers included in the artificial neural network. When the layer selection of the artificial neural network is completed in step S1210, the operation may proceed to step of reducing the size of the data representation for the parameter of the selected layer to a unit of bits S1220. In an embodiment, when the size of the data of the selected layer is reduced to a unit of bits, the weight kernel quantization 1028 and the activation map quantization 1024 described with reference to FIGS. 4 to 10 may be performed.”) [Examiner’s note: “the third artificial intelligence model” is being interpreted as the artificial neural network in the third iteration] obtaining artificial intelligence models having at least one fourth parameter configured based on values for the weight and values for the activation function included in the some of the combinations of the values; (Kim, Col. 2, Lines 23-35: “According to an embodiment of the present disclosure, a method for quantizing bits of an artificial neural network is provided.
The method includes the steps of: (a) selecting at least one parameter among a plurality of parameters used in the artificial neural network; (b) a bit quantizing to reduce the size of data required for an operation on the selected parameter to a unit of bits; (c) determining whether the accuracy of the artificial neural network is greater than or equal to a predetermined target value; (d) if the accuracy of the artificial neural network is greater than or equal to the target value, steps (b) to (c) are repeatedly executed for the parameter to further reduce the number of bits in the data representation of the parameter.”) [Examiner’s note: “one fourth parameter” is being interpreted as the selected parameter in the fourth iteration] Kim in view of Palkar and Shi fails to disclose: obtaining third result data by processing the at least one data based on the third artificial intelligence model during the designated period, and obtaining a plurality of result data by processing the at least one data based on the artificial intelligence models; and calculating a difference between at least respective parts of the plurality of result data and at least part of the third result data. However, Liu explicitly discloses: obtaining third result data by processing the at least one data based on the third artificial intelligence model during the designated period, and (Liu, Col. 7, Lines 39-55: “In order to obtain a neural network with expected precision, a large sample data set is needed in the training process, but it is impossible to input the entire sample data set into a computer at once. Therefore, in order to solve the problem, the sample data set needs to be divided into multiple blocks and then each block of the sample data set is passed to the computer. After the forward processing is performed on each block of the sample data set, the weights in the neural network are correspondingly updated once.
When the neural network performs a forward processing on a complete sample data set and returns a weight update correspondingly, the process is called an epoch. In practice, it is not enough to perform forward processing on a complete data set in the neural network only once. It is necessary to transmit the complete data set in the same neural network multiple times, which means that multiple epochs are needed to obtain a neural network with expected precision.”, Col. 24, Lines 17-21: “Taking weight as an example, it can be seen from the curve of data variation range shown in FIG. 5a that during the iteration interval period from the beginning of training to the T-th iteration, the data variation range is large in each weight update.”) [Examiner’s note: “the third result data” is being interpreted as the “expected precision data”, “the third artificial intelligence model” is being interpreted as the “neural network” in the third epoch, “the designated period” is being interpreted as the “iteration interval period from the beginning of training to the T-th iteration”] obtaining a plurality of result data by processing the at least one data based on the artificial intelligence models; and (Liu, Col. 6, Lines 39: “The purpose of transmitting each sample image to the neural network is to obtain a recognition result through the neural network. In order to calculate the loss function, each sample image in the sample data set must be traversed to obtain the actual result y corresponding to each sample image, and then calculate the loss function according to the above definition.”) [Examiner’s note: “a plurality of result data” is being interpreted as “the actual result y” of each image sample] calculating a difference between at least respective parts of the plurality of result data and at least part of the third result data. (Liu, Col.
6, Lines 11-18: “the loss function may be obtained as follows: transmitting each sample data along the neural network in the process of training a certain neural network to obtain an output value, performing subtraction on the output value and an expected value to obtain a difference, and then squaring the difference. The loss function obtained in the manner is the difference between the expected value and the true value.”, Col. 6, Lines 20-34: “In some examples, the loss function can be represented as: [equation image not reproduced]. In the formula, y represents an expected value, ŷ represents an actual result obtained by each sample data in a sample data set transmitting through the neural network, i represents an index of each sample data in the sample data set, L(y, ŷ) represents the difference between the expected value y and the actual result ŷ, and m represents the number of sample data in the sample data set.”) [Examiner’s note: “the third result data” is being interpreted as the “expected result value y”] It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kim, Palkar, Shi and Liu. Kim teaches a method and system for bit quantization of an artificial neural network. Palkar teaches dynamic quantization for models run on edge devices. Shi teaches evaluating an AI model based on multiple cost metrics. Liu teaches a neural network quantization parameter determination method. One of ordinary skill would have been motivated to combine Kim, Palkar, Shi and Liu to quantify the trade-off between accuracy and power consumption, enabling smarter, context-aware decisions for optimal processing in performance- and power-sensitive systems. Regarding claim 11, the combination of Kim, Liu, Shi and Palkar discloses all the information of Claim 10 (as shown in the rejection above).
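The loss function quoted from Liu survives here only as descriptive text, since its equation image did not reproduce. Read literally (subtract each actual result ŷ from its expected value y, square the difference, over the m samples in the set), it describes a mean-squared-error style loss. The sketch below assumes that plain-average form; the averaging convention is an assumption, not Liu's exact formula.

```python
def loss(expected, actual):
    """Squared-difference loss per the quoted Liu description: for each of
    the m samples, subtract the actual result y-hat from the expected value
    y and square the difference. Averaging over m is an assumed convention."""
    m = len(expected)
    return sum((y - y_hat) ** 2 for y, y_hat in zip(expected, actual)) / m
```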
Kim in view of Palkar, Shi and Liu further discloses: obtaining information associated with an amount of energy consumed when the at least one data is processed based on each of the plurality of artificial intelligence models; and (Liu, Col. 8, Lines 22-29: “Moreover, the floating-point computation unit needs to consume more resources to process, so that a gap of power consumption between the fixed-point computation unit and the floating-point computation unit is usually an order of magnitude. The floating point computation unit occupies many times more chip area and consumes many times more power than the fixed-point computation unit”) [Examiner’s note: The gap of power consumption represents information about energy consumption during data processing] calculating the costs based on the calculated difference and the consumed amount of energy. (Liu, Col. 6, Lines 11-18: “the loss function may be obtained as follows: transmitting each sample data along the neural network in the process of training a certain neural network to obtain an output value, performing subtraction on the output value and an expected value to obtain a difference, and then squaring the difference. The loss function obtained in the manner is the difference between the expected value and the true value.”) Regarding claim 20, Kim explicitly discloses: selecting a first processor to process the obtained at least one content using an artificial intelligence model stored in the electronic device, the first processor configured to correspond to first values among a plurality of values associated with a computation capability; (Kim, Col. 23, Lines 64-67: “At this time, kernel element values (w1, w2, ..., w9) of the weight kernel cache 2010 and a portion of the first channel of input data (x1, x2, ...
, x9) stored in the input activation map weight cache 2020 are input to the first convolution processing unit 2032.”) the second processor configured to correspond to second values among the plurality of values associated with the computation capability; and (Kim, Col. 24, Lines 1-5: “weight kernel element values (w10, w11, ... w18) of the weight kernel cache 2010 and a portion of the second channel of input data (x10, x11, ... x18) stored in the input activation map cache 2020 are input to the second convolution processing unit 2034.”) Kim fails to disclose: executing, by at least one processor of the electronic device, an application and obtaining at least one content based on the executed application; controlling the first processor to process the at least one content, using a first artificial intelligence model having at least one first parameter, obtained by configuring at least one parameter of an artificial intelligence model stored in the electronic device as the at least one first parameter corresponding to the first values; calculating respective costs output from the artificial intelligence model when the values associated with the computation capability are applied to the artificial intelligence model, the costs are calculated as a function of an accuracy of result data and energy consumption of the artificial intelligence model; selecting a second processor based on an occurrence of a specific event relating to at least one of a change in the at least one content, or a change in a state of the electronic device, wherein selecting the second processor further comprises selecting second values from among the plurality of values based on the calculation of the respective costs; controlling the second processor to process the at least one content, using a second artificial intelligence model having at least one second parameter, obtained by configuring the at least one parameter of the artificial intelligence model as the at least one second parameter
corresponding to the second values. However, Palkar explicitly discloses: executing, by at least one processor of the electronic device, an application and obtaining at least one content based on the executed application; (Palkar, Col. 4, Lines 25-29: “In an example, edge devices may comprise security camera applications. In an example, the security camera applications may include battery-powered cameras 70, doorbell cameras 72, outdoor cameras 74, and indoor cameras 76.”, Col. 4, Lines 37-42: “an edge device utilizing a quantized neural network generated in accordance with an embodiment of the invention may take massive amounts of image data and make on-device inferences to obtain useful information with reduced bandwidth and/or reduced power consumption.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kim and Palkar. Kim teaches a method and system for bit quantization of an artificial neural network. Palkar teaches dynamic quantization for models run on edge devices. One of ordinary skill would have been motivated to combine Kim and Palkar because MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (D) applying a known technique to a known device (method, or product) ready for improvement to yield predictable results; (E) “obvious to try”: choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success; (F) known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art.
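The claim 20 limitations restated above reduce, in outline, to computing a cost for each candidate pair of weight/activation values as a function of accuracy and energy consumption and, when a triggering event occurs, selecting the lowest-cost pair. A minimal sketch under that reading follows; the linear alpha/beta weighting and all names are hypothetical illustrations, not taken from the claims or the cited references.

```python
def select_combination(combinations, accuracy_of, energy_of, alpha=1.0, beta=1.0):
    """Sketch of cost-based selection: for each candidate (weight value,
    activation value) pair, combine an accuracy penalty and an energy term
    into a single cost and return the lowest-cost pair. The linear
    alpha/beta weighting is an assumption for illustration only."""
    def cost(combo):
        # Lower accuracy raises the cost; higher energy use raises the cost.
        return alpha * (1.0 - accuracy_of(combo)) + beta * energy_of(combo)
    return min(combinations, key=cost)
```

In use, a detected event (a content change, a battery-state change) would re-run this selection over the candidate combinations with freshly measured accuracy and energy figures.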
However, Shi explicitly discloses: calculating respective costs output from the artificial intelligence model when the values associated with the computation capability are applied to the artificial intelligence model, the costs are calculated as a function of an accuracy of result data and energy consumption of the artificial intelligence model; (Shi, ¶[0031]: “One method to evaluate the quality of the output is to compute costs. Accuracy cost generator 42 generates an accuracy cost that is a measure of how close to the expected results the current cycle's output is.”, ¶[0032]: “Another cost is the hardware cost of neural network 36. In general, accuracy can be improved by increasing the amount of hardware available in neural network 36, so there is a tradeoff between hardware cost and accuracy cost. Typical hardware cost factors that can be computed by hardware complexity cost generator 44 include a weight decay function to prevent over-fitting when adjusting weight ranges, and a sparsity function to improve the structure and regularity.”, ¶[0063]: “Curves 152, 154 show accuracy vs. weight cost using bit-depth optimization engine.”, ¶[0066]-[0067]: “A typical neural network training involves minimizing a cost function J(Θ): [equation image not reproduced]”) detecting an occurrence of a specific event relating to at least one of a change in the at least one content, or a change in a state of the electronic device; (Shi, ¶[0048]: “FIG. 6B shows spikes in the cost gradient that occur at steps when the number of binary bits changes. Gradient curve 144 is flat for long ranges of weight values between steps. A spike in gradient curve 144 occurs just to the right of each step in quantized weight cost curve 140.
This spike indicates that the slope or gradient of quantized weight cost curve 140 changes dramatically when the number of binary bits drops.”) wherein selecting the second processor further comprises selecting second values from among the plurality of values based on the calculation of the respective costs (Shi, ¶[0049]: “Hardware complexity cost generator 44 can generate the gradient of quantized weight cost curve 140 for each weight, and bit-depth optimization engine 48 can search for weights with high cost gradients and select these for reduction during training optimization. Over many cycles of selection, weights can be reduced and traded off with accuracy until an optimal choice of weight values is obtained for low-bit-depth neural network 40.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kim and Shi. Kim teaches a method and system for bit quantization of an artificial neural network. Shi teaches evaluating an AI model based on multiple cost metrics. One of ordinary skill would have been motivated to combine Kim and Shi to improve the system by enabling it to more efficiently trade off model accuracy and computation burden, thereby selecting values that better satisfy performance constraints while reducing hardware complexity and energy consumption. However, Liu explicitly discloses: controlling the first processor to process the at least one content, using a first artificial intelligence model having at least one first parameter, obtained by configuring at least one parameter of an artificial intelligence model stored in the electronic device as the at least one first parameter corresponding to the first values; (Liu, Col. 38, Lines 37-38: “the control device is configured to monitor a state of the artificial intelligence chip.”, Col. 39, Lines 22-24: “The control device is electrically connected with the artificial intelligence chip.
The control device is configured to monitor the state of the artificial intelligence chip.”, Col. 39, Lines 33-36: “The control device may be capable of regulating the working states of the multiple processing chips, the multiple processing chips, and the multiple processing circuits in the artificial intelligence chip.”, Col. 32, Lines 9-16: “In the technical scheme, the operating system of a general purpose processor (such as CPU) generates an instruction based on the present technical scheme, and then sends the generated instruction to an artificial intelligence processor chip (such as GPU). The artificial intelligence processor chip performs an instruction operation to determine a neural network quantization parameter and perform quantization.”) [Examiner’s note: “the first processor” is being interpreted as the first processing chip among the multiple processing chips] controlling the second processor to process the at least one content, using a second artificial intelligence model having at least one second parameter, obtained by configuring the at least one parameter of the artificial intelligence model as the at least one second parameter corresponding to the second values. (Liu, Col. 38, Lines 37-38: “the control device is configured to monitor a state of the artificial intelligence chip.”, Col. 39, Lines 22-24: “The control device is electrically connected with the artificial intelligence chip. The control device is configured to monitor the state of the artificial intelligence chip.”, Col. 39, Lines 33-36: “The control device may be capable of regulating the working states of the multiple processing chips, the multiple processing chips, and the multiple processing circuits in the artificial intelligence chip.”, Col. 
32, Lines 9-16: “In the technical scheme, the operating system of a general purpose processor (such as CPU) generates an instruction based on the present technical scheme, and then sends the generated instruction to an artificial intelligence processor chip (such as GPU). The artificial intelligence processor chip performs an instruction operation to determine a neural network quantization parameter and perform quantization.”) [Examiner’s note: “the second processor” is being interpreted as the second processing chip among the multiple processing chips] It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kim and Liu. Kim teaches a method and system for bit quantization of an artificial neural network. Liu teaches a neural network quantization parameter determination method. One of ordinary skill would have been motivated to combine Kim and Liu to quantify the trade-off between accuracy and power consumption, enabling smarter, context-aware decisions for optimal processing in performance- and power-sensitive systems. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMY TRAN, whose telephone number is (571) 270-0693. The examiner can normally be reached Monday - Friday, 7:30 am - 5:00 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /AMY TRAN/Examiner, Art Unit 2126 /DAVID YI/Supervisory Patent Examiner, Art Unit 2126

Prosecution Timeline

Mar 30, 2022: Application Filed
May 16, 2025: Non-Final Rejection — §101, §103
Aug 14, 2025: Examiner Interview Summary
Aug 14, 2025: Applicant Interview (Telephonic)
Aug 19, 2025: Response Filed
Dec 05, 2025: Final Rejection — §101, §103
Feb 09, 2026: Request for Continued Examination
Feb 22, 2026: Response after Non-Final Action
Apr 03, 2026: Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602582: DYNAMIC DISTRIBUTED TRAINING OF MACHINE LEARNING MODELS
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12468932: IDENTIFYING RELATED MESSAGES IN A NATURAL LANGUAGE INTERACTION
Granted Nov 11, 2025 (2y 5m to grant)
Patent 12462185: SCENE GRAMMAR BASED REINFORCEMENT LEARNING IN AGENT TRAINING
Granted Nov 04, 2025 (2y 5m to grant)
Patent 12423589: TRAINING DECISION TREE-BASED PREDICTIVE MODELS
Granted Sep 23, 2025 (2y 5m to grant)
Patent 12288074: GENERATING AND PROVIDING PROPOSED DIGITAL ACTIONS IN HIGH-DIMENSIONAL ACTION SPACES USING REINFORCEMENT LEARNING MODELS
Granted Apr 29, 2025 (2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 36%
With Interview: 84% (+47.9%)
Median Time to Grant: 5y 2m
PTA Risk: High
Based on 28 resolved cases by this examiner. Grant probability derived from career allow rate.
