DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Amendment
Applicant submitted amendments on 1/30/2026. The Examiner acknowledges the amendment and has reviewed the claims accordingly.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The IDS dated 9/5/2025, which was previously considered, remains of record in the application file.
The IDS dated 1/15/2026 has been considered and placed in the application file.
Overview
Claims 1-14 are pending in this application and have been considered below.
Claims 1-14 are rejected.
Applicant Arguments
In regards to Argument 1, Applicant states that amended independent claim 12 no longer recites “a quantization unit”; therefore, none of the claim elements invokes §112(f) (See Remarks, page 6 under “Response to Rejections under 35 U.S.C. §112”).
In regards to Argument 2, Applicant states that amended independent claims 1 and 12 clarify that "the external environment is an environment that is external to the deep learning neural network model," as well as that "the change in the external environment occurs when the deep learning neural network performing an inference in an environment different from an environment corresponding to a calibration data used in an initial quantization process." Therefore, Applicant asserts, the claims are definite and particularly point out and distinctly claim the subject matter that the inventor regards as the invention. (See Remarks, page 7 under “Response to Rejections under 35 U.S.C. §112”).
In regards to Argument 3, Applicant states independent claims have been amended to include features not taught by the cited prior art. Specifically, Liu merely discloses determining a variation trend value of a point position parameter corresponding to the data to be quantized in the weight iteration process, i.e., in the training process (See Remarks, page 8 top half).
In regards to Argument 4, Applicant states Wan uses a “random variable” rather than activation maps, therefore fails to teach or suggest amended independent claim 1 (See Remarks, page 8 to 9).
In regards to Argument 5, Applicant states Desappan merely mentions feature maps without linking to external environment changes, therefore fails to teach or suggest amended independent claim 1 (See Remarks, page 9).
Examiner’s Response
In response to Argument 1, with respect to Claim 12, the Examiner has fully considered the Argument and has found it persuasive. However, in light of the amendments to Claim 12, the new ground under which the claim is interpreted under 35 U.S.C. §112(f) is detailed below.
In response to Argument 2, with respect to Claims 1-14, the Examiner has fully considered the Argument and has found it persuasive. However, in light of the amendments to Claim 12, the new ground under which the claim is rejected under 35 U.S.C. §112(b) is detailed below.
In response to Argument 3, the Examiner respectfully disagrees. Applicant argues that Liu is limited to the training process because it determines a variation trend value of a point position parameter during the weight iteration process.
Under the broadest reasonable interpretation of the amended claim 1, the claimed “detecting a feature change of input data caused by a change in an external environment … from input image data of a quantized-deep learning neural network model … performing an inference in an environment different from an environment corresponding to a calibration data” is not limited to training and expressly encompasses runtime inference-time distribution shifts.
The rejection relies on the combination of Liu, Choi, and Desappan. While Liu in ¶124-130 and Figs. 6-8 describes variation trend detection in the context of iterative weight updates, Liu itself expressly states that the resulting quantized parameters are used for “training, fine-tuning, or inference” (¶50, 80, 103). Desappan supplies the missing inference-time teaching through runtime monitoring of activation/feature map ranges on varying deployment inputs (i.e., inputs from a different environment than the initial calibration data) to detect range expansion and trigger recalibration (¶4-6, 30-35, 48-55).
In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
The Examiner interprets the prior art to teach the amended claim.
In response to Argument 4, the Examiner respectfully disagrees. Applicant argues that Wan uses “a random variable” (global slow-down factor) rather than activation maps and therefore fails to teach the amended detection step of independent claim 1.
Wan is not relied upon to teach the activation-map detection limitation. That limitation is taught by Desappan in the primary combination (see response to Argument 5). Wan is applied only to the time-based and position-based environmental changes of dependent claims 2 and 3, and the periodic detection of claim 4. Wan’s Kalman-filter-based estimation of environmental volatility during runtime inference (Abstract, 353-354) directly supports those dependent limitations.
The Examiner interprets the prior art to teach the amended claim.
In response to Argument 5, the Examiner respectfully disagrees. Applicant argues that Desappan merely mentions feature maps without linking them to external environment changes and therefore fails to teach the amended detection step of independent claim 1.
Under BRI, the claimed “change in the external environment” is defined in the amended claim as occurring “when the deep learning neural network performing an inference in an environment different from an environment corresponding to a calibration data used in an initial quantization process.” This is precisely the deployment distribution-shift problem Desappan solves.
Desappan in ¶4-6, 27-35, 40-55 teaches initial quantization based on calibration/training statistics, runtime monitoring of per-layer/per-channel activation/feature map min/max ranges during inference on varying input images, and dynamic update of quantization scales when those ranges expand to prevent overflow/saturation. Different input images encountered in deployment reflect changes in the external environment (lighting, scene content, time of day, position, etc.) relative to the original calibration dataset. Desappan explicitly recognizes that static calibration is insufficient for real-world inference environments and uses activation map statistics to detect and respond to those shifts.
The Examiner interprets the prior art to teach the amended claim.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.
Such claim limitation is:
“the input feature change detector is further configured to detect the feature change comprises detecting whether…” in claim 12.
Because this claim limitation is being interpreted under 35 U.S.C. 112(f), it is interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this limitation interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitation to avoid it being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation recites sufficient structure to perform the claimed function so as to avoid it being interpreted under 35 U.S.C. 112(f).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claim 12 is rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
Claim 12 recites “and wherein the input feature change detector is further configured to detect the feature change comprises detecting whether…”. The limitation mixes apparatus language (“configured to detect”) with method language (“comprises detecting”), and the resulting phrase is grammatically incomplete; it is therefore unclear what the “input feature change detector” is required to do.
Claims 13-14 depend either directly or indirectly from Claim 12; therefore, they are also rejected.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 5-8, and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (US 20210286688 A1, hereafter referred to as Liu) in view of Choi et al. (US 20210064954 A1, hereafter referred to as Choi), further in view of Desappan et al. (US 20190012559 A1, hereafter referred to as Desappan).
Claim 1
Regarding Claim 1, Liu teaches A quantization method comprising:
detecting a feature change of input data caused by a change in an external environment (Liu in ¶124-151 and Fig. 6 discloses detecting variations in data during training iterations.),
performing quantization calibration for the deep learning neural network model to determine a new quantization parameter corresponding to the feature change of input data caused by the change in the external environment (Liu in ¶124-151 and Fig. 6-8 discloses parameters are determined/adjusted based on analyzing results or variation trends to minimize error.); and
updating at least one of the plurality of preset quantization parameters based on the new quantization parameter (Liu in ¶103 and Fig. 6-8 discloses preset bit widths or intervals are updated based on new parameters or errors over iterations.).
Liu does not explicitly teach all of detecting a feature change of input data caused by a change in an external environment, from input image data of a deep learning neural network model quantized based on a plurality of preset quantization parameters, wherein the external environment is an environment that is external to the deep learning neural network model, wherein the change in the external environment occurs when the deep learning neural network performing an inference in an environment different from an environment corresponding to a calibration data used in an initial quantization process, and wherein detecting the feature change comprises detecting whether a change in the input data occurred using any one of a plurality of activation maps of the deep learning neural network model whose activation value changes according to the change in the external environment.
However, Choi teaches from input image data of a deep learning neural network model quantized based on a plurality of preset quantization parameters (Choi in Abstract, ¶27-30 and Fig. 1-2 discloses CNN processes input image data in a quantized model using multiple preset precision options assigned per-layer).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Liu by processing image data using a quantized deep learning model with a plurality of preset parameters, as taught by Choi, since both references are analogous art in the field of neural network-based image processing; thus, one of ordinary skill in the art would have been motivated to combine the references because combining Liu’s adaptive quantization calibration, responsive to detected variations in input data during training, with Choi’s layer-specific preset precision options for quantized image processing yields the predictable result of enabling robust quantization of image data under varying conditions.
Liu in view of Choi does not explicitly teach all of detecting a feature change of input data caused by a change in an external environment, from input image data of a deep learning neural network model quantized based on a plurality of preset quantization parameters, wherein the external environment is an environment that is external to the deep learning neural network model, wherein the change in the external environment occurs when the deep learning neural network performing an inference in an environment different from an environment corresponding to a calibration data used in an initial quantization process, and wherein detecting the feature change comprises detecting whether a change in the input data occurred using any one of a plurality of activation maps of the deep learning neural network model whose activation value changes according to the change in the external environment.
However, Desappan teaches detecting a feature change of input data caused by a change in an external environment (Desappan in FIG.2, ¶4-6, 30-32 discloses runtime detection of input feature/range changes caused by real-world inference inputs that differ from training/calibration data),
from input image data of a deep learning neural network model quantized based on a plurality of preset quantization parameters (Desappan in FIG. 7B-D, ¶23-28, 40 discloses input image data of a CNN model that was initially quantized using preset parameters (min/max ranges) determined from training/calibration data),
wherein the external environment is an environment that is external to the deep learning neural network model (Desappan in FIG. 2, ¶4-6 discloses real-world inference environment that is external to the model and differs from the training/calibration environment),
wherein the change in the external environment occurs when the deep learning neural network performing an inference in an environment different from an environment corresponding to a calibration data used in an initial quantization process (Desappan in ¶4-6, 30-32, 45 discloses change occurs precisely during inference when new input images produce ranges not covered by initial training statistics), and
wherein detecting the feature change comprises detecting whether a change in the input data occurred using any one of a plurality of activation maps of the deep learning neural network model whose activation value changes according to the change in the external environment (The Examiner acknowledges the use of “any one of” in the claim language. Desappan in ¶27, 32-37, 40-48, 55 discloses detection uses min/max ranges statistics on per-layer/per-channel feature maps (activation outputs of convolution layers). The values change with varying input images during inference).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Liu in view of Choi by incorporating the runtime inference-time feature map range monitoring and dynamic re-quantization technique taught by Desappan, since both references are analogous art in the field of quantized deep neural network inference optimization for computer vision tasks; thus, one of ordinary skill in the art would have been motivated to combine the references because combining Liu in view of Choi’s variation trend-based quantization parameter determination with Desappan’s runtime activation/feature map monitoring and predicted min/max-based dynamic calibration yields the predictable result of providing more accurate and robust quantization when input data distributions shift due to real-world environmental changes.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claim 5
Regarding Claim 5, Liu in view of Choi, further in view of Desappan teaches The quantization method of claim 1, wherein
the deep learning neural network model is a deep learning neural network model comprising at least one convolution layer (Choi in Fig. 1 discloses a DNN with convolution layers).
Claim 6
Regarding Claim 6, Liu in view of Choi, further in view of Desappan teaches The quantization method of claim 5, wherein detecting the feature change of input data caused by the change in the external environment, from input image data of the quantized deep learning neural network model based on the plurality of preset quantization parameters comprises detecting the feature change of input data caused by the change in the external environment based on the any one activation map of any one of the at least one convolution layer (Desappan in Abstract and ¶4-6, 25-27 discloses quantized DNNs processing image inputs with plural preset parameters; activation map-based range detection in conv layers).
Claim 7
Regarding Claim 7, Liu in view of Choi, further in view of Desappan teaches The quantization method of claim 6, wherein
detecting the feature change of input data caused by the change in the external environment based on the any one activation map of any one of the at least one convolution layer comprises:
changing a bias value of any one convolution layer (Liu in ¶47, 56 discloses bias addition in convolution computations); and
detecting the feature change of input data caused by the change in the external environment based on overflow occurring in the any one activation map due to the changed bias value (Liu in ¶42, 63-66, and 88 discloses that overflow in activations is detected and prevented after bias addition/change; Desappan in ¶4-6, 32 discloses overflow/saturation risk in activation maps; bias perturbation is a standard probing technique for CNNs in which a bias change uniformly shifts activations to test headroom before saturation).
Claim 8
Regarding Claim 8, Liu in view of Choi, further in view of Desappan teaches The quantization method of claim 7, wherein
the activation map is an activation map, among activation maps corresponding to a plurality of output channels of the any one convolutional layer, in which an activation value changes according to the external environmental change (Desappan in Abstract and ¶4-6, 25-27 discloses quantized DNNs processing image inputs with plural preset parameters; activation map-based range detection in conv layers).
Claim 10
Regarding Claim 10, Liu in view of Choi, further in view of Desappan teaches The quantization method of claim 1, wherein
performing quantization calibration for the deep learning neural network model to determine a new quantization parameter corresponding to the feature change of input data caused by the change in the external environment comprises:
determining any one quantization parameter set corresponding to the feature change of input data among a plurality of pre-generated quantization parameter sets (Liu in ¶103 and Fig. 6-8 discloses preset bit widths or intervals are updated based on new parameters or errors over iterations.).
Claim 11
Regarding Claim 11, Liu in view of Choi, further in view of Desappan teaches The quantization method of claim 10, wherein
the plurality of pre-generated quantization parameter sets comprise quantization parameters for a plurality of layers of the deep learning neural network model determined based on input image data corresponding to individual external environments among a plurality of preset external environments (Under the broadest reasonable interpretation, the term “change in an external environment” is interpreted as encompassing any change that could affect the input data, consistent with its plain meaning to one of ordinary skill in the art absent a narrower definition in the specification. Liu in ¶124-151 and Fig. 6 discloses detecting variations in data during training iterations.).
Claim 12
Regarding Claim 12, Liu teaches A quantization device comprising a processor configured to:
detect a feature change of input data caused by a change in an external environment (Liu in ¶124-151 and Fig. 6 discloses detecting variations in data during training iterations.),
perform quantization calibration for the deep learning neural network model to determine a new quantization parameter corresponding to the feature change of input data caused by the change in the external environment (Liu in ¶124-151 and Fig. 6-8 discloses parameters are determined/adjusted based on analyzing results or variation trends to minimize error.); and
update at least one quantization parameters among the plurality of preset quantization parameters based on the new quantization parameter (Liu in ¶103 and Fig. 6-8 discloses preset bit widths or intervals are updated based on new parameters or errors over iterations.).
Liu does not explicitly teach all of detect a feature change of input data caused by a change in an external environment in a deep learning neural network model quantized based on a plurality of preset quantization parameters, wherein the external environment is an environment that is external to the deep learning neural network model, wherein the change in the external environment occurs when the deep learning neural network performing an inference in an environment different from an environment corresponding to a calibration data used in an initial quantization process, and wherein the input feature change detector is further configured to detect the feature change comprises detecting whether a change in the input data occurred using any one of a plurality of activation maps of the deep learning neural network model whose activation value changes according to the change in the external environment.
However, Choi teaches in a deep learning neural network model quantized based on a plurality of preset quantization parameters (Choi in Abstract, ¶27-30 and Fig. 1-2 discloses CNN processes input image data in a quantized model using multiple preset precision options assigned per-layer).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Liu by processing image data using a quantized deep learning model with a plurality of preset parameters, as taught by Choi, since both references are analogous art in the field of neural network-based image processing; thus, one of ordinary skill in the art would have been motivated to combine the references because combining Liu’s adaptive quantization calibration, responsive to detected variations in input data during training, with Choi’s layer-specific preset precision options for quantized image processing yields the predictable result of enabling robust quantization of image data under varying conditions.
Liu in view of Choi does not explicitly teach all of detect a feature change of input data caused by a change in an external environment in a deep learning neural network model quantized based on a plurality of preset quantization parameters, wherein the external environment is an environment that is external to the deep learning neural network model, wherein the change in the external environment occurs when the deep learning neural network performing an inference in an environment different from an environment corresponding to a calibration data used in an initial quantization process, and wherein the input feature change detector is further configured to detect the feature change comprises detecting whether a change in the input data occurred using any one of a plurality of activation maps of the deep learning neural network model whose activation value changes according to the change in the external environment.
However, Desappan teaches detect a feature change of input data caused by a change in an external environment (Desappan in FIG.2, ¶4-6, 30-32 discloses runtime detection of input feature/range changes caused by real-world inference inputs that differ from training/calibration data),
in a deep learning neural network model quantized based on a plurality of preset quantization parameters (Desappan in FIG. 7B-D, ¶23-28, 40 discloses input image data of a CNN model that was initially quantized using preset parameters (min/max ranges) determined from training/calibration data),
wherein the external environment is an environment that is external to the deep learning neural network model (Desappan in FIG. 2, ¶4-6 discloses real-world inference environment that is external to the model and differs from the training/calibration environment),
wherein the change in the external environment occurs when the deep learning neural network performing an inference in an environment different from an environment corresponding to a calibration data used in an initial quantization process (Desappan in ¶4-6, 30-32, 45 discloses change occurs precisely during inference when new input images produce ranges not covered by initial training statistics), and
wherein the input feature change detector is further configured to detect the feature change comprises detecting whether a change in the input data occurred using any one of a plurality of activation maps of the deep learning neural network model whose activation value changes according to the change in the external environment (The Examiner acknowledges the use of “any one of” in the claim language. Desappan in ¶27, 32-37, 40-48, 55 discloses detection uses min/max ranges statistics on per-layer/per-channel feature maps (activation outputs of convolution layers). The values change with varying input images during inference).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Liu in view of Choi by incorporating the runtime inference-time feature map range monitoring and dynamic re-quantization technique taught by Desappan, since both references are analogous art in the field of quantized deep neural network inference optimization for computer vision tasks; thus, one of ordinary skill in the art would have been motivated to combine the references because combining Liu in view of Choi’s variation trend-based quantization parameter determination with Desappan’s runtime activation/feature map monitoring and predicted min/max-based dynamic calibration yields the predictable result of providing more accurate and robust quantization when input data distributions shift due to real-world environmental changes.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claims 2-4 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (US 20210286688 A1, hereafter referred to as Liu) in view of Choi et al. (US 20210064954 A1, hereafter referred to as Choi), further in view of Desappan et al. (US 20190012559 A1, hereafter referred to as Desappan), further in view of Wan et al. (NPL: “ALERT: Accurate Learning for Energy and Timeliness”, hereafter referred to as Wan).
Claim 2
Regarding Claim 2, Liu in view of Choi, further in view of Desappan teaches The quantization method of claim 1,
Liu in view of Choi, further in view of Desappan does not explicitly teach all of the feature change of input data caused by the change in the external environment is a change that occurs in response to a change in time at which the deep learning neural network model performs inference.
However, Wan teaches the feature change of input data caused by the change in the external environment is a change that occurs in response to a change in time at which the deep learning neural network model performs inference (Wan in Abstract, Fig. 1, and page 353/354 right column discloses using a Kalman filter to estimate a global slowdown factor from temporal volatility in streamed sensor data; selects DNN variants with different numeric precisions and adjusts system resources based on the new slowdown estimates).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Liu in view of Choi, further in view of Desappan, by interpreting the feature change of input data caused by the change in the external environment as a change that occurs in response to a change in time at which the deep learning neural network model performs inference, as taught by Wan, since the references are analogous art in the field of neural network-based image processing; thus, one of ordinary skill in the art would have been motivated to combine the references because combining Liu in view of Choi, further in view of Desappan’s iterative quantization calibration and per-layer preset precision with Wan’s Kalman filter-based estimation of slowdown factors from temporal sensor data yields the predictable result of enabling dynamic quantization adjustments that account for inference timing changes, thereby reducing computational overhead while maintaining accuracy.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claim 3
Regarding Claim 3, Liu in view of Choi, further in view of Desappan teaches The quantization method of claim 1.
Liu in view of Choi, further in view of Desappan does not explicitly teach that the feature change of input data caused by the change in the external environment is a change that occurs in response to a change in a position at which the deep learning neural network model performs inference.
However, Wan teaches that the feature change of input data caused by the change in the external environment is a change that occurs in response to a change in a position at which the deep learning neural network model performs inference (Wan in Abstract, Fig. 1, and page 353/354 right column discloses how, in robotic vision systems, a change in the robot’s position changes the external environment, causing feature changes in input data that require adaptation during DNN inference).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Liu in view of Choi, further in view of Desappan to interpret the feature change of input data caused by the change in the external environment as a change that occurs in response to a change in a position at which the deep learning neural network model performs inference, as taught by Wan, since both references are analogous art in the field of neural network-based image processing. One of ordinary skill in the art would have been motivated to combine the references because combining Liu in view of Choi, further in view of Desappan’s quantization calibration responsive to general external feature variations in input data with Wan’s position-induced environmental shifts yields the predictable result of facilitating spatially aware quantization updates that handle position-dependent input changes during inference, thereby enhancing model robustness and efficiency.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claim 4
Regarding Claim 4, Liu in view of Choi, further in view of Desappan teaches The quantization method of claim 1.
Liu in view of Choi, further in view of Desappan does not explicitly teach that detecting the feature change of input data caused by the change in the external environment, from input image data of the quantized deep learning neural network model based on the plurality of preset quantization parameters, is performed at a preset time interval while the deep learning neural network model performs inference.
However, Wan teaches that detecting the feature change of input data caused by the change in the external environment, from input image data of the quantized deep learning neural network model based on the plurality of preset quantization parameters, is performed at a preset time interval while the deep learning neural network model performs inference (Wan in Abstract, Fig. 1, and page 353/354 right column discloses runtime measurements of inference performance performed after each inference task in dynamic environments, where tasks occur at preset intervals while the DNN model continuously performs inference).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Liu in view of Choi, further in view of Desappan by detecting the feature change caused by the external environment at a preset time interval while the deep learning neural network model performs inference, as taught by Wan, since both references are analogous art in the field of neural network-based image processing. One of ordinary skill in the art would have been motivated to combine the references because combining Liu in view of Choi, further in view of Desappan’s quantization calibration for external feature changes in quantized image processing DNNs with Wan’s periodic runtime measurements at preset intervals during DNN inference tasks yields the predictable result of enabling scheduled detection and adaptation of quantization parameters to environmental shifts, thereby improving responsiveness in real-time applications without interrupting inference flow.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claim 9
Regarding Claim 9, Liu in view of Choi, further in view of Desappan teaches The quantization method of claim 1, and further teaches determining a new input quantization parameter based on the first input quantization parameters and a second input quantization parameter included in the plurality of preset quantization parameters (Liu in ¶103 and Figs. 6-8 discloses that preset bit widths or intervals are updated based on new parameters or errors over iterations).
Liu in view of Choi, further in view of Desappan does not explicitly teach that performing quantization calibration for the deep learning neural network model to determine a new quantization parameter corresponding to the feature change of input data caused by the change in the external environment comprises calculating first input quantization parameters based on a plurality of input image data corresponding to a preset time section.
However, Wan teaches that performing quantization calibration for the deep learning neural network model to determine a new quantization parameter corresponding to the feature change of input data caused by the change in the external environment comprises:
calculating first input quantization parameters based on a plurality of input image data corresponding to a preset time section (Wan in Abstract, Fig. 1, and page 353/354 right column discloses runtime measurements of inference performance performed after each inference task in dynamic environments, where tasks occur at preset intervals while the DNN model continuously performs inference).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Liu in view of Choi, further in view of Desappan by performing quantization calibration that calculates first input quantization parameters based on a plurality of input image data corresponding to a preset time section, as taught by Wan, since both references are analogous art in the field of neural network-based image processing. One of ordinary skill in the art would have been motivated to combine the references because combining Liu in view of Choi, further in view of Desappan’s iterative updating of preset quantization parameters from error-based analysis with Wan’s runtime aggregation of inference measurements over preset time intervals for continuous DNN tasks yields the predictable result of robust, time-section-specific parameter calculations, thereby enhancing calibration accuracy and reducing latency.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Liu et al (US 20210286688 A1, hereafter referred to as Liu) in view of Choi et al (US 20210064954 A1, hereafter referred to as Choi), further in view of Desappan et al (US 20190012559 A1, hereafter referred to as Desappan), further in view of Lowell et al (US 20190188557 A1, hereafter referred to as Lowell).
Claim 13
Regarding Claim 13, Liu in view of Choi, further in view of Desappan teaches The quantization device of claim 12.
Liu in view of Choi, further in view of Desappan does not explicitly teach that the input feature change detector performs an operation for detecting the feature change of input data caused by the change in the external environment using at least one channel among remaining channels excluding M channels allocated for operation of the deep learning neural network model among N channels of a parallel processor that performs operation of the deep learning neural network model.
However, Lowell teaches that the input feature change detector performs an operation for detecting the feature change of input data caused by the change in the external environment using at least one channel among remaining channels excluding M channels allocated for operation of the deep learning neural network model among N channels of a parallel processor that performs operation of the deep learning neural network model (Lowell in ¶21-26 discloses a parallel processor with multiple channels; in a parallel system such as the APD, core DLNN operations naturally occupy M channels while the remaining channels handle preparatory computations).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Liu in view of Choi, further in view of Desappan by performing input feature change detection using at least one channel among remaining channels excluding M channels allocated for deep learning neural network model operation among N channels of a parallel processor, as taught by Lowell, since both references are analogous art in the field of neural network-based image processing. One of ordinary skill in the art would have been motivated to combine the references because combining Liu in view of Choi, further in view of Desappan’s feature change detection under environment shifts for quantized DNN devices with Lowell’s multi-channel allocation in parallel processors, where tasks occupy dedicated channels and spare channels handle preparatory detection, yields the predictable result of leveraging underutilized processor channels without disrupting primary DNN computations, thereby enhancing model efficiency.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Liu et al (US 20210286688 A1, hereafter referred to as Liu) in view of Choi et al (US 20210064954 A1, hereafter referred to as Choi), further in view of Desappan et al (US 20190012559 A1, hereafter referred to as Desappan), further in view of Lowell et al (US 20190188557 A1, hereafter referred to as Lowell), further in view of Wang et al (US 20210334142 A1, hereafter referred to as Wang).
Claim 14
Regarding Claim 14, Liu in view of Choi, further in view of Desappan, further in view of Lowell teaches The quantization device of claim 13.
Liu in view of Choi, further in view of Desappan, further in view of Lowell does not explicitly teach that the parallel processor is a parallel processor having a systolic array structure.
However, Wang teaches that the parallel processor is a parallel processor having a systolic array structure (Wang in Abstract and ¶35 discloses a parallel processor and a systolic array).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Liu in view of Choi, further in view of Desappan, further in view of Lowell by specifying that the parallel processor is a parallel processor having a systolic array structure, as taught by Wang, since all of the references are analogous art in the field of neural network-based image processing. One of ordinary skill in the art would have been motivated to combine the references because combining Lowell’s multi-channel parallel processor allocation for DNN operations and feature change detection with Wang’s systolic array architecture yields the predictable result of optimizing both quantization and detection operations.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUSTIN P CASCAIS whose telephone number is (703)756-5576. The examiner can normally be reached Monday-Friday 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mr. O’Neal Mistry can be reached on (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.P.C./Examiner, Art Unit 2674
/ONEAL R MISTRY/Supervisory Patent Examiner, Art Unit 2674
Date: 2/18/2026