Prosecution Insights
Last updated: April 19, 2026
Application No. 17/179,781

METHOD AND APPARATUS WITH CONVOLUTION OPERATION PROCESSING BASED ON REDUNDANCY REDUCTION

Status: Non-Final OA (§103)
Filed: Feb 19, 2021
Examiner: MCINTOSH, ANDREW T
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 3 (Non-Final)

Predictions: 77% grant probability (Favorable) • 3-4 OA rounds • 3y 0m to grant • 95% grant probability with interview

Examiner Intelligence

Career Allow Rate: 77%, above average (393 granted / 511 resolved; +21.9% vs TC avg)
Interview Lift: +18.0%, a strong lift for resolved cases with interview vs. without
Typical Timeline: 3y 0m avg prosecution; 27 currently pending
Career History: 538 total applications across all art units

Statute-Specific Performance

§101: 14.1% (-25.9% vs TC avg)
§103: 56.7% (+16.7% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)
Comparisons are against a Tech Center average estimate • Based on career data from 511 resolved cases

Office Action

§103
DETAILED ACTION

This action is in response to Applicant's Request for Continued Examination ("Response") received on January 27, 2025 in response to the Office Action dated November 25, 2024. This action is made Non-Final. Claims 1-26 are pending. Claims 1, 13, 14, 20, 21, and 25 are independent claims. Claims 1-26 are rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Applicant's Response

In Applicant's Response, Applicant amended claims 1, 14, 21, and 25, and submitted arguments against the prior art in the Office Action dated November 25, 2024.

Information Disclosure Statement

The information disclosure statements (IDSs) submitted on 02/21/2025 and 09/23/2025 are in compliance with the provisions of 37 C.F.R. 1.97. Accordingly, the IDSs are being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 11-14, 19-21, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Botimer et al., US Publication 2021/0011732 ("Botimer"), in view of Mathew et al., US Publication 2018/0181864 ("Mathew").
Claim 1: Botimer teaches or suggests a processor-implemented neural network layer convolution operation method, the method comprising: obtaining a first input plane of an input feature map and a first weight plane of a weight kernel (see Figs. 1-4B; para. 0002 - exemplary convolution of a weights matrix 110 with an input feature map matrix 120 according to the conventional art; para. 0005 - a current weight value (0,0,0) and a current input feature map value (0,0,0) can be iterated through corresponding input channels of the input feature map and corresponding input channels of the weights); generating base planes, corresponding to an intermediate operation between ... of the first input plane and ... at least a portion of available weight values of the weight kernel (see Figs. 1-4B; para. 0005 - convolution can begin with loading a current weight value (0,0,0) and a current input feature map value (0,0,0) from memory 210 into a multiply and accumulate unit, iterated through corresponding input channels of the input feature map and corresponding input channels of the weights); generating first accumulation data based on at least one plane corresponding to weight element values of the first weight plane among the first input plane and the base planes (see Figs. 1-4B; para. 0005 - a multiply and accumulate operation can be performed using the current weight value and the current input feature map value to generate a corresponding current accumulated value; for example, the multiply and accumulate unit 210 can accumulate the product of the current weight value (0,0,0) and the current input feature map value (0,0,0) during the first cycle (T=0); at 330, the operations at 310 and 320 can be iterated through corresponding input channels of the input feature map and corresponding input channels of the weights); and generating a first output plane of an output feature map based on the first accumulation data (see Figs. 1-4B; para. 0006 - the current accumulated value from the multiply and accumulate unit can be output as a corresponding output feature map value, e.g., as a corresponding output feature map value (1,1,0) in a first output channel of the output feature map).

Mathew more specifically teaches or suggests generating corresponding to an intermediate operation between elements of the first input plane and a respective one of at least a portion of available weight values of the weight kernel (see para. 0024 - coefficients or weights of the filters; para. 0046 - each data element of a block of the feature map corresponding to the coefficient value is then multiplied 1002 by the coefficient value and the results of the multiplications are added 1004 to corresponding data elements in a block of the output feature map; para. 0047 - a block of a feature map corresponds to a coefficient value when all of the data elements in the block would be multiplied by the coefficient value as the filter is applied across the feature map using the prior art convolution; para. 0080 - multiplies each of the values in all or a subset of an input feature map/image/data block by the respective weight value and accumulates the product of the multiplication with a corresponding value of a previous feature map to generate a respective output value, for each of the values in all or a subset of an input feature map/image/data block with respect to a single weight value).

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method taught in Botimer to include generating corresponding to an intermediate operation between elements of the first input plane and a respective one of at least a portion of available weight values of the weight kernel, for the purpose of efficiently carrying out convolution operations by performing operations for an individual weight value prior to carrying out convolution operations for other weight values for the same feature map, allowing convolution operations to be skipped and improving convolutional neural network performance, as taught by Mathew (paras. 0033, 0077, and 0080).

Claims 13 and 14: Claims 13 and 14 correspond to claim 1, and thus Botimer and Mathew teach or suggest the limitations of claims 13 and 14 as well.

Claim 11: Botimer further teaches or suggests wherein the first input plane and the first weight plane correspond to a first input channel among a plurality of input channels, and the first output plane corresponds to a first output channel among a plurality of output channels (see Figs. 1-4B; para. 0002 - exemplary convolution of a weights matrix 110 with an input feature map matrix 120 according to the conventional art; para. 0005 - a current weight value (0,0,0) and a current input feature map value (0,0,0) can be iterated through corresponding input channels of the input feature map and corresponding input channels of the weights; parameters of the weights matrix 110, the input feature map matrix 120, and the output feature map 130 are set forth in Table 1; para. 0006 - the current accumulated value from the multiply and accumulate unit can be output as a corresponding output feature map value, e.g., as a corresponding output feature map value (1,1,0) in a first output channel of the output feature map).
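The per-weight-value convolution the rejection attributes to Mathew (paras. 0046-0047, 0080) can be sketched in a few lines. This is an illustrative reading, not code from either reference; the function name and shapes are assumptions. Each weight element multiplies a whole shifted block of the input plane at once, and zero weights are skipped entirely, which is the operation-skipping benefit cited in the rationale:

```python
import numpy as np

def conv2d_per_weight(input_plane, weight_plane):
    """Per-weight-value 2D convolution sketch: for each weight element,
    multiply the whole shifted input block by that single weight and add
    it into the output, skipping zero weights."""
    kh, kw = weight_plane.shape
    ih, iw = input_plane.shape
    oh, ow = ih - kh + 1, iw - kw + 1  # 'valid' convolution, stride 1
    out = np.zeros((oh, ow))
    for r in range(kh):
        for c in range(kw):
            w = weight_plane[r, c]
            if w == 0:  # redundancy reduction: no work for a zero weight
                continue
            out += w * input_plane[r:r + oh, c:c + ow]
    return out
```

The result matches an ordinary sliding-window convolution; only the loop order (weight-major rather than window-major) and the zero-skip differ.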
Claim 12: Botimer further teaches or suggests generating second accumulation data based on a second input plane of the input feature map and a second weight plane of the weight kernel, wherein the generating of the first output plane comprises generating the first output plane by accumulating the first accumulation data and the second accumulation data (see Figs. 1-4B; para. 0005 - computation of the convolution can begin with loading a current weight value (0,0,0) and a current input feature map value (0,0,0); a multiply and accumulate operation can be performed using the current weight value and the current input feature map value to generate a corresponding current accumulated value; for example, the multiply and accumulate unit 210 can accumulate the product of the current weight value (0,0,0) and the current input feature map value (0,0,0) during the first cycle (T=0); at 330, the operations at 310 and 320 can be iterated through corresponding input channels of the input feature map and corresponding input channels of the weights; at 340, the operations at 310-330 can be iterated through kernel height and kernel width of the weights, and corresponding map width and map height of the input feature map; for example, at a second cycle (T=1), a second weight value (0,0,1) and a second input feature map value (0,0,1) can be loaded from memory into the multiply and accumulate unit 240; the product 410 of the current weight value and the current input feature map value can be added 420 to the accumulated value from the first cycle and held in the accumulator).
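The per-channel accumulation Botimer is cited for here can be sketched as follows. This is a sketch under stated assumptions (names and the list-of-planes layout are mine): accumulation data is generated for each input plane / weight plane pair in turn, and the per-channel results are summed into one output plane, mirroring the iteration through input channels at 330:

```python
import numpy as np

def mac_accumulate(input_planes, weight_planes):
    """Generate accumulation data per channel (plane pair), then sum
    the per-channel results into a single output plane."""
    acc = None
    for x, k in zip(input_planes, weight_planes):
        kh, kw = k.shape
        oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
        data = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                # multiply-and-accumulate over one kernel window
                data[i, j] = (x[i:i+kh, j:j+kw] * k).sum()
        acc = data if acc is None else acc + data  # accumulate channels
    return acc
```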
Claim 19: Botimer further teaches or suggests a memory storing instructions that, when executed by the processor, configure the processor to perform the obtaining of the first input plane, the generating of the base planes, the generating of the first accumulation data, and the generating of the first output plane (see Figs. 1-5).

Claim 20: As indicated above, Botimer and Mathew teach or suggest the apparatus of claim 14. Mathew further teaches or suggests a camera configured to generate an input image based on detected visual information, wherein the apparatus of claim 14 is a processor, and the input feature map corresponds to the input image (see Fig. 15; para. 0024 - coefficients or weights of the filters; para. 0046 - each data element of a block of the feature map corresponding to the coefficient value is then multiplied 1002 by the coefficient value and the results of the multiplications are added 1004 to corresponding data elements in a block of the output feature map; para. 0047 - a block of a feature map corresponds to a coefficient value when all of the data elements in the block would be multiplied by the coefficient value as the filter is applied across the feature map using the prior art convolution; para. 0068 - BMA-based convolution kernel as described herein in which the CNN is trained to process frames captured by the camera; para. 0080 - multiplies each of the values in all or a subset of an input feature map/image/data block by the respective weight value and accumulates the product of the multiplication with a corresponding value of a previous feature map to generate a respective output value, for each of the values in all or a subset of an input feature map/image/data block with respect to a single weight value).

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method taught in Botimer to include a camera configured to generate an input image based on detected visual information, wherein the apparatus of claim 14 is a processor and the input feature map corresponds to the input image, for the purpose of efficiently carrying out convolution operations by performing operations for an individual weight value prior to carrying out convolution operations for other weight values for the same feature map, allowing convolution operations to be skipped and improving convolutional neural network performance, as taught by Mathew (paras. 0033, 0077, and 0080).

Claim 21: Botimer teaches or suggests a processor configured to: obtain a first input plane of an input feature map corresponding to the input image and a first weight plane of a weight kernel (see Figs. 1-4B; para. 0002 - exemplary convolution of a weights matrix 110 with an input feature map matrix 120 according to the conventional art; para. 0005 - a current weight value (0,0,0) and a current input feature map value (0,0,0) can be iterated through corresponding input channels of the input feature map and corresponding input channels of the weights); generate base planes corresponding to an intermediate operation ... of the first input plane and ... at least a portion of available weight values of the weight kernel (see Figs. 1-4B; para. 0005 - convolution can begin with loading a current weight value (0,0,0) and a current input feature map value (0,0,0) from memory 210 into a multiply and accumulate unit, iterated through corresponding input channels of the input feature map and corresponding input channels of the weights); generate first accumulation data based on at least one plane corresponding to weight element values of the first weight plane among the first input plane and the base planes (see Figs. 1-4B; para. 0005 - a multiply and accumulate operation can be performed using the current weight value and the current input feature map value to generate a corresponding current accumulated value; for example, the multiply and accumulate unit 210 can accumulate the product of the current weight value (0,0,0) and the current input feature map value (0,0,0) during the first cycle (T=0); at 330, the operations at 310 and 320 can be iterated through corresponding input channels of the input feature map and corresponding input channels of the weights); and generate a first output plane of an output feature map based on the first accumulation data (see Figs. 1-4B; para. 0006 - the current accumulated value from the multiply and accumulate unit can be output as a corresponding output feature map value, e.g., as a corresponding output feature map value (1,1,0) in a first output channel of the output feature map).

Mathew more specifically teaches or suggests a camera configured to generate an input image based on detected visual information, and generating corresponding to an intermediate operation between each element of the first input plane and a respective one of at least a portion of available weight values of the weight kernel (see Fig. 15; para. 0024 - coefficients or weights of the filters; para. 0046 - each data element of a block of the feature map corresponding to the coefficient value is then multiplied 1002 by the coefficient value and the results of the multiplications are added 1004 to corresponding data elements in a block of the output feature map; para. 0047 - a block of a feature map corresponds to a coefficient value when all of the data elements in the block would be multiplied by the coefficient value as the filter is applied across the feature map using the prior art convolution; para. 0068 - BMA-based convolution kernel as described herein in which the CNN is trained to process frames captured by the camera; para. 0080 - multiplies each of the values in all or a subset of an input feature map/image/data block by the respective weight value and accumulates the product of the multiplication with a corresponding value of a previous feature map to generate a respective output value, for each of the values in all or a subset of an input feature map/image/data block with respect to a single weight value).

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method taught in Botimer to include generating corresponding to multiplication results between each element of the input plane and a respective one of available weight values of the weight kernel, for the purpose of efficiently carrying out convolution operations by performing operations for an individual weight value prior to carrying out convolution operations for other weight values for the same feature map, allowing convolution operations to be skipped and improving convolutional neural network performance, as taught by Mathew (paras. 0033, 0077, and 0080).

Claim 25: Botimer teaches or suggests a processor-implemented neural network layer convolution operation method, the method comprising: obtaining an input plane of an input feature map and a weight plane of a weight kernel (see Figs. 1-4B; para. 0002 - exemplary convolution of a weights matrix 110 with an input feature map matrix 120 according to the conventional art; para. 0005 - a current weight value (0,0,0) and a current input feature map value (0,0,0) can be iterated through corresponding input channels of the input feature map and corresponding input channels of the weights); generating base planes corresponding to multiplication results between ... of the input plane and ... available weight values of the weight kernel (see Figs. 1-4B; para. 0005 - convolution can begin with loading a current weight value (0,0,0) and a current input feature map value (0,0,0) from memory 210 into a multiply and accumulate unit, iterated through corresponding input channels of the input feature map and corresponding input channels of the weights); determining target regions among the base planes and the input plane that correspond to weight elements of the weight plane, based on weight values of the weight elements and positions of the weight elements in the weight plane (see Figs. 1-4B; para. 0005 - computation of the convolution can begin with loading a current weight value (0,0,0) and a current input feature map value (0,0,0); a multiply and accumulate operation can be performed using the current weight value and the current input feature map value to generate a corresponding current accumulated value; for example, the multiply and accumulate unit 210 can accumulate the product of the current weight value (0,0,0) and the current input feature map value (0,0,0) during the first cycle (T=0); at 330, the operations at 310 and 320 can be iterated through corresponding input channels of the input feature map and corresponding input channels of the weights; at 340, the operations at 310-330 can be iterated through kernel height and kernel width of the weights, and corresponding map width and map height of the input feature map; for example, at a second cycle (T=1), a second weight value (0,0,1) and a second input feature map value (0,0,1) can be loaded from memory into the multiply and accumulate unit 240); and generating a portion of an output plane of an output feature map by accumulating the target regions (see Figs. 1-4B; para. 0005 - the product 410 of the current weight value and the current input feature map value can be added 420 to the accumulated value from the first cycle and held in the accumulator).
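The base-plane scheme recited in claim 25 (precompute one plane per available weight value, then pick and accumulate position-shifted target regions) can be sketched as follows. This is an illustrative reconstruction from the claim language, not code from the application; the function name, the small set of available weight values, and the 'valid'-convolution framing are assumptions:

```python
import numpy as np

def conv_via_base_planes(input_plane, weight_plane, available_values=(0, 1, 2, 3)):
    """Base-plane convolution sketch: each base plane holds the
    multiplication result of the input plane with one available weight
    value; each weight element then selects the base plane matching its
    value, and the region offset by its position is accumulated."""
    # intermediate products shared by every weight element with that value
    base = {v: v * input_plane for v in available_values}
    kh, kw = weight_plane.shape
    oh = input_plane.shape[0] - kh + 1
    ow = input_plane.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for r in range(kh):
        for c in range(kw):
            v = weight_plane[r, c]
            # target region: window of the selected base plane at offset (r, c)
            out += base[v][r:r + oh, c:c + ow]
    return out
```

The redundancy reduction comes from computing each product plane once, no matter how many weight elements share that value.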
Mathew more specifically teaches or suggests generating corresponding to multiplication results between each element of the input plane and a respective one of available weight values of the weight kernel (see para. 0024 - coefficients or weights of the filters; para. 0046 - each data element of a block of the feature map corresponding to the coefficient value is then multiplied 1002 by the coefficient value and the results of the multiplications are added 1004 to corresponding data elements in a block of the output feature map; para. 0047 - a block of a feature map corresponds to a coefficient value when all of the data elements in the block would be multiplied by the coefficient value as the filter is applied across the feature map using the prior art convolution; para. 0080 - multiplies each of the values in all or a subset of an input feature map/image/data block by the respective weight value and accumulates the product of the multiplication with a corresponding value of a previous feature map to generate a respective output value, for each of the values in all or a subset of an input feature map/image/data block with respect to a single weight value).

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method taught in Botimer to include generating corresponding to multiplication results between each element of the input plane and a respective one of available weight values of the weight kernel, for the purpose of efficiently carrying out convolution operations by performing operations for an individual weight value prior to carrying out convolution operations for other weight values for the same feature map, allowing convolution operations to be skipped and improving convolutional neural network performance, as taught by Mathew (paras. 0033, 0077, and 0080).

Claims 2, 5, 15, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Botimer, in view of Mathew, and further in view of Han, US Publication 2019/0362522 ("Han").

Claim 2: Botimer further teaches or suggests determining a first target plane corresponding to a weight value of a first weight element of the first weight plane among the first input plane and the base planes; and generating the first accumulation data by performing an accumulation operation based on target elements of the first target region (see Figs. 1-4B; para. 0005 - computation of the convolution can begin with loading a current weight value (0,0,0) and a current input feature map value (0,0,0); a multiply and accumulate operation can be performed using the current weight value and the current input feature map value to generate a corresponding current accumulated value; for example, the multiply and accumulate unit 210 can accumulate the product of the current weight value (0,0,0) and the current input feature map value (0,0,0) during the first cycle (T=0); at 330, the operations at 310 and 320 can be iterated through corresponding input channels of the input feature map and corresponding input channels of the weights; at 340, the operations at 310-330 can be iterated through kernel height and kernel width of the weights, and corresponding map width and map height of the input feature map; for example, at a second cycle (T=1), a second weight value (0,0,1) and a second input feature map value (0,0,1) can be loaded from memory into the multiply and accumulate unit 240; the product 410 of the current weight value and the current input feature map value can be added 420 to the accumulated value from the first cycle and held in the accumulator).

Botimer does not explicitly disclose determining a first target region in the first target plane based on an offset of the first weight element. Han teaches or suggests determining a first target region in the first target plane based on an offset of the first weight element (see Figs. 7A, 7B; para. 0110 - a learnable filter function may be defined by a matrix of weights W, where each weight is to be applied to an image pixel during a convolution operation, and an offset value b; weight matrix W and offset b are among the model parameters that need to be learned during the training stage; para. 0112 - an exemplary 3x3 filter function (comprising weight matrix 711a and offset value 711b) and activation function are applied to an exemplary 4x4 input feature map 713a to generate a 4x4 output feature map 713c; offset value 711b may be a single offset value; output feature map 713c is generated by summing each element of intermediate map 713b with offset value 711b; output feature map 713c may be computed as the learnable filter slides to overlap with different portions of the input feature map; para. 0113 - the corresponding value may be summed with an offset value, and an activation function may be applied to the result to generate a three-dimensional output feature map; para. 0116 - a learnable filter function may be described by a weight matrix and an offset value).

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method taught in Botimer to include determining a first target region in the first target plane based on an offset of the first weight element, for the purpose of efficiently using learned model parameters during convolution operations, improving CNN results, as taught by Han (paras. 0108-0113).
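The learnable filter function Han is cited for (a weight matrix W applied by sliding convolution, each result summed with a single learned offset value b, then passed through an activation function) can be sketched as below. This is a sketch under stated assumptions: ReLU stands in for the unspecified activation, and a 'valid' slide is shown even though Han's Fig. 7 example keeps the 4x4 size:

```python
import numpy as np

def apply_filter_with_offset(fmap, W, b):
    """Slide weight matrix W over the feature map, sum each windowed
    product with the single offset value b, then apply an activation
    (ReLU here) to the offset sum."""
    kh, kw = W.shape
    oh, ow = fmap.shape[0] - kh + 1, fmap.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # intermediate value for this window, then add the offset b
            out[i, j] = (fmap[i:i+kh, j:j+kw] * W).sum() + b
    return np.maximum(out, 0.0)  # activation applied to the result
```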
Claim 15: Claim 15 corresponds to claim 2, and thus Botimer, Mathew, and Han teach or suggest the limitations of claim 15 as well.

Claim 5: Botimer further teaches or suggests wherein generating the first accumulation data further comprises: determining a second target plane corresponding to a weight value of a second weight element of the first weight plane among the first input plane and the base planes; and the performing of the accumulation operation comprises accumulating target elements of the first target region and corresponding target elements of the second target region (see Figs. 1-4B; para. 0005 - computation of the convolution can begin with loading a current weight value (0,0,0) and a current input feature map value (0,0,0); a multiply and accumulate operation can be performed using the current weight value and the current input feature map value to generate a corresponding current accumulated value; for example, the multiply and accumulate unit 210 can accumulate the product of the current weight value (0,0,0) and the current input feature map value (0,0,0) during the first cycle (T=0); at 330, the operations at 310 and 320 can be iterated through corresponding input channels of the input feature map and corresponding input channels of the weights; at 340, the operations at 310-330 can be iterated through kernel height and kernel width of the weights, and corresponding map width and map height of the input feature map; for example, at a second cycle (T=1), a second weight value (0,0,1) and a second input feature map value (0,0,1) can be loaded from memory into the multiply and accumulate unit 240; the product 410 of the current weight value and the current input feature map value can be added 420 to the accumulated value from the first cycle and held in the accumulator).

Botimer does not explicitly disclose determining a second target region in the second target plane based on an offset of the second weight element. Han teaches or suggests determining a second target region in the second target plane based on an offset of the second weight element (see Figs. 7A, 7B; para. 0110 - a learnable filter function may be defined by a matrix of weights W, where each weight is to be applied to an image pixel during a convolution operation, and an offset value b; weight matrix W and offset b are among the model parameters that need to be learned during the training stage; para. 0112 - an exemplary 3x3 filter function (comprising weight matrix 711a and offset value 711b) and activation function are applied to an exemplary 4x4 input feature map 713a to generate a 4x4 output feature map 713c; offset value 711b may be a single offset value; output feature map 713c is generated by summing each element of intermediate map 713b with offset value 711b; output feature map 713c may be computed as the learnable filter slides to overlap with different portions of the input feature map; para. 0113 - the corresponding value may be summed with an offset value, and an activation function may be applied to the result to generate a three-dimensional output feature map; para. 0116 - a learnable filter function may be described by a weight matrix and an offset value).

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method taught in Botimer to include determining a second target region in the second target plane based on an offset of the second weight element, for the purpose of efficiently using learned model parameters during convolution operations, improving CNN results, as taught by Han (paras. 0108-0113).
Claim 22: Botimer further teaches or suggests determining a first target plane corresponding to a weight value of a first weight element of the first weight plane among the first input plane and the base planes; generating the first accumulation data by performing an accumulation operation based on target elements of the first target region (see Fig. 1-4B; para. 0005 - computation of the convolution can begin with loading a current weight value (0,0,0) and a current input feature map value (0,0,0). multiply and accumulate operation can be performed using the current weight value and the current input feature map value to generate a corresponding current accumulated value. For example, the multiply and accumulate unit 210 can accumulate the product of the current weight value (0,0,0) and the current input feature map value (0,0,0). multiply and accumulate operation can be performed using the current weight value and the current input feature map value to generate a corresponding current accumulated value. For example, the multiply and accumulate unit 210 can accumulate the product of the current weight value (0,0,0) and the current input feature map value (0,0,0) during the first cycle (T=0). At 330, the operations at 310 and 320 can be iterated through corresponding input channels of the input feature map and corresponding input channels of the weights. At 340, the operations at 310-330 can be iterated through kernel height and kernel width of the weights, and corresponding map width and map height of the input feature map. For example, at a second cycle (T=l), a second weight value (0,0,1) and a second input feature map value (0,0, 1) can be loaded from memory into the multiply and accumulate unit 240. The product 410 of the current weight value and the current input feature map value can be added 420 to the accumulated value from the first cycle and held in the accumulator.). 
Botimer does not explicitly disclose determining a first target region in the first target plane based on an offset of the first weight element. Han teaches or suggests determining a first target region in the first target plane based on an offset of the first weight element (see Fig. 7A, 7B; para. 0110 - learnable filter function may be defined by a matrix of weights W, where each weight is to be applied to an image pixel during a convolution operation, and an offset value b. Weight matrix W and offset b are among the model parameters that need to be learned during the training stage; para. 0112 - exemplary 3x3 filter function (comprising weight matrix 711a and offset value 711b) and activation function are applied to an exemplary 4x4 input feature map 713a to generate a 4x4 output feature map 713c. offset value 711b may be a single offset value. generate output feature map 713c by summing each element of intermediate map 713b with offset value 711b; para. 0113 - corresponding value may be summed with an offset value, and an activation function may be applied to the result to generate a three-dimensional output feature map; para. 0116 - learnable filter function may be described by a weight matrix and an offset value.). Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Botimer, to include determining a first target region in the first target plane based on an offset of the first weight element for the purpose of efficiently using learned model parameters during convolution operations, improving CNN results, as taught by Han (0108-0113). Claim(s) 3, 7, 16, and 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Botimer, in view of Mathew, in view of Han, and further in view of Duong, US Patent 11,049,013 (“Duong”). 
Claim 3: Duong more specifically teaches or suggests determining the first target region using a first pointer pointing to the first target region among pointers pointing to different target regions of the first target plane based on the offset of the first weight element (see Fig. 33; col. 62, lines 10-19 - identifies (at 3310) the memory address and offset for the next filter slice in the weight block. In some embodiments, this data is provided as configuration data for the first filter slice in a pass, and then updated for each subsequent filter slice during the pass based on how much weight data is stored for each previous filter slice (i.e., based on the number of non-zero weights in the filter slice). Using the identified memory address and offset, the process 3300 reads (at 3315) the next slice identifier; col. 63, lines 47-50 – identify the memory address and offset for the next filter slice (i.e., by increasing the offset by the amount of data read)). Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Botimer, to include determining the first target region using a first pointer pointing to the first target region among pointers pointing to different target regions of the first target plane based on the offset of the first weight element for the purpose of efficiently determining subsequent slices in a CNN, improving CNN operation, as taught by Duong (col. 62 and 63). Claim(s) 16: Claim(s) 16 correspond to Claim 3, and thus, Botimer, Mathew, Han, and Duong teach or suggest the limitations of claim(s) 16 as well. Claim 7: Duong further teaches or suggests wherein the offset of the first weight element corresponds to a position of the first weight element in the first weight plane (see Fig. 33; col. 62, lines 10-19 - identifies (at 3310) the memory address and offset for the next filter slice in the weight block. 
In some embodiments, this data is provided as configuration data for the first filter slice in a pass, and then updated for each subsequent filter slice during the pass based on how much weight data is stored for each previous filter slice (i.e., based on the number of non-zero weights in the filter slice). Using the identified memory address and offset, the process 3300 reads (at 3315) the next slice identifier; col. 63, lines 47-50 – identify the memory address and offset for the next filter slice (i.e., by increasing the offset by the amount of data read)). Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Botimer, to include wherein the offset of the first weight element corresponds to a position of the first weight element in the first weight plane for the purpose of efficiently determining subsequent slices in a CNN, improving CNN operation, as taught by Duong (col. 62 and 63). Claim 23: Duong more specifically teaches or suggests determining the first target region using a first pointer pointing to the first target region among pointers pointing to different target regions of the first target plane based on the offset of the first weight element (see Fig. 33; col. 62, lines 10-19 - identifies (at 3310) the memory address and offset for the next filter slice in the weight block. In some embodiments, this data is provided as configuration data for the first filter slice in a pass, and then updated for each subsequent filter slice during the pass based on how much weight data is stored for each previous filter slice (i.e., based on the number of non-zero weights in the filter slice). Using the identified memory address and offset, the process 3300 reads (at 3315) the next slice identifier; col. 63, lines 47-50 – identify the memory address and offset for the next filter slice (i.e., by increasing the offset by the amount of data read)). 
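The address-and-offset walk quoted from Duong (start from a configured address/offset, read a slice, then advance the offset by the amount of data read, i.e., by the number of non-zero weights stored) can be pictured with a short sketch. The packed-buffer layout assumed here, a per-slice length prefix followed by that slice's weight data, is purely illustrative and is not Duong's actual encoding:

```python
def read_filter_slices(buffer, base_addr, slice_count):
    # Walk a packed weight buffer: read each slice at the current
    # address + offset, then advance the offset by the data consumed
    # (per Duong col. 63, "increasing the offset by the amount of
    # data read"). Layout assumption: [count, w0..wN-1, count, ...].
    offset = 0
    slices = []
    for _ in range(slice_count):
        n = buffer[base_addr + offset]           # non-zero weight count
        start = base_addr + offset + 1
        slices.append(buffer[start:start + n])   # this slice's weight data
        offset += 1 + n                          # advance past data read
    return slices

packed = [2, 5, 7, 1, 9, 3, 1, 2, 3]
assert read_filter_slices(packed, 0, 3) == [[5, 7], [9], [1, 2, 3]]
```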
Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Botimer, to include determining the first target region using a first pointer pointing to the first target region among pointers pointing to different target regions of the first target plane based on the offset of the first weight element for the purpose of efficiently determining subsequent slices in a CNN, improving CNN operation, as taught by Duong (col. 62 and 63). Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Botimer, in view of Mathew, in view of Han, and further in view of Croxford, US Publication 2020/0110995 (“Croxford”). Claim 4: Botimer further teaches or suggests wherein each of the base planes correspond to a respective available weight value among the portion of available weight values, and the determining of the first target plane comprises determining, as the first target plane, a base plane corresponding to an available weight value (see Fig. 1-4B; para. 0005 - computation of the convolution can begin with loading a current weight value (0,0,0) and a current input feature map value (0,0,0). multiply and accumulate operation can be performed using the current weight value and the current input feature map value to generate a corresponding current accumulated value. For example, the multiply and accumulate unit 210 can accumulate the product of the current weight value (0,0,0) and the current input feature map value (0,0,0) during the first cycle (T=0). 
At 330, the operations at 310 and 320 can be iterated through corresponding input channels of the input feature map and corresponding input channels of the weights. At 340, the operations at 310-330 can be iterated through kernel height and kernel width of the weights, and corresponding map width and map height of the input feature map. For example, at a second cycle (T=1), a second weight value (0,0,1) and a second input feature map value (0,0,1) can be loaded from memory into the multiply and accumulate unit 240. The product 410 of the current weight value and the current input feature map value can be added 420 to the accumulated value from the first cycle and held in the accumulator.). Botimer does not explicitly disclose equal to an absolute value of the weight value of the first element. Croxford teaches or suggests equal to an absolute value of the weight value of the first element (see para. 0041 - number of ways to order a kernel set, however one way is to calculate an absolute sum of the weights of each channel in the kernel; para. 0042 - absolute sum of the weights of each portion of the kernel is calculated and then the kernels having a higher absolute sum, representing the kernels which have the most significant impact when processed, are placed higher in the ordering than those having a lower absolute sum; para. 0062 - method of ordering the kernel channels 520 is to calculate an absolute sum of the weights of each channel of the kernel A, B, C. For example, kernel channel A, based on the weights shown in FIG. 5, would have an absolute sum of 5, kernel channel B will have an absolute sum of 2, and kernel channel C will have an absolute sum of 4; para. 0063 - kernel channels 520 are ordered according to their absolute sum of weights.). 
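The absolute-sum ordering quoted from Croxford (paras. 0041-0042, 0062: channels with a higher absolute sum of weights have the most significant impact and are placed higher in the ordering) reduces to a one-line sort. The flattened weight lists below are invented to reproduce the sums Croxford gives for Fig. 5 (A = 5, B = 2, C = 4); they are not the figure's actual weights:

```python
def order_kernel_channels(channels):
    # Sort kernel channels by the absolute sum of their weights,
    # highest first, per Croxford's ordering method.
    return sorted(channels.items(),
                  key=lambda kv: sum(abs(w) for w in kv[1]),
                  reverse=True)

channels = {"A": [1, -2, 2], "B": [-1, 1], "C": [2, -2]}  # sums 5, 2, 4
ordered = [name for name, _ in order_kernel_channels(channels)]
assert ordered == ["A", "C", "B"]   # highest absolute sum first
```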
Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Botimer, to include equal to an absolute value of the weight value of the first element for the purpose of efficiently determining order in a CNN based on weight values, improving CNN operation, as taught by Croxford (0041 and 0062). Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Botimer, in view of Mathew, in view of Han, and further in view of Diamond et al., US Publication 2019/0079764 (“Diamond”). Claim 6: As indicated above, Botimer teaches or suggests the first target region. Diamond further teaches or suggests wherein the first target region corresponds to one-dimensional (1D) vector data of a single-instruction multiple-data (SIMD) operation (see Fig. 2A, 2B; para. 0010 - multiple SIMD source vectors are loaded. source vectors is multiplied with respective convolution coefficient vectors; para. 0014 – SIMD instruction may be provided to perform a de-interlacing operation on computed data vectors; para. 0045 – vector instructions may be defined as single-instruction multiple-data (SIMD) instructions in the classical sense, in that they may define the same operation to be performed on multiple data elements in parallel; para. 0047 - one-dimensional convolution operation; para. 0063 - weighted value is computed using a Multiply-and-Add vector instruction common to many SIMD architectures; para. 0085 - performing direct convolution operations using SIMD instructions available on general purpose CPU cores, convolutions may be performed that are as efficient or more efficient than possible using specialized spatial convolutional neural network hardware while fitting into a conventional processor pipeline and thus minimizing or eliminating extra hardware.). 
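The vectorized style quoted from Diamond (one multiply-and-add "instruction" applied to many data elements at once, with each coefficient combined against a shifted source vector) can be sketched with NumPy vector operations standing in for SIMD multiply-and-add. This is a sliding-window sketch under that analogy, not Diamond's actual instruction sequence:

```python
import numpy as np

def conv1d_simd_style(x, coeffs):
    # 1D convolution expressed as one vector multiply-and-add per
    # coefficient: the coefficient is broadcast against a shifted
    # slice of the source vector and accumulated, so each pass
    # operates on all output elements in parallel.
    n = len(x) - len(coeffs) + 1
    acc = np.zeros(n)
    for k, c in enumerate(coeffs):
        acc += c * x[k:k + n]   # vectorized multiply-and-add
    return acc

x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(conv1d_simd_style(x, [1, 1]), [3.0, 5.0, 7.0])
```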
Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Botimer, to include wherein the first target region corresponds to one-dimensional (1D) vector data of a single-instruction multiple-data (SIMD) operation for the purpose of efficiently performing convolutions more directly, improving CNN operation, as taught by Diamond (0085). Claim(s) 8, 9, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Botimer, in view of Mathew, and further in view of Darvish Rouhani et al., US Publication 2020/0210840 (“Darvish”). Claim 8: Darvish further teaches or suggests wherein a number of available weight values is determined based on a bit precision of the weight kernel (see para. 0028 - NN weights and activation values can be represented in a lower-precision quantized format with an acceptable level of error introduced; para. 0039 - lowering the precision of a number can include reducing the range of values that can be used to represent an exponent of the number; para. 0087 - the computational cost of matrix-vector multiplication can be further reduced by reducing mantissa widths. 4-bit fixed point number can only represent values in the range [0001₂, 1111₂]; para. 0140 - lower precision number format such as a two- or three-bit precision block floating-point format can be used early in training and the precision of numbers used to represent activation values, weights, and/or gradients for the neural network can be increased for successive training operations, as indicated by the performance metric.). 
Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Botimer, to include wherein a number of available weight values is determined based on a bit precision of the weight kernel for the purpose of efficiently adjusting parameters including weights, improving CNN fine tuning, as taught by Darvish (0087 and 0140). Claim 9: Darvish further teaches or suggests wherein a bit precision of the weight kernel is less than or equal to 3 bits (see para. 0028 - NN weights and activation values can be represented in a lower-precision quantized format with an acceptable level of error introduced; para. 0039 - lowering the precision of a number can include reducing the range of values that can be used to represent an exponent of the number; para. 0087 - the computational cost of matrix-vector multiplication can be further reduced by reducing mantissa widths. 4-bit fixed point number can only represent values in the range [0001₂, 1111₂]; para. 0140 - lower precision number format such as a two- or three-bit precision block floating-point format can be used early in training and the precision of numbers used to represent activation values, weights, and/or gradients for the neural network can be increased for successive training operations, as indicated by the performance metric.). Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Botimer, to include wherein a bit precision of the weight kernel is less than or equal to 3 bits for the purpose of efficiently adjusting parameters including weights, improving CNN fine tuning, as taught by Darvish (0087 and 0140). Claim(s) 17: Claim(s) 17 correspond to Claim 9, and thus, Botimer, Mathew, and Darvish teach or suggest the limitations of claim(s) 17 as well. 
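Darvish's point that bit precision determines the number of available weight values can be illustrated with a generic uniform symmetric quantizer. The scheme below is a sketch under that assumption only; it is not Darvish's block floating-point format, and the names are invented:

```python
def quantize_weights(weights, bits):
    # With b bits (one reserved for sign), only 2**(b-1) - 1 positive
    # levels are available, so a 3-bit precision admits integer levels
    # in [-3, 3]: the bit precision directly caps the number of
    # distinct representable weight values.
    levels = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / levels
    return [round(w / scale) * scale for w in weights]

q = quantize_weights([0.9, -0.31, 0.05], bits=3)
# 0.9 maps to level 3, -0.31 to level -1, 0.05 to level 0 (scale 0.3)
```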
Claim(s) 10, 18, and 24 is/are rejected under 35 U.S.C. 103 as being unpatentable over Botimer, in view of Mathew, and further in view of Ruff, US Publication 2021/0049463 (“Ruff”). Claim 10: As indicated above, Botimer teaches or suggests the intermediate operation result of the first input plane corresponds to a multiplication operation of the first input plane, and the generating of the base planes comprising generating the base planes corresponding to the multiplication operation. Ruff further teaches or suggests through a shift operation and an add operation instead of directly performing the multiplication operation (see Fig. 1-11; para. 0007 - object of this invention to provide an alternative so that convolutional filter multiplications may be entirely replaced by simpler addition and bit shifting that are less costly in terms of silicon real estate and power consumption but critically without significant or indeed any loss of network accuracy and by so replacing these multiplications within the novel computational device then the present invention supplies a highly efficient novel computational device for deploying a complete CNN; para. 0009 - avoids any multiplication once the shared intermediate significand tensor has been computed and even that may be computed it will be shown without multiplication; para. 0010 – instead uses the elementwise addition operator of linear algebra to combine the intermediate maps by indexing with v and a (p,q) positional shift and since the addition operator is very inexpensive to compute then the novel convolutional computational device presented offers huge processing cost and power consumption advantage over the current state of the art convolutional accelerator devices; para. 
0030 - allows computation of the output O by a process of shifted-addition-and-accumulation one 2D slice at a time from the shared scaled significand tensor one such slice for each coefficient sequentially for a bank of convolutional filters W, and so by computing di once then the convolution result for each coefficient separately may be computed without multiplication and only using addition.). Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Botimer, to include a shift operation and an add operation instead of directly performing the multiplication operation for the purpose of efficiently performing convolution without multiplication, reducing convolution computational costs, as taught by Ruff (0007, 0009, and 0010). Claim(s) 18: Claim(s) 18 correspond to Claim 10, and thus, Botimer, Mathew, and Ruff teach or suggest the limitations of claim(s) 18 as well. Claim 24: As indicated above, Botimer teaches or suggests the intermediate operation result of the first input plane corresponds to a multiplication operation of the first input plane, and the generating of the base planes comprising generating the base planes corresponding to the multiplication operation. Ruff further teaches or suggests through a shift and add operation instead of directly performing the multiplication operation (see Fig. 1-11; para. 0007 - object of this invention to provide an alternative so that convolutional filter multiplications may be entirely replaced by simpler addition and bit shifting that are less costly in terms of silicon real estate and power consumption but critically without significant or indeed any loss of network accuracy and by so replacing these multiplications within the novel computational device then the present invention supplies a highly efficient novel computational device for deploying a complete CNN; para. 
0009 - avoids any multiplication once the shared intermediate significand tensor has been computed and even that may be computed it will be shown without multiplication; para. 0010 – instead uses the elementwise addition operator of linear algebra to combine the intermediate maps by indexing with v and a (p,q) positional shift and since the addition operator is very inexpensive to compute then the novel convolutional computational device presented offers huge processing cost and power consumption advantage over the current state of the art convolutional accelerator devices; para. 0030 - allows computation of the output O by a process of shifted-addition-and-accumulation one 2D slice at a time from the shared scaled significand tensor one such slice for each coefficient sequentially for a bank of convolutional filters W, and so by computing di once then the convolution result for each coefficient separately may be computed without multiplication and only using addition.). Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Botimer, to include a shift and add operation instead of directly performing the multiplication operation for the purpose of efficiently performing convolution without multiplication, reducing convolution computational costs, as taught by Ruff (0007, 0009, and 0010). Claim(s) 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Botimer, in view of Mathew, and further in view of Croxford. Claim 26: As indicated above, Botimer teaches or suggests generating base planes and for each value. Botimer does not explicitly disclose for each absolute value of the available weight values greater than one. Croxford teaches or suggests for each absolute value of the available weight values greater than one (see para. 
0035 - multiplying and accumulating 320 will result in a low or negative input value 325 to the activation function 330. This results in an activation output value 335 of zero i.e. the neuron is not 'activated'; para. 0041 - number of ways to order a kernel set, however one way is to calculate an absolute sum of the weights of each channel in the kernel; para. 0042 - absolute sum of the weights of each portion of the kernel is calculated and then the kernels having a higher absolute sum, representing the kernels which have the most significant impact when processed, are placed higher in the ordering than those having a lower absolute sum; para. 0062 - method of ordering the kernel channels 520 is to calculate an absolute sum of the weights of each channel of the kernel A, B, C. For example, kernel channel A, based on the weights shown in FIG. 5, would have an absolute sum of 5, kernel channel B will have an absolute sum of 2, and kernel channel C will have an absolute sum of 4. smaller weight values may be ignored as described above; para. 0063 - kernel channels 520 are ordered according to their absolute sum of weights.). Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Botimer, to include for each absolute value of the available weight values greater than one for the purpose of efficiently determining order and significance in a CNN based on weight values, improving CNN operation, as taught by Croxford (0041 and 0062). Response to Arguments Applicant’s further arguments have been considered but are not persuasive because the arguments do not correspond to the rationales as used in the current rejection. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew T McIntosh whose telephone number is (571)270-7790. The examiner can normally be reached M-Th 8:00am-5:30pm. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle can be reached at 571-272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ANDREW T MCINTOSH/Primary Examiner, Art Unit 2144

Prosecution Timeline

Feb 19, 2021
Application Filed
Jun 07, 2024
Non-Final Rejection — §103
Aug 23, 2024
Interview Requested
Aug 23, 2024
Response Filed
Sep 09, 2024
Examiner Interview Summary
Sep 09, 2024
Applicant Interview (Telephonic)
Nov 20, 2024
Final Rejection — §103
Jan 22, 2025
Examiner Interview Summary
Jan 22, 2025
Applicant Interview (Telephonic)
Jan 27, 2025
Response after Non-Final Action
Feb 20, 2025
Request for Continued Examination
Feb 27, 2025
Response after Non-Final Action
Jan 29, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602534
Method and System to Display Content from a PDF Document on a Small Screen
2y 5m to grant Granted Apr 14, 2026
Patent 12596757
NATIVE INTEGRATION OF ARBITRARY DATA SOURCES
2y 5m to grant Granted Apr 07, 2026
Patent 12572617
SYSTEM AND METHOD FOR THE GENERATION AND EDITING OF TEXT CONTENT IN WEBSITE BUILDING SYSTEMS
2y 5m to grant Granted Mar 10, 2026
Patent 12561191
TRAINING METHOD AND APPARATUS FOR FAULT RECOGNITION MODEL, FAULT RECOGNITION METHOD AND APPARATUS, AND ELECTRONIC DEVICE
2y 5m to grant Granted Feb 24, 2026
Patent 12547874
DEPLOYING PARALLELIZABLE DEEP LEARNING MODELS BY ADAPTING TO THE COMPUTING DEVICES
2y 5m to grant Granted Feb 10, 2026
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
77%
Grant Probability
95%
With Interview (+18.0%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 511 resolved cases by this examiner. Grant probability derived from career allow rate.
