Prosecution Insights
Last updated: April 19, 2026
Application No. 17/653,095

DATA PROCESSING APPARATUS

Non-Final OA: §103, §112
Filed: Mar 01, 2022
Examiner: DE LA GARZA, CARLOS HEBERTO
Art Unit: 2182
Tech Center: 2100 — Computer Architecture & Software
Assignee: DENSO CORPORATION
OA Round: 2 (Non-Final)
Grant Probability: 60% (Moderate)
OA Rounds: 2-3
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 60% (6 granted / 10 resolved; +5.0% vs TC avg)
Interview Lift: +50.0% (strong; resolved cases with vs. without interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 36 (across all art units; 26 currently pending)

Statute-Specific Performance

§101: 15.9% (-24.1% vs TC avg)
§103: 42.3% (+2.3% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§112: 24.4% (-15.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 10 resolved cases
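As a consistency check on the panel above, each statute-specific rate minus its reported "vs TC avg" delta should recover the Tech Center baseline. A minimal sketch using only the figures shown (the dictionary layout is illustrative):

```python
# Statute-specific rates and their reported deltas vs the Tech Center
# average, taken directly from the panel above.
rates = {
    "§101": (15.9, -24.1),
    "§103": (42.3, +2.3),
    "§102": (15.9, -24.1),
    "§112": (24.4, -15.6),
}

# Implied TC average = examiner rate minus the reported delta.
implied = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(implied)  # every statute implies the same 40.0% baseline
```

All four rows back out the same 40.0% value, which suggests the chart's "black line" Tech Center average estimate sits near 40% for each statute.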

Office Action

Rejections: §103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is non-final and is in response to the claims filed 03/01/2022. Claims 1-14 are currently pending, of which claims 1-14 are currently rejected.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1 and 8 recite the limitation "the other of the results" in page 21 line 10 and page 23 line 3. There is insufficient antecedent basis for this limitation in the claims. Claims 1 and 8 also recite the limitation "the data stored in the external memory" in page 21 line 5 and page 22 line 23. There is insufficient antecedent basis for this limitation in the claims. Claims 2-7 and 9-14 inherit the same deficiency as claim 1 by reason of dependence.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 7, 8, 11, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Kfir et al. (U.S. Patent Application Publication No. US 20190095776 A1), hereinafter "Kfir", in view of Chen Zhang in NPL: Optimizing FPGA-based Accelerator Design for Deep Convolutional Neural Networks (https://dl.acm.org/doi/pdf/10.1145/2684746.2689060), hereinafter "Zhang".

Regarding Claim 1, Kfir teaches: A data processing apparatus comprising: … an input buffer unit (Fig. 1, e.g., Input buffer 22 and Fetch and shift 1 and 2 (input buffer unit)) …; an M x M data processing unit that performs M x M convolution processing using the data stored in the input buffer unit (Fig. 1, e.g., Conv 0 28 at stage 26 (M x M data processing unit) receives data from Shift register 24 (input buffer unit); ¶0025, e.g., processing elements perform convolution on input data with a respective kernel); an N x N data processing unit that performs N x N convolution processing using the data stored in the input buffer unit (Fig. 1, e.g., elements 28-37 on stages 34, 36, and 38 (N x N data processing unit) receive data from Fetch and Shift (input buffer unit); ¶0025, e.g., processing elements perform convolution on input data with a respective kernel); a first output buffer unit that stores one of results of processing by the M x M data processing unit and the N x N data processing unit (Fig. 1, e.g., Int Buff1 0 in stage 30 (first output buffer unit) stores the result from Conv0 in stage 26 (M x M data processing unit)); and a second output buffer unit that stores the other of the results of processing by the M x M data processing unit and the N x N data processing unit (Fig. 1, e.g., Out Buff0 in stage 40 (second output buffer unit) stores the results from Conv0 in stage 34, Int Buff2 0 in stage 36, and Pool0 38 (N x N data processing unit)), wherein the result of processing stored in the first output buffer unit is stored in the input buffer unit (Fig. 1, e.g., Fetch and Shift 2 (input buffer unit) in stage 32 receives data from Int Buff1 0 in stage 30 (first output buffer unit)), and …

Kfir does not teach: an external memory that stores processing target data; an input buffer unit that stores at least part of the data stored in the external memory; the result of processing stored in the second output buffer unit is transferred to the external memory.

However, Zhang teaches: an external memory that stores processing target data (Fig. 4, e.g., external memory is shown; Page 163, Section 3.1 Design Overview, e.g., data for processing (processing target data) is stored in external memory); and an input buffer unit that stores at least part of the data stored in the external memory (Page 163, Section 3.1 Design Overview, e.g., data from external memory is stored in on-chip buffers).

Therefore, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to which said subject matter pertains to combine the DRAM external memory that provides input data to input buffers, as taught by Zhang, with the convolutional neural network as taught by Kfir. Additionally, Kfir suggests the input buffer receiving input data such as pixels of a color image (see Kfir ¶0024). One would have been motivated to combine these references because both references disclose convolution operations for image processing using convolutional neural networks, and Zhang enhances the model of Kfir by using a DRAM to provide data to be processed to input buffers.

Zhang also teaches: the result of processing stored in the second output buffer unit is transferred to the external memory (Page 168, Section 4.3 Memory Sub-System, last paragraph, e.g., resulting output feature maps are written to DRAM (external memory), and results are stored in output buffer 1; Page 167, Section 4.1 System Overview, e.g., DRAM is used for external storage (external memory)).

Therefore, it would also have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to combine the ping-pong operation using double buffers to store results in DRAM, as taught by Zhang, with the Int Buff1 0 in stage 30 and Out Buff 0 in stage 40 (first and second output buffer units) as taught by Kfir. Additionally, Kfir suggests the output data is sent to the next processing stage by an output sender unit (see Kfir ¶0027). One would have been motivated to combine these references for the same reasons given above.

Regarding Claim 4, Kfir in view of Zhang teach: The data processing apparatus according to claim 1, wherein the processing target data is data defined by three or more orthogonal axes (Kfir: Fig. 3, e.g., inputs from Stage 22 (input buffer) consist of X and Y axes (first and second axes); ¶0034, e.g., A1, B1, and C1 represent the channel (third axis)), and the M x M convolution processing or the N x N convolution processing is performed on a first axis and a second axis in the processing target data (Kfir: ¶0032, e.g., processing unit 28 (M x M convolution processing) performs convolution using a 3x3x3 kernel and hence must use at least the first and second axes of the input data (processing target data)).

Regarding Claim 7, Kfir in view of Zhang teach: The data processing apparatus according to claim 1, wherein the N x N convolution processing and the M x M convolution processing are performed as part of image processing using a neural network (Kfir: ¶0010; ¶0013).

With regards to Claims 8, 11, and 14, they are media versions of the data processing apparatus claims above (claims 1, 4, and 7, respectively), wherein all claim limitations have been addressed and/or covered in the cited areas. Accordingly, these claims are rejected for at least the same reasons.

Claims 2, 3, 9, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Kfir in view of Zhang, further in view of Jason Brownlee in NPL: A Gentle Introduction to 1×1 Convolutions to Manage Model Complexity (https://machinelearningmastery.com/introduction-to-1x1-convolutions-to-reduce-the-complexity-of-convolutional-neural-networks/), hereinafter "Brownlee".
With regards to Claim 2, Kfir in view of Zhang teach: The data processing apparatus according to claim 1, wherein M and N are integers greater than or equal to 1 (¶0006, e.g., each PE performs convolutions using respective kernels; ¶0032, e.g., 3x3 (integers greater than 1) kernels are used, but other kernel sizes may be used), …

Kfir in view of Zhang do not explicitly teach: … and M > N. However, Brownlee teaches this limitation (Downsample Feature Maps with 1x1 Filters, e.g., 1x1 filters can be used at any point in a CNN to control the number of feature maps; the example code in the section "Example of Decreasing Feature Maps" decreases the feature maps of the output of the 3x3 filter).

Therefore, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to which said subject matter pertains to combine the 1x1 filter in one of the PEs, as taught by Brownlee, with the Conv1 28 (included in the N x N data processing unit) as taught by Kfir. One would have been motivated to combine these references because both references disclose convolution operations for image processing using convolutional neural networks, and Brownlee enhances the model of Kfir in view of Zhang by making it possible to "control the number of feature maps" (Brownlee: Downsample Feature Maps with 1x1 Filters).

With regards to Claim 3, Kfir in view of Zhang in view of Brownlee teach: The data processing apparatus according to claim 2, wherein N = 1 (Kfir: ¶0006, e.g., each PE performs convolutions using respective kernels; ¶0032, e.g., 3x3 (integers greater than 1) kernels are used, but other kernel sizes may be used; Brownlee: Downsample Feature Maps with 1x1 Filters, e.g., 1x1 filters can be used at any point in a CNN to control the number of feature maps).

With regards to Claims 9 and 10, they are media versions of the data processing apparatus claims above (claims 2 and 3, respectively), wherein all claim limitations have been addressed and/or covered in the cited areas. Accordingly, these claims are rejected for at least the same reasons.

Claims 5, 6, 12, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Kfir in view of Zhang, further in view of Arthur Stoutchinin in NPL: Optimally Scheduling CNN Convolutions for Efficient Memory Access (https://arxiv.org/pdf/1902.01492), hereinafter "Stoutchinin".

With regards to Claim 5, Kfir in view of Zhang teach: … the result of processing by the M x M data processing unit is stored in the [first] output buffer unit, and the result of processing by the N x N data processing unit is stored in the [second] output buffer unit (Fig. 1, e.g., Out Buff 31 in stage 40 (second output buffer unit) stores the results from Conv 0, Int Buff2 0, and Pool 0 from stages 34, 36, and 38 (N x N data processing unit), and Int Buff1 0 in stage 30 (first output buffer unit) stores the result from Conv0 in stage 26 (M x M data processing unit)).

Kfir in view of Zhang do not teach: The data processing apparatus according to claim 4, wherein if the number of data items belonging to a third axis in the data of result of the M x M convolution processing is smaller than the number of data items belonging to the third axis in the data of result of the N x N convolution processing, the result of processing by the M x M data processing unit is stored in the second output buffer unit, and the result of processing by the N x N data processing unit is stored in the first output buffer unit.

However, in the same field of endeavor, Stoutchinin teaches local buffering requirements for buffers implemented in a convolutional neural network. These local buffering requirements depend on the tile sizes, which include the channel size (third axis). Stoutchinin explains:

"In the first step, we compute local buffering requirements for each array reference at different loop levels across an enumeration of different loop orders and loop tile sizes. This step is independent of a particular CNN layer shape because the local buffering requirements at any loopnest level depend only on loop order and tile sizes of different loops. In the second step, using these pre-enumerated buffer requirements, we analyze a particular CNN layer, exhaustively searching for a best combination of buffering levels for the three CNN arrays under different local buffer capacities." (Stoutchinin: Page 6, Section C, Dataflow schedule selection procedure)

Therefore, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to which said subject matter pertains to combine the dataflow schedule selection procedure as taught by Stoutchinin with the convolutional neural network 20 as taught by Kfir. One would have been motivated to combine these references because both references disclose convolution operations for image processing using convolutional neural networks, and Stoutchinin enhances the model of Kfir in view of Zhang by allowing for better buffer allocation for tiles. Combining the local buffering requirements, which depend on tile size, with the Int Buff1 0 in stage 30 (first output buffer unit) and the Out Buff 31 in stage 40 (second output buffer unit) would cause the buffers to be switched depending on the tile size (result of convolution). Hence, Kfir in view of Zhang in view of Stoutchinin teach Claim 5 in its entirety.

With regards to Claim 6, Kfir in view of Zhang teach: … the result of processing by the N x N data processing unit is stored in the second output buffer unit (Fig. 1, e.g., Out Buff 31 in stage 40 (second output buffer unit) stores the results from Conv 0, Int Buff2 0, and Pool 0 from stages 34, 36, and 38 (N x N data processing unit)), and the result of processing by the M x M data processing unit is stored in the first output buffer unit (Fig. 1, e.g., Int Buff1 0 in stage 30 (first output buffer unit) stores the result from Conv0 in stage 26 (M x M data processing unit)).

Kfir in view of Zhang do not teach: The data processing apparatus according to claim 4, wherein if the number of data items belonging to a third axis in the data of result of the N x N convolution processing is smaller than the number of data items belonging to the third axis in the data of result of the M x M convolution processing, …

However, in the same field of endeavor, Stoutchinin teaches local buffering requirements that depend on tile sizes, including the channel size (third axis), as quoted above with respect to claim 5 (Stoutchinin: Page 6, Section C, Dataflow schedule selection procedure). Therefore, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to combine the dataflow schedule selection procedure as taught by Stoutchinin with the convolutional neural network 20 as taught by Kfir, for the same reasons given with respect to claim 5.
Hence, Kfir in view of Zhang in view of Stoutchinin teach Claim 6 in its entirety.

With regards to Claims 12 and 13, they are media versions of the data processing apparatus claims above (claims 5 and 6, respectively), wherein all claim limitations have been addressed and/or covered in the cited areas. Accordingly, these claims are rejected for at least the same reasons.

Prior Art Made of Record

NPL: Accelerating Deep Convolutional Neural Networks Using Specialized Hardware (Kalin Ovtcharov) – teaches a top-level architecture of a convolutional neural network accelerator, where each layer is computed using the same processing units. This is achieved by feeding the output layer back into the Multi-Banked Input Buffer for processing of the next layer. See the section "Accelerating Deep Convolutional Neural Networks in the Datacenter" and Figure 3.

U.S. Patent No. US 10346093 B1 (Wu et al.) – teaches a convolutional neural network architecture that includes N modules coupled in a pipeline. Layers in the neural network include Inception layers, which contain several filters, including 3x3 and 1x1. See Fig. 4, Column 2 Lines 21-25, and Column 7 Lines 1-35.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARLOS H DE LA GARZA, whose telephone number is (571) 272-0474. The examiner can normally be reached Monday-Friday, 9AM-5:30PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Caldwell, can be reached at (571) 272-3702. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.H.D./
Carlos H. De La Garza, Examiner, Art Unit 2182
(571) 272-0474

/ANDREW CALDWELL/
Supervisory Patent Examiner, Art Unit 2182
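To make the claim 1 dataflow that the rejection maps onto Kfir and Zhang easier to follow, the claimed loop (external memory → input buffer → parallel M x M and N x N convolution units → two output buffers, with the first buffer feeding back into the input buffer and the second written out to external memory) can be sketched behaviorally. This is a minimal illustrative model, not code from the application or the cited references; all names, shapes, and the naive convolution helper are hypothetical.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid' 2-D convolution placeholder (illustrative only)."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

class DataProcessingApparatus:
    """Hypothetical model of the dataflow recited in claim 1."""

    def __init__(self, m_kernel, n_kernel):
        self.m_kernel = m_kernel   # M x M kernel
        self.n_kernel = n_kernel   # N x N kernel
        self.input_buffer = None   # input buffer unit
        self.out_buf1 = None       # first output buffer unit
        self.out_buf2 = None       # second output buffer unit

    def load(self, external_memory):
        # Input buffer stores at least part of the external-memory data.
        self.input_buffer = external_memory.copy()

    def step(self):
        # Both processing units read from the shared input buffer.
        self.out_buf1 = conv2d_valid(self.input_buffer, self.m_kernel)
        self.out_buf2 = conv2d_valid(self.input_buffer, self.n_kernel)
        # The first output buffer feeds back into the input buffer...
        self.input_buffer = self.out_buf1
        # ...while the second output buffer is transferred to external memory.
        return self.out_buf2

# Illustrative run: a 3x3 (M x M) unit and a 1x1 (N x N) unit on an 8x8 input.
apparatus = DataProcessingApparatus(np.ones((3, 3)), np.ones((1, 1)))
apparatus.load(np.ones((8, 8)))
to_external = apparatus.step()  # second buffer's result leaves the apparatus
```

The `step()` method mirrors the claim language: both units consume the same input buffer, the first result loops back for the next processing pass, and the second result is what is written out.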

Prosecution Timeline

Mar 01, 2022
Application Filed
Jul 24, 2025
Non-Final Rejection — §103, §112
Sep 30, 2025
Interview Requested
Oct 08, 2025
Applicant Interview (Telephonic)
Oct 08, 2025
Examiner Interview Summary
Nov 11, 2025
Response Filed
Dec 12, 2025
Non-Final Rejection — §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 60%
With Interview: 99% (+50.0%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 10 resolved cases by this examiner. Grant probability derived from career allow rate.
