Prosecution Insights
Last updated: April 19, 2026
Application No. 18/616,772

INPUT AND OUTPUT SPATIAL CROPPING OPERATIONS IN NEURAL PROCESSOR CIRCUITS

Status: Non-Final OA (§103)
Filed: Mar 26, 2024
Examiner: PEYTON, TAMMARA R
Art Unit: 2184
Tech Center: 2100 — Computer Architecture & Software
Assignee: Apple Inc.
OA Round: 1 (Non-Final)
Grant Probability: 91% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 91% (864 granted / 952 resolved; +35.8% vs TC avg — above average)
Interview Lift: +6.1% (moderate), based on resolved cases with interview
Typical Timeline: 2y 5m avg prosecution; 20 applications currently pending
Career History: 972 total applications across all art units

Statute-Specific Performance

§101: 6.5% (-33.5% vs TC avg)
§103: 63.2% (+23.2% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 5.3% (-34.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 952 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 6, 9, 10, 16, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Barnard et al. (EP 3480745) in view of Kuo et al. (US 2019/0220742).
It has been noted that a claimed invention is unpatentable if the differences between it and the prior art are "such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art." 35 U.S.C. § 103(a) (2000); KSR Int'l Co. v. Teleflex Inc., 127 S. Ct. 1727, 1734 (2007); Graham v. John Deere Co., 383 U.S. 1, 13-14 (1966). In Graham, the Court held that the obviousness analysis is bottomed on several basic factual inquiries: "[(1)] the scope and content of the prior art are to be determined; [(2)] differences between the prior art and the claims at issue are to be ascertained; and [(3)] the level of ordinary skill in the pertinent art resolved." 383 U.S. at 17. See also KSR, 127 S. Ct. at 1734. "The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results." KSR, at 1739. "When a work is available in one field of endeavor, design incentives and other market forces can prompt variations of it, either in the same field or in a different one. If a person of ordinary skill in the art can implement a predictable variation, § 103 likely bars its patentability." Id. at 1740. "For the same reason, if a technique has been used to improve one device, and a person of ordinary skill in the art would recognize that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill." Id. "Under the correct analysis, any need or problem known in the field of endeavor at the time of invention and addressed by the patent can provide a reason for combining the elements in the manner claimed." Id. at 1742.
As per claims 1 and 9, Barnard teaches a system-on-a-chip circuit, comprising: a neural processor circuit (Barnard, [0023-0027]; see also fig. 3, "The hardware implementation 300 comprises a plurality of convolution engines 302" [a plurality of neural engine circuits]), comprising: a data processor circuit configured to perform multiple modes of spatial cropping (Barnard, [0052-0065], teaches reducing a spatial size based on a version of the type of first or second input; specifically, note fig. 8, "The normalisation module 810 is configured to receive one of the following as input data: the accumulation output (via the element-wise operations module 806) (e.g. when a convolution layer is processed in the current hardware pass and neither an element-wise layer nor an activation layer is processed in the current hardware pass); and [stored] partial results related to 128 different filters."); wherein, in generating the one or more task descriptors, a task descriptor of the one or more task descriptors indicates the mode of spatial crop and the determined input or output crop offsets (Barnard, [0052-0065], teaches that "[t]he pooling module 812 may receive the normalised data from the normalisation module 810 ... [t]he pooling module 812 is configured to perform a pooling function, such as, but not limited to, a max or mean function, on the received data to produce pooled data. The purpose of a pooling layer is to reduce the spatial size of the representation" [in the pooling mode, the neural processor engine]).

Barnard does not expressly teach "using the one or more task descriptors, the data processor circuit and the direct memory access circuit to perform the spatial crop on the data according to the determined input or output crop offsets." However, Kuo teaches: a planar (DMA) engine circuit coupled to the plurality of neural engine circuits and configured to operate on one or more tasks in parallel with the plurality of neural engine circuits (Kuo, [0031]; see also fig. 2, "A block (e.g., block 211) is a basic unit of computation. For example, an engine (e.g., the convolution engine 111) may include an array of multiply-and-accumulate (MAC) circuits, and the size of a block may be equal to the size of the MAC array." Thus, types of task descriptor operations on a block can be performed in parallel within a neural engine circuit. The input or output crop offsets are based on the size of an input tile determined by the size of the needed buffer (e.g., the convolution buffer 151); for example, "an entire input tile should fit into the convolution buffer 151."); the neural engine circuit is therefore operable in one of two or more modes that include a pooling mode and an elementwise mode configured to generate a second output (Kuo, [0024]; see also fig. 1, "The DLA 100 includes multiple engines, each of which performs one type of neural network operations. Each engine includes hardware circuits (e.g., multipliers, adders, accumulators, etc.) for performing mathematical computations. In this example, the DLA 100 includes a convolution engine 111 for performing convolution operations, an activation engine 112 for performing element-wise mathematical operations (e.g., rectification (ReLU), batch normalization (BN), etc.), a pooling engine 113 for performing downsampling operations" [the planar engine circuit operable in one of two or more modes that include a pooling mode and an elementwise mode configured to generate a second output], "and a mathematical function engine 114 (e.g., for computing trigonometry functions, max/min functions, absolute values, etc.).").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Barnard with the teachings of Kuo; the motivation to do so would be to have a hardware accelerator that is able to rely on relevant task descriptor operations used for input to fit in buffers for fast operational access (Kuo, [0018-0021], "[I]nput data to the DLA [deep learning accelerator] is retrieved from a system memory external to the DLA, and stored in a buffer memory internal to the DLA. Due to the limited buffer size, only a fraction of the input data can be stored in the buffer memory at any given point of time. Thus, the input data may be partitioned into multiple tiles, and the buffer memory may store one or more tiles at the same time ... [t]he DLA includes multiple different engines performing different types of neural network computations. Each engine processes the input feature map on a tile-by-tile basis ... [t]hus, the engines may process the tiles in parallel, passing data from one engine to another via the buffer memory to reduce system memory access.").

As per claim 2, Barnard-Kuo teaches the read circuit configured to fetch input data from the system memory and store the input data into a buffer; and a write circuit, the write circuit configured to write data from the buffer to the system memory (Kuo, [0018-0021], "[I]nput data to the DLA [deep learning accelerator] is retrieved from a system memory external to the DLA, and stored in a buffer memory internal to the DLA. Due to the limited buffer size, only a fraction of the input data can be stored in the buffer memory at any given point of time. Thus, the input data may be partitioned into multiple tiles, and the buffer memory may store one or more tiles at the same time ... [t]he DLA includes multiple different engines performing different types of neural network computations.").

As per claim 6, Kuo teaches wherein the data processor circuit further comprises a realignment shifter circuit, the realignment shifter circuit configured to perform shifting of the data by a specified number of places in a given direction (Kuo, para. [0031]; see also fig. 2, "A block (e.g., block 211) is a basic unit of computation. For example, an engine (e.g., the convolution engine 111) may include an array of multiply-and-accumulate (MAC) circuits, and the size of a block may be equal to the size of the MAC array." Thus, types of task descriptor operations on a block can be performed in parallel within a neural engine circuit.).

As per claims 10, 16, 17, and 18, see the rejection of the claims above.

Allowable Subject Matter

Claims 3-5, 7, 8, 11-15, 19, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

RELEVANT ART CITED BY THE EXAMINER

The following prior art made of record and relied upon is cited to establish the level of skill in the applicant's art and those arts considered reasonably pertinent to applicant's disclosure. See MPEP 707.05(c).

3. KIM ET AL. (US 2023/0281960) teaches image encoding/decoding using a feature map of an artificial neural network. (Abstract)

Conclusion

The examiner requests, in response to this office action, that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.

When responding to this office action, applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of art disclosed by the references cited or the objections made.
He or she must also show how the amendments avoid such references or objections. See 37 C.F.R. § 1.111(c). In amending in reply to a rejection of claims in an application or patent under reexamination, the applicant or patent owner must clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. The applicant or patent owner must also show how the amendments avoid such references or objections.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAMMARA R PEYTON, whose telephone number is (571) 272-4157. The examiner can normally be reached 9am-5pm EST, M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henry Tsai, can be reached at 571-272-4176. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TAMMARA R PEYTON/Primary Examiner, Art Unit 2184 February 21, 2026
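The operation at the center of this §103 dispute — performing a spatial crop on a feature map according to determined input or output crop offsets — can be illustrated with a minimal sketch. All names below (`spatial_crop`, the offset parameters) are hypothetical illustrations, not the applicant's claimed implementation or the cited references' circuits:

```python
# Hypothetical sketch of a spatial crop driven by row/column offsets,
# as a task descriptor carrying "input or output crop offsets" might
# direct. Assumed names; not the claimed hardware implementation.

def spatial_crop(fmap, top, left, out_h, out_w):
    """Crop a 2-D feature map (list of rows), keeping an out_h x out_w
    window whose upper-left corner sits at (top, left)."""
    return [row[left:left + out_w] for row in fmap[top:top + out_h]]

# 8x8 input feature map with value r*8 + c at row r, column c
fmap = [[r * 8 + c for c in range(8)] for r in range(8)]
cropped = spatial_crop(fmap, top=2, left=3, out_h=4, out_w=4)
assert len(cropped) == 4 and len(cropped[0]) == 4
assert cropped[0][0] == 2 * 8 + 3  # value at offset (2, 3)
```

In hardware terms, the crop offsets would translate into adjusted DMA start addresses and strides rather than Python slices, but the indexing relationship is the same.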

Prosecution Timeline

Mar 26, 2024
Application Filed
Feb 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602236
Device Customization While Remaining In An Integral Outer Package Using NFC or RFID To Update Or Upgrade Firmware Prior To Initial Power-UP
2y 5m to grant Granted Apr 14, 2026
Patent 12596672
SYSTEM MANAGEMENT SOFTWARE WITH SYSTEM VITAL PRODUCT DATA (SVPD) FOR COMPARING FIRST AND SECOND SERVER IDENTITY DATA
2y 5m to grant Granted Apr 07, 2026
Patent 12596624
SYSTEM POWER MANAGEMENT OF DEVICES COUPLED TO A PORT OF AN INFORMATION HANDLING SYSTEM BY IDENTIFYING ASSOCIATED CONTEXTUAL DATA
2y 5m to grant Granted Apr 07, 2026
Patent 12591439
MANAGING A CONTAINERIZED SERVICE USING A SYSTEM MANAGER AND A DEPLOYMENT ENGINE INDICATING AN OPERATIONAL STATUS OF THE ONE OR MORE CONTAINERS
2y 5m to grant Granted Mar 31, 2026
Patent 12585475
AUTOMATED BOOT IMAGE CONFIGURATION AND BOOTING VIA A BASEBOARD MANAGEMENT CONTROLLER IN RESPONSE TO AN UNSOLICITED BOOT IMAGE
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 91%
With Interview: 97% (+6.1%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 952 resolved cases by this examiner. Grant probability derived from career allow rate.
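The headline figures above follow from simple arithmetic on the examiner's career data. A sketch of the derivation (the rounding and the additive interview lift are assumptions about how such dashboards typically compute these numbers, not this vendor's disclosed model):

```python
# Assumed derivation of the projection figures from the career data
# shown above (864 granted of 952 resolved; +6.1-point interview lift).

granted, resolved = 864, 952
interview_lift = 6.1  # percentage points, per the examiner chart

allow_rate = granted / resolved                # career allow rate, ~0.9076
grant_probability = round(allow_rate * 100)    # 91
with_interview = round(allow_rate * 100 + interview_lift)  # 97

assert grant_probability == 91
assert with_interview == 97
```

The 91% grant probability is simply the career allow rate, and the 97% with-interview figure adds the observed lift before rounding.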
