Prosecution Insights
Last updated: April 19, 2026
Application No. 18/212,347

MAPPING NEURAL NETWORKS TO HARDWARE

Non-Final OA (§101, §103)
Filed: Jun 21, 2023
Examiner: ABRISHAMKAR, KAVEH
Art Unit: 2494
Tech Center: 2400 — Computer Networks
Assignee: Imagination Technologies Limited
OA Round: 1 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 78% (797 granted / 1020 resolved), +20.1% vs TC avg (above average)
Interview Lift: +16.9% across resolved cases with interview (strong)
Typical Timeline: 3y 3m avg prosecution; 27 currently pending
Career History: 1047 total applications across all art units

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 39.7% (-0.3% vs TC avg)
§102: 22.4% (-17.6% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)
Tech Center averages are estimates; based on career data from 1020 resolved cases.
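The per-statute figures above are reported as deltas against a Tech Center average, so the implied TC baseline can be recovered by subtracting each delta. A quick consistency check, assuming the dashboard's "vs TC avg" deltas are simple additive offsets:

```python
# Consistency check on the per-statute allowance figures: each examiner rate
# minus its reported delta should recover the Tech Center baseline.
# (Assumes the "vs TC avg" deltas are additive offsets in percentage points.)
examiner_rate = {"101": 12.4, "103": 39.7, "102": 22.4, "112": 9.6}
delta_vs_tc   = {"101": -27.6, "103": -0.3, "102": -17.6, "112": -30.4}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # every statute implies the same 40.0% TC baseline
```

All four statutes point to the same ~40% baseline, which suggests the deltas are computed against a single Tech Center-wide estimate rather than per-statute averages.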

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

1. This action is in response to the communication filed on June 21, 2023. Claims 1-14 were originally received for consideration. No preliminary amendments to the claims have been received.

2. Claims 1-14 are currently pending consideration.

Information Disclosure Statement

3. Initialed and dated copies of Applicant's IDS (form 1449), received on 2/21/2024 and 3/01/2024, are attached to this Office Action.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

4. Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1

The claim recites a method, which is one of the four statutory categories of invention (a process). The method involves mapping a neural network to hardware using a binary tree by determining a starting value of a current depth within the binary tree, arranging the coefficients into groups, calculating a compressed size of at least one of the coefficient groups, determining whether termination criteria are satisfied, and either updating the depth and repeating the steps or outputting data defining each of the hardware passes. Under the broadest reasonable interpretation, the terms of the claim are presumed to have their plain meaning consistent with the specification as it would be interpreted by one of ordinary skill in the art (see MPEP 2111).
The claim does not provide any details about how the mapping of the neural network is actually performed; it merely defines the mapping generally in the preamble and proceeds to define the binary tree in the body of the claim. Under the broadest reasonable interpretation, the mapping and the steps of determining, arranging, and outputting fall within the mental-process grouping of abstract ideas because they cover concepts that can be performed in the human mind, including determining, arranging, calculating, and outputting (see MPEP 2106.04(a)(2), subsection III). The claim, under its broadest reasonable interpretation, recites a mental process and is directed to the abstract idea of mapping data using a binary tree.

There is no recitation of additional elements in claim 1 which integrate the judicial exception into a practical application. Outputting data is insignificant extra-solution activity (see MPEP 2106.04(d)), and claim 1 does not recite any additional elements that impose a meaningful limit on the judicial exception.

The claim also does not recite any elements which provide an inventive concept. The other elements of the claim, contained in the preamble, state storing coefficients in memory and reading a subset of coefficients from an external memory; these are simply generically recited computing elements for implementing the abstract idea. The claim recites outputting data, but this is insignificant post-solution activity. The claims do not recite any additional elements which, individually or in combination, amount to significantly more. Furthermore, as argued above with respect to Step 2A, using generic computer functions to execute an abstract idea does not add significantly more. The claim is not patent eligible.

The claims dependent on claim 1 do not recite any additional steps which, individually or in combination with the inherited limitations of claim 1, amount to significantly more.
Claim 2 recites determining a starting value of a current depth within the binary tree. There are no additional elements recited in claim 2 that, individually or in combination with other elements, create a practical application or amount to significantly more than the abstract idea.

Claim 3 recites determining a starting value of a current depth within the binary tree by setting the starting value of the current depth to a maximum depth of the binary tree. This is merely setting a data point. There are no additional elements recited in claim 3 that, individually or in combination with other elements, create a practical application or amount to significantly more than the abstract idea.

Claim 4 defines the maximum depth of the binary tree. There are no additional elements recited in claim 4 that, individually or in combination with other elements, create a practical application or amount to significantly more than the abstract idea.

Claim 5 recites the minimum group size definition. There are no additional elements recited in claim 5 that, individually or in combination with other elements, create a practical application or amount to significantly more than the abstract idea.

Claim 6 further defines how to determine the starting depth value by compressing and dividing the coefficients. This is merely a calculation which can be done in the human mind or using pen and paper. There are no additional elements recited in claim 6 that, individually or in combination with other elements, create a practical application or amount to significantly more than the abstract idea.

Claim 7 discloses the outputting of the data. There are no additional elements recited in claim 7 that, individually or in combination with other elements, create a practical application or amount to significantly more than the abstract idea.

Claim 8 discloses how the calculation of the compressed size of coefficients is performed. These are again merely calculations that can be performed in the human mind or using pen and paper. There are no additional elements recited in claim 8 that, individually or in combination with other elements, create a practical application or amount to significantly more than the abstract idea.

Claim 9 discloses when the data is output, by comparing the compressed size of the selected groups and merging the groups if the compressed size satisfies the hardware size constraint. There is no disclosure of how the hardware size constraint is retrieved. There are no additional elements recited in claim 9 that, individually or in combination with other elements, create a practical application or amount to significantly more than the abstract idea.

Claim 10 discloses updating the current depth and defines the termination criteria. There are no additional elements recited in claim 10 that, individually or in combination with other elements, create a practical application or amount to significantly more than the abstract idea.

Claim 11 discloses decreasing the current depth. There are no additional elements recited in claim 11 that, individually or in combination with other elements, create a practical application or amount to significantly more than the abstract idea.

Claim 12 discloses defining the hardware size constraint as a size of a buffer. This is merely a generic computer component and does not add significantly more than the abstract idea.

Claim 13 is analogous to claim 1 and does not recite eligible subject matter for the reasons provided in the rejection of claim 1.

Claim 14 is analogous to claim 1 and does not recite eligible subject matter for the reasons provided in the rejection of claim 1. Claim 14 discloses a processor and a memory, but both are generic computing components and do not rescue the claim from being directed to an abstract idea.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (U.S. Patent Pub. No. US 2020/0026992) in view of An et al. (U.S. Patent Pub. No. US 2017/0272750).

Regarding claim 1, Zhang discloses: A method of mapping a neural network to hardware (paragraphs 0235-0239: mapping a neural network into hardware based on a splitting algorithm wherein compression is also provided) comprising using a binary tree to assess how to split a layer of the neural network into a plurality of hardware passes, each hardware pass reading a subset of coefficients of the layer from external memory, wherein the coefficients are stored in compressed form in the memory and each node in the binary tree corresponds to a different subset of the coefficients, wherein using the binary tree comprises:

Zhang does not disclose the use of a binary tree, and specifically the steps of (i) determining a starting value of a current depth within the binary tree; (ii) arranging the set of coefficients into groups, each group corresponding to a node at the current depth; (iii) calculating a compressed size of at least one group of coefficients at the current depth; (iv) determining whether termination criteria are satisfied, at least one of the termination criteria being based on a comparison between the calculated compressed size and a hardware size constraint; (v) in response to determining that the termination criteria are not satisfied, updating the current depth and repeating steps (ii)-(iv); and (vi) in response to determining that the termination criteria are satisfied, outputting data defining each of the plurality of hardware passes, wherein the data is dependent upon the current depth.

Zhang discloses splitting the neural network layer into smaller units (paragraph 0014) based on hardware constraints (paragraphs 0181-0186), performing network compression (paragraphs 0036, 0047), and outputting a parameter file for the hardware neural network (paragraph 0139). However, Zhang does not use a binary tree. In an analogous art, An discloses a binary tree partitioning process which recursively generates binary tree leaf nodes until a termination condition is met (see Abstract, paragraphs 0028-0034). It would have been obvious to one of ordinary skill in the art to use a binary tree in the system of Zhang in order to improve the efficiency of the neural network (An: paragraph 0010).

Claim 2 is rejected as applied above in rejecting claim 1.
Furthermore, An discloses: The method according to claim 1, wherein determining a starting value of a current depth within the binary tree comprises setting the starting value of the current depth to a depth of one and wherein updating the current depth comprises increasing the current depth (paragraph 0015: the depth associated with the tree dictates the termination condition).

Claim 3 is rejected as applied above in rejecting claim 1. Furthermore, An discloses: The method according to claim 1, wherein determining a starting value of a current depth within the binary tree comprises setting the starting value of the current depth to a maximum depth of the binary tree and wherein updating the current depth comprises decreasing the current depth (paragraph 0015: the depth associated with the tree dictates the termination condition).

Claim 4 is rejected as applied above in rejecting claim 3. Furthermore, An discloses: The method according to claim 3, wherein the maximum depth of the binary tree is defined by a minimum group size (paragraph 0015: the depth associated with the tree dictates the termination condition).

Claim 5 is rejected as applied above in rejecting claim 4. Furthermore, Zhang discloses: The method according to claim 4, wherein the minimum group size is defined by a compression method used to compress the coefficients for storage in the external memory (paragraphs 0036, 0047: compression operation).

Claim 6 is rejected as applied above in rejecting claim 1. Furthermore, Zhang discloses: The method according to claim 1, wherein determining a starting value of a current depth within the binary tree comprises: compressing all the coefficients of the layer to determine a compressed size of the layer (paragraphs 0036, 0047: compression operation); and dividing the compressed size of the layer by the hardware size constraint (paragraphs 0036, 0047: compression operation).

Claim 7 is rejected as applied above in rejecting claim 1.
Furthermore, Zhang discloses: The method according to claim 1, wherein outputting data defining each of the plurality of hardware passes comprises: determining a number of coefficients in a group at the current depth (paragraph 0036: weighting coefficients); increasing the number of coefficients in at least one group (paragraph 0036: weighting coefficients); calculating a compressed size of the at least one group (paragraph 0036: weighting coefficients); and in response to determining that the compressed size satisfies the hardware size constraint, outputting data defining each of the plurality of hardware passes based on the increased number of coefficients in the at least one group (paragraph 0036: weighting coefficients).

Claim 8 is rejected as applied above in rejecting claim 1. Furthermore, An discloses: The method according to claim 1, wherein calculating a compressed size of at least one group of coefficients at the current depth comprises: calculating a compressed size of one group of coefficients at the current depth, the group of coefficients corresponding to a branch of the binary tree; and wherein in response to determining that termination criteria are satisfied, the method further comprises, prior to outputting data: repeating steps (ii)-(v) for other groups in the branch of the binary tree before repeating steps (ii)-(v) for other groups at the starting value of the current depth (see Abstract, paragraphs 0028-0034).

Claim 9 is rejected as applied above in rejecting claim 1. Furthermore, Zhang discloses: The method according to claim 1, wherein outputting data defining each of the plurality of hardware passes comprises: selecting two or more groups at the current depth (paragraph 0065: the nodes in the neural network connection are merged); comparing a combined compressed size of the selected groups to the hardware constraint (paragraph 0065); and in response to determining that the combined compressed size satisfies the hardware size constraint, merging the groups and outputting data defining each of the plurality of hardware passes based on the merged groups (paragraph 0065).

Claim 10 is rejected as applied above in rejecting claim 1. Furthermore, Zhang discloses: The method according to claim 1, wherein updating the current depth comprises increasing the current depth and wherein the termination criteria comprise: the compressed group size does not exceed the hardware size constraint (paragraph 0038: binary tree block complies with a size constraint); and the current depth is greater than the starting depth plus one (paragraph 0038).

Claim 11 is rejected as applied above in rejecting claim 1. Furthermore, An discloses: The method according to claim 1, wherein updating the current depth comprises decreasing the current depth and wherein the termination criteria comprise: the compressed group size exceeds the hardware size constraint (paragraph 0038: binary tree block complies with a size constraint); and the current depth is less than the starting depth minus one (paragraph 0038).

Claim 12 is rejected as applied above in rejecting claim 1.
Furthermore, An discloses: The method according to claim 1, wherein the hardware size constraint comprises a size of a buffer configured to store the coefficients or a bandwidth of a connection to the external memory (paragraph 0038: binary tree block complies with a size constraint).

Regarding claim 13, Zhang discloses: A non-transitory computer readable storage medium having stored thereon computer readable code configured to cause a method of mapping a neural network to hardware to be performed when the code is run, the method (paragraphs 0235-0239: mapping a neural network into hardware based on a splitting algorithm wherein compression is also provided) comprising using a binary tree to assess how to split a layer of the neural network into a plurality of hardware passes, each hardware pass reading a subset of the coefficients of the layer from external memory, wherein the coefficients are stored in compressed form in the memory and each node in the binary tree corresponds to a different subset of the coefficients, wherein using the binary tree comprises:

Zhang does not disclose the use of a binary tree, and specifically the steps of (i) determining a starting value of a current depth within the binary tree; (ii) arranging the set of coefficients into groups, each group corresponding to a node at the current depth; (iii) calculating a compressed size of at least one group of coefficients at the current depth; (iv) determining whether termination criteria are satisfied, at least one of the termination criteria being based on a comparison between the calculated compressed size and a hardware size constraint; (v) in response to determining that the termination criteria are not satisfied, updating the current depth and repeating steps (ii)-(iv); and (vi) in response to determining that the termination criteria are satisfied, outputting data defining each of the plurality of hardware passes, wherein the data is dependent upon the current depth.

Zhang discloses splitting the neural network layer into smaller units (paragraph 0014) based on hardware constraints (paragraphs 0181-0186), performing network compression (paragraphs 0036, 0047), and outputting a parameter file for the hardware neural network (paragraph 0139). However, Zhang does not use a binary tree. In an analogous art, An discloses a binary tree partitioning process which recursively generates binary tree leaf nodes until a termination condition is met (see Abstract, paragraphs 0028-0034). It would have been obvious to one of ordinary skill in the art to use a binary tree in the system of Zhang in order to improve the efficiency of the neural network (An: paragraph 0010).
Regarding claim 14, Zhang discloses: A computing device comprising: a processor (paragraph 0046: processor); and memory arranged to store computer readable code configured to cause a method of mapping a neural network to hardware to be performed when the code is executed by the processor, the method (paragraphs 0235-0239: mapping a neural network into hardware based on a splitting algorithm wherein compression is also provided) comprising using a binary tree to assess how to split a layer of the neural network into a plurality of hardware passes, each hardware pass reading a subset of the coefficients of the layer from external memory, wherein the coefficients are stored in compressed form in the memory and each node in the binary tree corresponds to a different subset of the coefficients, wherein using the binary tree comprises:

Zhang does not disclose the use of a binary tree, and specifically the steps of (i) determining a starting value of a current depth within the binary tree; (ii) arranging the set of coefficients into groups, each group corresponding to a node at the current depth; (iii) calculating a compressed size of at least one group of coefficients at the current depth; (iv) determining whether termination criteria are satisfied, at least one of the termination criteria being based on a comparison between the calculated compressed size and a hardware size constraint; (v) in response to determining that the termination criteria are not satisfied, updating the current depth and repeating steps (ii)-(iv); and (vi) in response to determining that the termination criteria are satisfied, outputting data defining each of the plurality of hardware passes, wherein the data is dependent upon the current depth.

Zhang discloses splitting the neural network layer into smaller units (paragraph 0014) based on hardware constraints (paragraphs 0181-0186), performing network compression (paragraphs 0036, 0047), and outputting a parameter file for the hardware neural network (paragraph 0139). However, Zhang does not use a binary tree. In an analogous art, An discloses a binary tree partitioning process which recursively generates binary tree leaf nodes until a termination condition is met (see Abstract, paragraphs 0028-0034). It would have been obvious to one of ordinary skill in the art to use a binary tree in the system of Zhang in order to improve the efficiency of the neural network (An: paragraph 0010).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAVEH ABRISHAMKAR, whose telephone number is (571) 272-3786. The examiner can normally be reached M-F 9-5:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jung Kim, can be reached at 571-272-3804. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KAVEH ABRISHAMKAR/
Primary Examiner, Art Unit 2494
01/23/2026
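The claimed method recited in steps (i)-(vi) of the rejection is essentially an iterative search over binary-tree depths until every coefficient group fits a hardware constraint. A minimal Python sketch of how those steps fit together, purely illustrative: the function names, the zlib call standing in for the application's unspecified compression scheme, the even byte-slicing of "coefficients", and the start-shallow-and-increase traversal (the claim 2 variant) are all assumptions, not the application's actual implementation.

```python
import zlib

# Illustrative sketch of the claimed loop, steps (i)-(vi). zlib is a stand-in
# for the application's (unspecified) coefficient compression scheme, and the
# even byte-slicing of the coefficient set is a simplifying assumption.

def compressed_size(coeffs: bytes) -> int:
    return len(zlib.compress(coeffs))

def map_layer_to_passes(coeffs: bytes, hw_constraint: int) -> list[dict]:
    """Split a layer's coefficients into hardware passes via binary-tree depth."""
    depth = 0  # (i) starting depth: start-shallow-and-increase (claim 2 variant)
    while True:
        n_groups = 2 ** depth                # one group per node at this depth
        group_len = max(1, -(-len(coeffs) // n_groups))  # ceiling division
        # (ii) arrange the coefficients into groups, one per node
        groups = [coeffs[i:i + group_len]
                  for i in range(0, len(coeffs), group_len)]
        # (iii) compressed size of the groups at the current depth
        worst = max(compressed_size(g) for g in groups)
        # (iv) termination: every pass must fit the hardware size constraint
        # (group_len == 1 is a floor guaranteeing the sketch terminates)
        if worst <= hw_constraint or group_len == 1:
            # (vi) output data defining each hardware pass (depth-dependent)
            return [{"hw_pass": i, "depth": depth, "coeffs": g}
                    for i, g in enumerate(groups)]
        depth += 1                           # (v) update depth, repeat (ii)-(iv)
```

Claim 3 describes the opposite traversal (start at the maximum depth and decrease), and claims 9-11 layer group merging and depth-window termination criteria onto the same loop.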

Prosecution Timeline

Jun 21, 2023: Application Filed
Jan 23, 2026: Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598086: TOKENIZED INDUSTRIAL AUTOMATION SOFTWARE
2y 5m to grant; granted Apr 07, 2026
Patent 12598216: SMALL-FOOTPRINT ENDPOINT DATA LOSS PREVENTION
2y 5m to grant; granted Apr 07, 2026
Patent 12585761: SYSTEM AND METHOD FOR COMBINING CYBER-SECURITY THREAT DETECTIONS AND ADMINISTRATOR FEEDBACK
2y 5m to grant; granted Mar 24, 2026
Patent 12585771: LEARNED CONTROL FLOW MONITORING AND ENFORCEMENT OF UNOBSERVED TRANSITIONS
2y 5m to grant; granted Mar 24, 2026
Patent 12579280: SYSTEMS AND METHODS FOR VULNERABILITY SCANNING OF DEPENDENCIES IN CONTAINERS
2y 5m to grant; granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 95% (+16.9%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 1020 resolved cases by this examiner. Grant probability is derived from the career allow rate.
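The headline probabilities above follow directly from the career data cited. A short sketch reproducing them; the additive interview-lift model (base rate plus lift) is an assumption about how the dashboard combines the two numbers:

```python
# Reproducing the dashboard's headline numbers from the cited career data
# (797 grants out of 1020 resolved cases, reported +16.9% interview lift).
# Treating the lift as a simple additive offset is an assumption.
granted, resolved = 797, 1020
interview_lift = 0.169

allow_rate = granted / resolved              # career allow rate
with_interview = allow_rate + interview_lift

print(f"{allow_rate:.1%}")       # 78.1%, displayed as 78%
print(f"{with_interview:.1%}")   # 95.0%, displayed as 95%
```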
