Prosecution Insights
Last updated: April 19, 2026
Application No. 17/327,869

ALGORITHMIC METHOD IN MEMORY TRANSFER EFFICIENCY FOR IMPROVED PERFORMANCE OF NEURAL NETWORKS

Final Rejection — §101, §102

Filed: May 24, 2021
Examiner: HAN, JOSEP
Art Unit: 2122
Tech Center: 2100 — Computer Architecture & Software
Assignee: Texas Instruments Incorporated
OA Round: 4 (Final)

Grant Probability: 38% (At Risk)
Expected OA Rounds: 5-6
Time to Grant: 3y 11m
Grant Probability With Interview: 62%

Examiner Intelligence

Career Allow Rate: 38% (6 granted / 16 resolved; -17.5% vs TC avg)
Interview Lift: +25.0% (resolved cases with interview)
Avg Prosecution: 3y 11m
Currently Pending: 33
Total Applications: 49 (across all art units)

Statute-Specific Performance

§101: 33.4% (-6.6% vs TC avg)
§103: 37.8% (-2.2% vs TC avg)
§102: 18.3% (-21.7% vs TC avg)
§112: 9.9% (-30.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 16 resolved cases.

Office Action

§101, §102
Detailed Action

The following action is in response to the communication(s) received on 01/09/2026. As of the claims filed 01/09/2026: Claims 1, 8, and 15 have been amended. Claims 1-20 are pending. Claims 1, 8, and 15 are independent claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 01/09/2026 have been fully considered, but are not fully persuasive.

With respect to the rejection under 35 USC § 101: (p.7) Applicant asserts that the amendments emphasize the technical improvements to computer memory management, such as the determined smoothed amount of memory detailing the memory change threshold, which aids in the execution of a CNN for a given hardware resource. Examiner respectfully submits that the claims as currently recited do not recite the actions of the resulting abstract ideas taking place in a technological environment. In other words, execution of the improved smoothed memory of the CNN layers is not recited in the claims. Thus, the broadest reasonable interpretation of the claims does not exclude the abstract ideas being performed solely in the human mind, and the claims remain reciting abstract ideas without significantly more.

(p.9) Applicant further asserts that the dependent claims are eligible by virtue of dependency on claim 1. Examiner respectfully submits that claim 1 remains ineligible, in view of the responses above, and thus the dependent claims remain ineligible by virtue of dependency. Claims 8-20 also remain ineligible by virtue of substantial similarity to claims 1-7. Thus, the claims remain directed to an abstract idea without significantly more.
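For orientation, the "smoothed amount of memory" at issue behaves like a moving average over per-layer memory requirements. The sketch below is an illustration only; the window size, the centered-average form, and the function name are assumptions, not the application's disclosed implementation.

```python
def smoothed_memory(per_layer_mem, window=1):
    """Centered moving average of per-layer memory requirements.

    `window` is an assumed count of adjacent layers considered on each
    side; at the edges the window simply shrinks rather than padding.
    """
    smoothed = []
    for i in range(len(per_layer_mem)):
        lo = max(0, i - window)
        hi = min(len(per_layer_mem), i + window + 1)
        neighborhood = per_layer_mem[lo:hi]
        smoothed.append(sum(neighborhood) / len(neighborhood))
    return smoothed

# Per-layer memory (arbitrary units) for a hypothetical 6-layer network.
print(smoothed_memory([10, 12, 11, 40, 42, 41]))
```

Smoothing suppresses layer-to-layer noise so that only sustained changes in the memory profile register as candidate group boundaries.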
On p.8 of the response, with respect to the rejection under 35 USC § 102:

(¶2-3) Applicant asserts that Lym does not teach the moving window function applied to the amount of memory for the particular layer and the respective amount of memory of each of a determined number of adjacent layers of the machine learning network. Examiner respectfully disagrees. Processing in each of the sub-batch iterations (Lym [p.3-4]) requires a movement in each layer window, which corresponds to determining a smoothed amount of memory based on a moving window function.

(¶3) Applicant further asserts that Lym does not identify transitions between adjacent layers, including comparing the smoothed amount of memory to a dynamically determined memory change threshold amount that is based on the available memory of the device. This is unpersuasive, as Lym [p.7 left ¶3] teaches that the off-chip DRAM corresponds to the available memory of the device; rearranging based on the per-layer memory footprints corresponds to dynamically determining the memory change threshold amount.

(¶4) Applicant further asserts that Lym does not use the smoothed amount of memory of each layer for the grouping, because no smoothed amount of memory of any layer is determined prior to the grouping. Examiner respectfully submits that the sub-batch iterations (Lym [p.4 left ¶2]) correspond to the identified transitions; merging iteratively (Lym [p.3 2nd col. last ¶]) involves reducing the sub-batch size with the adjacent groups, which corresponds to the grouping of the multiple layers based on the identified transitions.

(¶5) Applicant further asserts that Lym does not teach reducing the memory transfers between the memory hierarchy of the device.
Examiner respectfully disagrees, as Lym teaches reducing the memory transfers ([abstract] "avoid traffic resulting from large per-layer memory footprints") between different levels of the memory hierarchy of the device (Lym [fig.1]: off-chip memory and on-chip memory correspond to the different hierarchies of memory for the device). Thus, the claims remain anticipated by Lym.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1 recites a method, thus a process, one of the four statutory categories of patentable subject matter (Step 1). However, Claim 1 further recites:

determining… an amount of memory… to process each layer of multiple layers of a machine learning network; which is an evaluation or judgment that can be performed in the human mind;

determining… a smoothed amount of memory… for each of the multiple layers based on a moving window function applied to the amount of memory for that layer and the respective amount of memory of each of a determined number of adjacent layers of the machine learning network; which is an evaluation or judgment that can be performed in the human mind;

identifying… transitions between adjacent layers, including comparing the smoothed amount of memory to a dynamically determined memory change threshold amount that is based on available memory of the device; which is an evaluation or judgment that can be performed in the human mind;

grouping… the multiple layers of the machine learning network into a first layer grouping based on the identified transitions to reduce memory transfer overhead between different levels of a memory hierarchy of the device, which is an evaluation or judgment that can be performed in the human mind.

Thus, the claim recites an abstract idea under Step 2A Prong 1.

Under Step 2A Prong 2, the claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of:

by the processing circuitry of a device…; on the device…, as the performance of an abstract idea on a computer is not more than instructions to "apply it" on a computer, which by MPEP 2106.05(f) cannot integrate an abstract idea into a practical application;

outputting… the first layer grouping configured to cause a respective set of layers of a group of the first layer grouping to complete prior to execution of a subsequent group of the first layer grouping, which is merely an insignificant extra-solution activity of data output, which by MPEP 2106.05(g) cannot integrate an abstract idea into a practical application.

Thus, the claim is directed towards an abstract idea. Further, the additional elements, alone or in combination, do not provide significantly more than the abstract idea itself, because implementation on a computer (MPEP 2106.05(f)) and the activity of data output (MPEP 2106.05(g)) cannot provide significantly more, as receiving or transmitting data over a network is well understood, routine, and conventional (MPEP 2106.05(d)(II)(i)), and the combination of additional elements does not provide an inventive concept. Thus, the claim is ineligible.
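The transition-identification and grouping limitations recited above can be sketched as follows. The rule for deriving the threshold from available device memory (a fixed fraction) and the cut-at-transition grouping are illustrative assumptions; neither the claims nor this Office Action bind the method to this particular formula.

```python
def group_layers(smoothed, available_mem, threshold_fraction=0.25):
    """Cut the layer sequence wherever the smoothed memory of two
    adjacent layers differs by more than a threshold derived from the
    device's available memory; each segment becomes one layer group."""
    # Hypothetical rule: threshold scales with available device memory.
    threshold = available_mem * threshold_fraction
    groups, current = [], [0]
    for i in range(1, len(smoothed)):
        if abs(smoothed[i] - smoothed[i - 1]) > threshold:
            # Transition identified: close the current group here.
            groups.append(current)
            current = [i]
        else:
            current.append(i)
    groups.append(current)
    return groups

# Smoothed per-layer memory profile (arbitrary units) with one sharp jump.
print(group_layers([10.0, 11.0, 12.0, 40.0, 41.0, 42.0], available_mem=32))
# → [[0, 1, 2], [3, 4, 5]]
```

Under these assumptions, the single large jump between layers 2 and 3 exceeds the threshold (32 × 0.25 = 8 units), so the network is split into two groups at that boundary.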
Claim 2, dependent on Claim 1, further recites: modeling the machine learning network based on the first layer grouping; associating a first cost with the first layer grouping (mental process); generating a second layer grouping by adjusting a group boundary of the first layer grouping (mental process); and modeling the machine learning network based on the second layer grouping; associating a second cost with the second layer grouping (mental process). As these all fall within the mental process grouping of abstract ideas, Claim 2 thus recites an abstract idea. The claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional element consists of outputting a lower cost layer grouping based on a comparison between the first cost and the second cost, which is an insignificant extra-solution activity of data output, which by MPEP 2106.05(g) cannot integrate an abstract idea into a practical application. Thus, the claim is directed towards the abstract idea. Further, the additional element does not provide significantly more than the abstract idea itself, because the activity of data output (MPEP 2106.05(g)) cannot provide significantly more than the abstract idea itself. Thus, the claim is subject matter ineligible.

Claim 3, dependent on claim 2, further recites a mental process of expecting (the first and second costs are based on at least one of an expected number of memory accesses or processing cycles). It does not recite any new additional elements which could integrate the abstract idea into a practical application or provide significantly more than the abstract idea itself.

Claim 4, dependent on claim 2, further recites a mental process of adjusting (the group boundary is adjusted within a predefined range of values around the group boundary). It does not recite any new additional elements which could integrate the abstract idea into a practical application or provide significantly more than the abstract idea itself.

Claims 5 and 6, dependent on claim 1, merely recite details of the mental process of the grouping of the first layer (the first layer grouping comprises a first set of layers and a second set of layers; a first number of layers of the first set of layers differs from a second number of layers of the second set of layers). Neither claim recites any new additional elements which could integrate the abstract idea into a practical application or provide significantly more than the abstract idea itself.

Claim 7, dependent on claim 1, further recites: determining a minimum number of tiles for the layers of the first layer grouping based on the amount of memory used by the layers (a mental process that can be done with pen and paper); determining a number of tiles for a last layer of the first layer grouping based on the minimum number of tiles (a mental process that can be done with pen and paper); and determining the number of tiles for other layers of the first layer grouping based on the number of tiles for the last layer (a mental process that can be done with pen and paper). As it does not recite any additional elements, it recites an abstract idea. Thus, the claim is subject matter ineligible.

Claims 8-14 recite a non-transitory computer readable storage medium storing instructions for performing precisely the methods of Claims 1-7, respectively. As performance on a computer cannot integrate an abstract idea into a practical application nor provide significantly more than the abstract idea itself (MPEP 2106.05(f)), Claims 8-14 are rejected as subject-matter ineligible for the reasons set forth in the rejections of Claims 1-7, respectively.

Claim 15 recites a… device, thus an article of manufacture, one of the four statutory categories of patentable subject matter (Step 1).
However, Claim 15 further recites:

determine a smoothed amount of memory of the memory device for each of the multiple layers based on the amount of memory for that layer and the respective amount of memory of each of a set number of adjacent layers; which is an evaluation or judgment that can be performed in the human mind;

identify transitions between adjacent layers, in which, for each transition, the smoothed amount of memory differs from one layer of the adjacent layers of the transition to the other layer of the adjacent layers of the transition by more than a memory change threshold amount; which is an evaluation or judgment that can be performed in the human mind; and

group the multiple layers of the machine learning network into a first layer grouping based on the identified transitions, which is an evaluation or judgment that can be performed in the human mind.

Thus, the claim recites an abstract idea under Step 2A Prong 1.

Under Step 2A Prong 2, the claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of:

a device, comprising: a memory device; and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute non-transitory instructions causing the one or more processors to:, as the performance of an abstract idea on a computer is not more than instructions to "apply it" on a computer, which by MPEP 2106.05(f) cannot integrate an abstract idea into a practical application;

output the first layer grouping configured to cause groups of the first layer grouping to execute in sequence, which is merely an insignificant extra-solution activity of data output, which by MPEP 2106.05(g) cannot integrate an abstract idea into a practical application.

Thus, the claim is directed towards an abstract idea. Further, the additional elements, alone or in combination, do not provide significantly more than the abstract idea itself, because implementation on a computer (MPEP 2106.05(f)) and the activity of data output (MPEP 2106.05(g)) cannot provide significantly more, as receiving or transmitting data over a network is well understood, routine, and conventional (MPEP 2106.05(d)(II)(i)), and the combination of additional elements does not provide an inventive concept. Thus, the claim is ineligible.

Claims 16-20 recite a computer system comprising one or more processors, thus an article of manufacture, one of the four statutory categories of patentable subject matter. However, Claims 16-20 further recite that this computer system comprises instructions for performing precisely the methods of Claims 2-5 and 7, respectively. As performance on a computer cannot integrate an abstract idea into a practical application nor provide significantly more than the abstract idea itself (MPEP 2106.05(f)), Claims 16-20 are rejected as subject-matter ineligible for the reasons set forth in the rejections of Claims 2-5 and 7, respectively.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sangkug Lym et al., "Mini-Batch Serialization: CNN Training with Inter-Layer Data Reuse" (hereinafter Lym).

Regarding claim 1, Lym teaches:

A method comprising: determining, by a processing circuitry of a device (Lym [p.10 2nd col 2nd ¶]: MBS reconfigures the CNN computation graph by partitioning a mini-batch of samples into sub-batches whose memory footprint fits within on-chip storage.) (Note: CNN computation corresponds to the use of processing circuitry)

an amount of memory on the device to process each layer of multiple layers of a machine learning network (Lym [p.3 2nd col. last ¶]: The MBS algorithm forms initial layer groups by grouping adjacent layers that require the same number of sub-batch iterations. This is shown in Fig. 4, where grey vertical bars represent the data volume required for the inter-layer data per layer (or one multi-branch module block) of ResNet50, and the red line represents the resulting minimal sub-batch iteration count for each layer. [p.4 1st col 2nd ¶]: The mini-batch is then processed in several sub-batch iterations (⌈mini-batch size / sub-batch size⌉) within each group as shown in Fig. 5, which emphasizes how locality is increased and memory traffic reduced across features and weights.) (Note: the grey bars correspond to the determined amount of memory used)

determining, by the processing circuitry, a smoothed amount of memory on the device for each of the multiple layers based on a moving window function applied to the amount of memory for that layer and the respective amount of memory of each of a determined number of adjacent layers of the machine learning network (Lym [p.3 2nd col. last ¶]: The MBS algorithm forms initial layer groups by grouping adjacent layers that require the same number of sub-batch iterations. This is shown in Fig. 4, where grey vertical bars represent the data volume required for the inter-layer data per layer (or one multi-branch module block) of ResNet50, and the red line represents the resulting minimal sub-batch iteration count for each layer. [Fig. 4 image omitted] [p.4 left ¶2]: The mini-batch is then processed in several sub-batch iterations (⌈mini-batch size / sub-batch size⌉) within each group as shown in Fig. 5, which emphasizes how locality is increased and memory traffic reduced across features and weights. [Fig. 5 image omitted]) (Note: the blue line corresponds to the smoothed amount of memory; the grey bars correspond to the respective amount of memory; processing in each of the sub-batch iterations requires a movement in each layer window, which corresponds to determining a smoothed amount of memory based on a moving window function)

identifying, by the processing circuitry, transitions between adjacent layers, including comparing the smoothed amount of memory to a dynamically determined memory change threshold amount that is based on available memory of the device (Lym [p.1-2]: MBS optimizes sub-batch sizes and layer grouping to balance data reuse between layers with reuse of parameters (weights) within a layer—weights are re-read for every sub-batch. [p.2 2nd col 2nd ¶]: In both phases there is direct producer-consumer locality between layers—inter-layer data that can be buffered if it is not too large. [p.7 left ¶3]: Our baseline WaveCore uses a single HBM2 stack with 4 dice (Joi, 2016), which provides 8GiB off-chip DRAM with 300GiB/s data bandwidth over 8 channels (4 channels per core). [abstract]: We find that bandwidth today is over-provisioned because most memory accesses in CNN training can be eliminated by rearranging computation to better utilize on-chip buffers and avoid traffic resulting from large per-layer memory footprints.)

(Note: identifying where the inter-layer data is too large corresponds to identifying transitions between adjacent layers exceeding the threshold amount of memory change; the off-chip DRAM corresponds to the available memory of the device; rearranging based on the per-layer memory footprints corresponds to dynamically determining the memory change threshold amount.);

grouping, by the processing circuitry, the multiple layers of the machine learning network into a first layer grouping based on the identified transitions… (Lym [p.4 left ¶2]: The mini-batch is then processed in several sub-batch iterations… [p.3 2nd col. last ¶]: Then, layer groups are merged to improve overall locality: groups are merged by reducing the sub-batch size of one group to that of an adjacent group.) (Note: the sub-batch iterations correspond to the identified transitions; merging involves reducing the sub-batch size with the adjacent groups, which corresponds to the grouping of the multiple layers based on the identified transitions.)
…to reduce memory transfer overhead between different levels of a memory hierarchy of the device (Lym [abstract]: We find that bandwidth today is over-provisioned because most memory accesses in CNN training can be eliminated by rearranging computation to better utilize on-chip buffers and avoid traffic resulting from large per-layer memory footprints… [fig.1] [image omitted]) (Note: the on-chip memory and the off-chip memory correspond to the different levels of the memory hierarchy of the device);

and outputting, by the processing circuitry, the first layer grouping configured to cause a respective set of layers of a group of the first layer grouping to complete prior to execution of a subsequent group of the first layer grouping (Lym [image omitted]) (Note: the data flow from Group 1 to Group 2 corresponds to the first layer grouping completing prior to execution of a subsequent group of the first layer grouping.)

Regarding claim 2, which is dependent on claim 1, Lym further teaches: The method of claim 1, further comprising: modeling the machine learning network based on the first layer grouping (Lym [p.3]: Optimizing layer groups balances intra- and inter-layer locality tradeoffs. The MBS algorithm forms initial layer groups by grouping adjacent layers that require the same number of sub-batch iterations.) [Note: In the main application specification: While this example CNN includes two layers, it may be understood that other CNNs can include any number of layers. Having adjacent layers implies there exists a first layer grouping.];

associating a first cost with the first layer grouping; generating a second layer grouping by adjusting a group boundary of the first layer grouping (Lym [p.3]: This is shown in Fig. 4, where grey vertical bars represent the data volume required for the inter-layer data per layer (or one multi-branch module block) of ResNet50, and the red line represents the resulting minimal sub-batch iteration count for each layer. [fig 4]: red line = min. iterations = cost; [fig 5]: the boundaries of g1 and g2 are being changed);

modeling the machine learning network based on the second layer grouping; associating a second cost with the second layer grouping (Lym [p.3-4]: Then, layer groups are merged to improve overall locality: groups are merged by reducing the sub-batch size of one group to that of an adjacent group. The first group then requires more iterations (with more weight and gradient accesses), but inter-layer reuse increases across the two layers where the groups meet.);

and outputting a lower cost layer grouping based on a comparison between the first cost and the second cost (Lym [p.3]: The MBS algorithm forms initial layer groups by grouping adjacent layers that require the same number of sub-batch iterations).

Regarding claim 3, which is dependent on claim 2, Lym further teaches: The method of claim 2, wherein the first and second costs are based on at least one of an expected number of memory accesses or processing cycles (Lym [p.7]: To avoid duplicated data loads from the global buffer, we have memory load coalescing units that maintain high effective bus bandwidth utilization.).

Regarding claim 4, which is dependent on claim 2, Lym further teaches: The method of claim 2, wherein the group boundary is adjusted within a predefined range of values around the group boundary (Lym [Tab.3]: MBS1: IL + greedy layer grouping; [footnote 1]: We also experimented with an optimal grouping of layers using exhaustive search, which improved traffic and performance by roughly 1% compared to our greedy optimization.). [Note: A greedy search is a non-exhaustive search (using predefined values that are at most the size of the array), which implies that Lym uses a rolling window to find the local optima for the layer grouping.]

Regarding claim 5, which is dependent on claim 1, Lym further teaches: The method of claim 1, wherein the first layer grouping comprises a first set of layers and a second set of layers (Lym [Fig 5]: group 1, group 2).

Regarding claim 6, which is dependent on claim 5, Lym further teaches: The method of claim 5, wherein a first number of layers of the first set of layers differs from a second number of layers of the second set of layers (Lym [Fig 5]: group 1 and group 2 have different numbers of layers).

Regarding claim 7, which is dependent on claim 1, Lym further teaches: The method of claim 1, further comprising: determining a minimum number of tiles for the layers of the first layer grouping based on the amount of memory used by the layers; determining a number of tiles for a last layer of the first layer grouping based on the minimum number of tiles; and determining the number of tiles for other layers of the first layer grouping based on the number of tiles for the last layer (Lym [p.3]: MBS goes much further and balances locality of intra-layer weight reuse and parallelism with inter-layer locality. We do this by varying the number of samples per sub-batch across layers such that layers that can support more samples require fewer iterations and can benefit from the greater parallelism and locality.). [In spec: "a minimum number of tiles (e.g., passes) needed to process the layer while keeping memory usage of the tile within the amount of memory available on the target hardware resource may be determined."]

Independent Claim 8 recites a non-transitory program storage device comprising instructions stored thereon to perform precisely the method of Claim 1. Claims 9-14, dependent on Claim 8, are rejected for the reasons set forth for Claims 2-7, respectively.
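The tile-count determinations of claim 7 can be sketched as follows. The ceiling-division rule and the propagation of the last layer's tile count to the remaining layers are assumptions made for illustration; neither the claim nor the quoted specification passage spells out a formula.

```python
import math

def tile_counts(group_mem, available_mem):
    """Illustrative claim-7-style tiling: a minimum tile count keeps each
    layer's per-tile memory within the available device memory; the last
    layer uses that minimum, and the other layers follow the last layer."""
    # Minimum tiles so that even the largest layer's tiles fit in memory.
    min_tiles = math.ceil(max(group_mem) / available_mem)
    last_layer_tiles = min_tiles
    other_layer_tiles = [last_layer_tiles] * (len(group_mem) - 1)
    return other_layer_tiles + [last_layer_tiles]

# Three layers needing 40/41/42 memory units with 16 units available.
print(tile_counts([40, 41, 42], available_mem=16))  # → [3, 3, 3]
```

The point of deriving all counts from the last layer's count, as the claim recites, is that every layer in the group then runs the same number of passes, so intermediate data can flow tile-by-tile between layers.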
Regarding independent claim 15, Lym teaches:

A device, comprising: a memory; and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute non-transitory instructions causing the one or more processors to: (Lym [p.10 2nd col 2nd ¶]: MBS reconfigures the CNN computation graph by partitioning a mini-batch of samples into sub-batches whose memory footprint fits within on-chip storage.) (Note: CNN computation corresponds to one or more processors executing non-transitory instructions)

determine a smoothed amount of memory for each of the layers based on a number of adjacent layers (Lym [p.3 2nd col. last ¶]: The MBS algorithm forms initial layer groups by grouping adjacent layers that require the same number of sub-batch iterations. This is shown in Fig. 4, where grey vertical bars represent the data volume required for the inter-layer data per layer (or one multi-branch module block) of ResNet50, and the red line represents the resulting minimal sub-batch iteration count for each layer. [Fig. 4 image omitted]) (Note: the initial layer groups correspond to the determined smoothed amount of memory; the inter-layer data size corresponds to the respective amount of memory.)

identify change layers where the respective smoothed amount of memory differs from the respective smoothed memory of an adjacent layer by more than a memory change threshold amount (Lym [p.2 2nd col 2nd ¶]: In both phases there is direct producer-consumer locality between layers—inter-layer data that can be buffered if it is not too large.) (Note: identifying where the inter-layer data is too large corresponds to identifying change layers exceeding the threshold amount of memory change.)

group the layers of the machine learning network into a first layer grouping based on the identified change layers (Lym [p.3 2nd col. last ¶]: Then, layer groups are merged to improve overall locality: groups are merged by reducing the sub-batch size of one group to that of an adjacent group.)

and output the first layer grouping configured to cause groups of the first layer grouping to execute in sequence (Lym [image omitted]) (Note: the data flow from Group 1 to Group 2 corresponds to the layer groupings executing in sequence.)

Claims 16-20, dependent on Claim 15, recite the device of claim 15, wherein the instructions further cause the one or more processors to perform precisely the methods of Claims 2-5 and 7, respectively. Thus, they are rejected for the reasons set forth for Claims 2-5 and 7, respectively.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEP HAN, whose telephone number is (703) 756-1346. The examiner can normally be reached Mon-Fri 9am-5pm.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kakali Chaki, can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.H./ Examiner, Art Unit 2122
/KAKALI CHAKI/ Supervisory Patent Examiner, Art Unit 2122

Prosecution Timeline

May 24, 2021
Application Filed
Jul 02, 2024
Non-Final Rejection — §101, §102
Nov 01, 2024
Response Filed
Jan 30, 2025
Final Rejection — §101, §102
Jun 06, 2025
Request for Continued Examination
Jun 10, 2025
Response after Non-Final Action
Jul 10, 2025
Non-Final Rejection — §101, §102
Jan 09, 2026
Response Filed
Feb 02, 2026
Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585965
INTERACTIVE MACHINE-LEARNING FRAMEWORK
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the single most recent grant.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 38%
With Interview: 62% (+25.0%)
Median Time to Grant: 3y 11m
PTA Risk: High
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
