Prosecution Insights
Last updated: April 19, 2026
Application No. 17/798,373

DATA COMMUNICATION BETWEEN A HOST COMPUTER AND AN FPGA

Status: Final Rejection (§103)
Filed: Aug 09, 2022
Examiner: LEE, CHUN KUAN
Art Unit: 2181
Tech Center: 2100 — Computer Architecture & Software
Assignee: Telefonaktiebolaget LM Ericsson (publ)
OA Round: 6 (Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 7-8
Time to Grant: 3y 4m
With Interview: 71%

Examiner Intelligence

Career Allow Rate: 68% — above average (455 granted / 669 resolved; +13.0% vs TC avg)
Interview Lift: +3.1% — minimal (based on resolved cases with interview)
Avg Prosecution: 3y 4m (typical timeline)
Currently Pending: 32
Total Applications: 701 (across all art units)

Statute-Specific Performance

§101: 1.7% (-38.3% vs TC avg)
§103: 79.4% (+39.4% vs TC avg)
§102: 3.3% (-36.7% vs TC avg)
§112: 3.5% (-36.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 669 resolved cases.

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

RESPONSE TO ARGUMENTS

Applicant's arguments with respect to claims 1-2, 4-10 and 21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

I. REJECTIONS BASED ON PRIOR ART

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-10, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Jacobsen et al. (RIFFA 2.1 from IDS dated 8/9/2022) in view of Zhao et al. (US Pub.: 2020/0304426) and YI (US Pub.: 2017/0123974).
As per claim 1, Jacobsen teaches/suggests a method for data communication between applications of a host computer and partitions of resources of an FPGA, each partition being configured to serve a respective one of the applications (e.g. associated with communication channels between software threads on the CPU and the user cores on the FPGA: Fig. 3; Fig. 7-8; Section 2 on pages 22:3 to 22:4), and the host computer being configured to run the applications (e.g. associated with user application in Fig. 3), the method being performed by the host computer, the method comprising: communicating, over a PCIe interface provided between the host computer and the FPGA (e.g. associated with PCI Express Link on Fig. 3), data between the applications and the partitions of resources (e.g. associated with architecture running in both upstream and downstream directions: Section 3 on page 22:9), wherein the communicating comprises: concatenating data from two or more of the applications into a single fixed-size PCIe data transaction (e.g. associated with simultaneously accessing the cores by the software threads on the CPU via a single cycle over 128-bit interface that contains different PCIe transactions: Section 2 on page 22:3 and Section 3.1 on pages 22:9 to 22:10); and transmitting the single fixed-size PCIe data transaction over the PCIe interface in a data transfer cycle (e.g. associated with communicating data from two different PCIe transactions via a single cycle on 128-bit interface: Section 3.1 on pages 22:9 to 22:10); wherein each application share of bandwidth resources of the PCIe interface, and bandwidth resources of the PCIe interface are distributed between the applications (Fig. 3; Fig. 7-8; Section 2 on pages 22:3 to 22:4; Section 2.2 on pages 22:6 to 22:7; Section 3 to 3.1.3 on pages 22:9 to 22:11; and Section 3.2 to 3.2.2 on pages 22:15 to 22:18).
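The concatenation step mapped onto Jacobsen above can be sketched in the abstract. This is an illustrative model only, not code from the application or the cited art; the 128-byte transaction size and all names (`pack_cycle`, `app_a`, etc.) are assumptions chosen for the example:

```python
# Illustrative sketch: packing data from several applications into one
# fixed-size transaction per data transfer cycle, where each application's
# configured bandwidth share is expressed as a byte count.
TRANSACTION_SIZE = 128  # hypothetical fixed transaction size in bytes

def pack_cycle(app_data: dict, shares: dict) -> bytes:
    """Concatenate each application's data into its share of one transaction."""
    assert sum(shares.values()) == TRANSACTION_SIZE  # shares fill the transaction
    buf = bytearray(TRANSACTION_SIZE)
    offset = 0
    for app, share in shares.items():
        chunk = app_data.get(app, b"")[:share]  # clip to the configured share
        buf[offset:offset + len(chunk)] = chunk
        offset += share  # next application's region starts after this share
    return bytes(buf)

shares = {"app_a": 64, "app_b": 32, "app_c": 32}
tx = pack_cycle({"app_a": b"A" * 64, "app_b": b"B" * 32, "app_c": b"C" * 32}, shares)
assert len(tx) == TRANSACTION_SIZE  # one fixed-size transaction per cycle
```

The point of the sketch is that the transaction size is fixed per cycle regardless of how many applications contribute, which is the feature the rejection reads onto Jacobsen's shared 128-bit interface.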
Jacobsen does not teach the method comprising writing start and end offsets of data into a register file, the start and end offsets indicating locations; and being allocated its own configured, and all bandwidth resources of the PCIe interface are operating according to all the configured shares of bandwidth resources when the data is communicated. Zhao teaches/suggests a method comprising: having corresponding location (e.g. associated with lane buffer (123) in Fig. 2); being allocated its own configured (e.g. associated with dynamic allocation of bandwidth for accelerator: [0044]), and all bandwidth resources of the PCIe interface are operating according to all the configured shares of bandwidth resources when the data is communicated (e.g. associated with bandwidth in root-interconnect communication channel (121) being equally shared by endpoints: [0022]-[0023]; and [0031]) (Fig. 2-3; [0019]-[0033]; [0038]; [0044]; and [0051]-[0055]). YI teaches/suggests a method comprising: writing start and end offsets of data into a register file, the start and end offsets indicating locations (e.g. equate to register storing information on the allocated area of the buffer memory: [0130]-[0132]) It would have been obvious for one of ordinary skill in this art, before the effective filing date of the claimed invention, to include Zhao’s dynamic bandwidth allocation and YI’s buffer management into Jacobsen’s method for the benefit of providing quality of service support on memory communication for accelerators (Zhao, [0025]), and securing the integrity of data being transferred (YI, [0130]) to obtain the invention as specified in claim 1. As per claim 2, Jacobsen, Zhao and YI teach/suggest all the claimed features of claim 1 above, where Jacobsen and Zhao further teach/suggest the method further comprising: allocating the bandwidth resources of the PCIe interface to the applications according to the configured shares of bandwidth resources before the data is communicated (e.g. 
associated with Fig. 6, ref. 1060 of Zhao); wherein the data, per each data transfer cycle, is either communicated from the host computer to the FPGA or from the FPGA to the host computer (e.g. associated with communication in both upstream and downstream directions: Section 3 on page 22:9 of Jacobsen) (Jacobsen, Fig. 3; Fig. 7-8; Section 2 on pages 22:3 to 22:4; Section 2.2 on pages 22:6 to 22:7; Section 3 to 3.1.3 on pages 22:9 to 22:11; Section 3.2.1 to 3.2.2 on pages 22:16 to 22:18; and Zhao, Fig. 2-3; Fig. 6; [0019]-[0033]; [0038]; [0044]; [0051]-[0055]). As per claim 4, Jacobsen, Zhao and YI teach/suggest all the claimed features of claim 2 above, where Jacobsen and Zhao further teach/suggest the method comprising: wherein one fixed-size PCIe data transaction is communicated per each data transfer cycle (e.g. associated with a particular/fixed format of data used when communicating in accordance with the PCIe protocol from one cycle to the next cycle: Jacobsen, Section 3.1 on pages 22:9 to 22:10) (Jacobsen, Fig. 3; Fig. 7-8; Section 2 on pages 22:3 to 22:4; Section 2.2 on pages 22:6 to 22:7; Section 3 to 3.1.3 on pages 22:9 to 22:11; Section 3.2.1 to 3.2.2 on pages 22:16 to 22:18; and Zhao, Fig. 2-3; Fig. 6; [0019]-[0033]; [0038]; [0044]; [0051]-[0055]). As per claim 5, Jacobsen, Zhao and YI teach/suggest all the claimed features of claim 2 above, where Jacobsen and Zhao further teach/suggest the method comprising: wherein all bandwidth resources of the PCIe interface, per data transfer cycle, collectively define the fixed-size PCIe data transaction, and wherein each configured share of bandwidth resources is by the host computer translated to read/write offsets within the fixed-size PCIe data transaction (Jacobsen, Fig. 3; Fig. 7-8; Section 2 to 2.1.1 on pages 22:3 to 22:5; Section 2.2 on pages 22:6 to 22:7; Section 3 to 3.1.3 on pages 22:9 to 22:11; Section 3.2 to 3.2.2 on pages 22:15 to 22:18; and Zhao, Fig. 2-3; Fig.
6; [0019]-[0033]; [0038]; [0044]; [0051]-[0055]). As per claim 6, Jacobsen, Zhao and YI teach/suggest all the claimed features of claim 5 above, where Jacobsen and Zhao further teach/suggest the method comprising: wherein the read/write offsets are communicated from the host computer to the FPGA (Jacobsen, Fig. 3; Fig. 7-8; Section 2 to 2.1.1 on pages 22:3 to 22:5; Section 2.2 on pages 22:6 to 22:7; Section 3 to 3.1.3 on pages 22:9 to 22:11; Section 3.2 to 3.2.2 on pages 22:15 to 22:18; and Zhao, Fig. 2-3; Fig. 6; [0019]-[0033]; [0038]; [0044]; [0051]-[0055]). As per claim 7, Jacobsen, Zhao and YI teach/suggest all the claimed features of claim 4 above, where Jacobsen and Zhao further teach/suggest the method comprising: wherein communicating the data, between the host computer and the FPGA, comprises converting one fixed-size PCIe data transaction per each data transfer cycle into at least two direct memory access requests (Jacobsen, Fig. 3; Fig. 7-8; Section 2 to 2.1.1 on pages 22:3 to 22:5; Section 2.2 on pages 22:6 to 22:7; Section 3 to 3.1.3 on pages 22:9 to 22:11; Section 3.2 to 3.2.2 on pages 22:15 to 22:18; and Zhao, Fig. 2-3; Fig. 6; [0019]-[0033]; [0038]; [0044]; [0051]-[0055]), wherein it would have been an obvious design choice to one of ordinary skill in the art to further implement the above claimed features as data is communicated over the PCIe interface. As per claim 8, Jacobsen, Zhao and YI teach/suggest all the claimed features of claim 7 above, where Jacobsen and Zhao further teach/suggest the method comprising: wherein the PCIe interface is composed of direct memory access channels, and wherein there are at least as many direct memory access requests as there are direct memory access channels (Jacobsen, Fig. 3; Fig. 7-8; Section 2 to 2.1.1 on pages 22:3 to 22:5; Section 2.2 on pages 22:6 to 22:7; Section 3 to 3.1.3 on pages 22:9 to 22:11; Section 3.2 to 3.2.2 on pages 22:15 to 22:18; and Zhao, Fig. 2-3; Fig.
6; [0019]-[0033]; [0038]; [0044]; [0051]-[0055]), wherein it would have been an obvious design choice to one of ordinary skill in the art to further implement the above claimed features as data is communicated over the PCIe interface. As per claim 9, Jacobsen, Zhao and YI teach/suggest all the claimed features of claim 8 above, where Jacobsen and Zhao further teach/suggest the method comprising: wherein the at least two direct memory access requests are instantiated in parallel across all the direct memory access channels, and wherein the data is distributed among the direct memory access channels according to the configured shares of bandwidth resources (Jacobsen, Fig. 3; Fig. 7-8; Section 2 to 2.1.1 on pages 22:3 to 22:5; Section 2.2 on pages 22:6 to 22:7; Section 3 to 3.1.3 on pages 22:9 to 22:11; Section 3.2 to 3.2.2 on pages 22:15 to 22:18; and Zhao, Fig. 2-3; Fig. 6; [0019]-[0033]; [0038]; [0044]; [0051]-[0055]), wherein it would have been an obvious design choice to one of ordinary skill in the art to further implement the above claimed features as data is communicated over the PCIe interface. As per claim 10, Jacobsen, Zhao and YI teach/suggest all the claimed features of claim 1 above, where Jacobsen and Zhao further teach/suggest the method comprising: wherein the bandwidth resources of the PCIe interface are given in units of 32 bytes per data transfer cycle, and wherein each configured share of bandwidth resources of the PCIe interface is given as a multiple of 32 bytes (Jacobsen, Fig. 3; Fig. 7-8; Section 2 to 2.1.1 on pages 22:3 to 22:5; Section 2.2 on pages 22:6 to 22:7; Section 3 to 3.1.3 on pages 22:9 to 22:11; Section 3.2 to 3.2.2 on pages 22:15 to 22:18; and Zhao, Fig. 2-3; Fig. 6; [0019]-[0033]; [0038]; [0044]; [0051]-[0055]), wherein it would have been an obvious design choice to one of ordinary skill in the art to further implement the above claimed features as data is communicated over the PCIe interface.
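Claims 1 and 10 together describe translating each application's configured share (a multiple of 32 bytes) into start/end offsets written to a register file. A hedged sketch of that translation, with all names invented for illustration (nothing here comes from the application, YI, or Zhao):

```python
# Illustrative only: deriving per-application [start, end) offsets from
# configured bandwidth shares given in multiples of 32 bytes, as recited
# in claims 10/20. The "register file" is modeled as a plain dict.
UNIT = 32  # bytes per bandwidth unit per data transfer cycle

def offsets_from_shares(shares: dict) -> dict:
    """Map each application's share to a (start, end) offset pair."""
    regfile = {}
    start = 0
    for app, share in shares.items():
        assert share % UNIT == 0, "each share must be a multiple of 32 bytes"
        regfile[app] = (start, start + share)  # end offset is exclusive
        start += share
    return regfile

print(offsets_from_shares({"app_a": 64, "app_b": 32}))
# {'app_a': (0, 64), 'app_b': (64, 96)}
```

Consecutive shares tile the fixed-size transaction without gaps, so the offsets fully determine where each application reads or writes within it.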
As per claim 22, claim 22 is rejected in accordance with the same rationale and reasoning as the above rejection of claim 1. Claims 11, 13-20, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Jacobsen et al. (RIFFA 2.1 from IDS dated 8/9/2022) in view of Zhao et al. (US Pub.: 2020/0304426). As per claim 11, Jacobsen teaches/suggests a method for data communication between partitions of resources of an FPGA and applications of a host computer, each partition being configured to serve a respective one of the applications (e.g. associated with communication channels between software threads on the CPU and the user cores on the FPGA: Fig. 3; Fig. 7-8; Section 2 on pages 22:3 to 22:4), and the host computer being configured to run the applications (e.g. associated with user application in Fig. 3), the method being performed by the FPGA, the method comprising: communicating, over a PCIe interface provided between the FPGA and the host computer (e.g. associated with PCI Express Link on Fig. 3), data between the applications and the partitions of resources (e.g. associated with architecture running in both upstream and downstream directions: Section 3 on page 22:9), wherein the communicating comprises: receiving concatenated data from two or more of the applications into a single fixed-size PCIe data transaction (e.g. associated with simultaneously accessing the cores by the software threads on the CPU via a single cycle over 128-bit interface that contains different PCIe transactions: Section 2 on page 22:3 and Section 3.1 on pages 22:9 to 22:10); and receiving the single fixed-size PCIe data transaction over the PCIe interface in a data transfer cycle (e.g.
associated with communicating data from two different PCIe transaction via a single cycle on 128-bit interface: Section 3.1 on pages 22:9 to 22:10); wherein each application share of bandwidth resources of the PCIe interface, and bandwidth resources of the PCIe interface are distributed between the applications (Fig. 3; Fig. 7-8; Section 2 on pages 22:3 to 22:4; Section 2.2 on pages 22:6 to 22:7; Section 3 to 3.1.3 on pages 22:9 to 22:11; and Section 3.2 to 3.2.2 on pages 22:15 to 22:18). Jacobsen does not teach the method comprising being allocated its own configured, and all bandwidth resources of the PCIe interface are operating according to all the configured shares of bandwidth resources when the data is communicated. Zhao teaches/suggests a method comprising: having corresponding location (e.g. associated with lane buffer (123) in Fig. 2); being allocated its own configured (e.g. associated with dynamic allocation of bandwidth for accelerator: [0044]), and all bandwidth resources of the PCIe interface are operating according to all the configured shares of bandwidth resources when the data is communicated (e.g. associated with bandwidth in root-interconnect communication channel (121) being equally shared by endpoints: [0022]-[0023]; and [0031]) (Fig. 2-3; [0019]-[0033]; [0038]; [0044]; and [0051]-[0055]). It would have been obvious for one of ordinary skill in this art, before the effective filing date of the claimed invention, to include Zhao’s dynamic bandwidth allocation into Jacobsen’s method for the benefit of providing quality of service support on memory communication for accelerators (Zhao, [0025]) to obtain the invention as specified in claim 11. As per claim 13, Jacobsen and Zhao teach/suggest all the claimed features of claim 11 above, where Jacobsen and Zhao further teach/suggest the method comprising: wherein one fixed-size PCIe data transaction is communicated per each data transfer cycle (e.g. 
associated with a particular/fixed format of data used when communicating in accordance with the PCIe protocol from one cycle to the next cycle: Jacobsen, Section 3.1 on pages 22:9 to 22:10) (Jacobsen, Fig. 3; Fig. 7-8; Section 2 on pages 22:3 to 22:4; Section 2.2 on pages 22:6 to 22:7; Section 3 to 3.1.3 on pages 22:9 to 22:11; Section 3.2.1 to 3.2.2 on pages 22:16 to 22:18; and Zhao, Fig. 2-3; Fig. 6; [0019]-[0033]; [0038]; [0044]; [0051]-[0055]). As per claim 14, Jacobsen and Zhao teach/suggest all the claimed features of claim 11 above, where Jacobsen and Zhao further teach/suggest the method comprising: wherein all bandwidth resources of the PCIe interface, per data transfer cycle, collectively define the fixed-size PCIe data transaction, and wherein each configured share of bandwidth resources corresponds to read/write offsets within the fixed-size PCIe data transaction (Jacobsen, Fig. 3; Fig. 7-8; Section 2 to 2.1.1 on pages 22:3 to 22:5; Section 2.2 on pages 22:6 to 22:7; Section 3 to 3.1.3 on pages 22:9 to 22:11; Section 3.2 to 3.2.2 on pages 22:15 to 22:18; and Zhao, Fig. 2-3; Fig. 6; [0019]-[0033]; [0038]; [0044]; [0051]-[0055]). As per claim 15, Jacobsen and Zhao teach/suggest all the claimed features of claim 14 above, where Jacobsen and Zhao further teach/suggest the method comprising: wherein the read/write offsets are communicated to the FPGA from the host computer and written by the FPGA in a register file (Jacobsen, Fig. 3; Fig. 7-8; Section 2 to 2.1.1 on pages 22:3 to 22:5; Section 2.2 on pages 22:6 to 22:7; Section 3 to 3.1.3 on pages 22:9 to 22:11; Section 3.2 to 3.2.2 on pages 22:15 to 22:18; and Zhao, Fig. 2-3; Fig. 6; [0019]-[0033]; [0038]; [0044]; [0051]-[0055]).
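On the FPGA side, claims 15-16 describe distributing a received fixed-size transaction to partitions according to write offsets held in a register file. A minimal sketch of that slicing step, assuming a dict-based register file of [start, end) offsets (names are illustrative, not from the application):

```python
# Illustrative only: slicing a received fixed-size transaction into
# per-partition chunks using [start, end) offsets from a register file,
# as in the distribution step of claims 15-16.
def distribute(tx: bytes, regfile: dict) -> dict:
    """Return each partition's chunk of the transaction by its offsets."""
    return {part: tx[start:end] for part, (start, end) in regfile.items()}

regfile = {"part_a": (0, 64), "part_b": (64, 96)}
chunks = distribute(b"A" * 64 + b"B" * 32, regfile)
assert chunks["part_a"] == b"A" * 64
assert chunks["part_b"] == b"B" * 32
```

Because the offsets mirror the shares the host used when concatenating, each partition recovers exactly the bytes its application wrote, with no per-cycle negotiation.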
As per claim 16, Jacobsen and Zhao teach/suggest all the claimed features of claim 15 above, where Jacobsen and Zhao further teach/suggest the method comprising: wherein, for data communicated from the host computer to the FPGA, the data is distributed to the partitions according to the write offsets in the register file (Jacobsen, Fig. 3; Fig. 7-8; Section 2 to 2.1.1 on pages 22:3 to 22:5; Section 2.2 on pages 22:6 to 22:7; Section 3 to 3.1.3 on pages 22:9 to 22:11; Section 3.2 to 3.2.2 on pages 22:15 to 22:18; and Zhao, Fig. 2-3; Fig. 6; [0019]-[0033]; [0038]; [0044]; [0051]-[0055]), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features as data is communicated over the PCIe interface. As per claim 17, Jacobsen and Zhao teach/suggest all the claimed features of claim 11 above, where Jacobsen and Zhao further teach/suggest the method comprising: wherein the FPGA comprises a double buffer, and wherein the data is reordered in a double buffer (Jacobsen, Fig. 3; Fig. 7-8; Section 2 to 2.1.1 on pages 22:3 to 22:5; Section 2.2 on pages 22:6 to 22:7; Section 3 to 3.1.3 on pages 22:9 to 22:12; Section 3.2 to 3.2.2 on pages 22:15 to 22:18; and Zhao, Fig. 2-3; Fig. 6; [0019]-[0033]; [0038]; [0044]; [0051]-[0055]), wherein it would have been an obvious design choice to one of ordinary skill in the art to further implement the above claimed features as data is communicated over the PCIe interface. As per claim 18, Jacobsen and Zhao teach/suggest all the claimed features of claim 14 above, where Jacobsen and Zhao further teach/suggest the method comprising: wherein, for data communicated from the host computer to the FPGA, the data is reordered according to the write offsets in the register file before being distributed to the partitions (Jacobsen, Fig.
7-8; Section 2 to 2.1.1 on pages 22:3 to 22:5; Section 2.2 on pages 22:6 to 22:7; Section 3 to 3.1.3 on pages 22:9 to 22:11; Section 3.2 to 3.2.2 on pages 22:15 to 22:18; and Zhao, Fig. 2-3; Fig. 6; [0019]-[0033]; [0038]; [0044]; [0051]-[0055]), wherein it would have been an obvious design choice to one of ordinary skill in the art to further implement the above claimed features as data is communicated over the PCIe interface. As per claim 19, Jacobsen and Zhao teach/suggest all the claimed features of claim 14 above, where Jacobsen and Zhao further teach/suggest the method comprising: wherein, for data communicated from the FPGA to the host computer, the data is reordered according to the read offsets in the register file before being communicated from the FPGA to the host computer (Jacobsen, Fig. 3; Fig. 7-8; Section 2 to 2.1.1 on pages 22:3 to 22:5; Section 2.2 on pages 22:6 to 22:7; Section 3 to 3.1.3 on pages 22:9 to 22:11; Section 3.2 to 3.2.2 on pages 22:15 to 22:18; and Zhao, Fig. 2-3; Fig. 6; [0019]-[0033]; [0038]; [0044]; [0051]-[0055]), wherein it would have been an obvious design choice to one of ordinary skill in the art to further implement the above claimed features as data is communicated over the PCIe interface. As per claim 20, Jacobsen and Zhao teach/suggest all the claimed features of claim 11 above, where Jacobsen and Zhao further teach/suggest the method comprising: wherein the bandwidth resources of the PCIe interface are given in units of 32 bytes per data transfer cycle, and wherein each configured share of bandwidth resources of the PCIe interface is given as a multiple of 32 bytes (Jacobsen, Fig. 3; Fig. 7-8; Section 2 to 2.1.1 on pages 22:3 to 22:5; Section 2.2 on pages 22:6 to 22:7; Section 3 to 3.1.3 on pages 22:9 to 22:11; Section 3.2 to 3.2.2 on pages 22:15 to 22:18; and Zhao, Fig. 2-3; Fig.
6; [0019]-[0033]; [0038]; [0044]; [0051]-[0055]), wherein it would have been an obvious design choice to one of ordinary skill in the art to further implement the above claimed features as data is communicated over the PCIe interface. As per claim 25, claim 25 is rejected in accordance with the same rationale and reasoning as the above rejection of claim 11.

II. CLOSING COMMENTS

CONCLUSION

STATUS OF CLAIMS IN THE APPLICATION

The following is a summary of the treatment and status of all claims in the application as recommended by M.P.E.P. 707.07(i):

CLAIMS REJECTED IN THE APPLICATION

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

DIRECTION OF FUTURE CORRESPONDENCE

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHUN KUAN LEE whose telephone number is (571) 272-0671. The examiner can normally be reached Monday-Friday.

IMPORTANT NOTE

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Idriss Alrobaye, can be reached on (571) 270-1023.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHUN KUAN LEE/
Primary Examiner, Art Unit 2181
March 25, 2026

Prosecution Timeline

Aug 09, 2022
Application Filed
Mar 18, 2024
Non-Final Rejection — §103
Jun 21, 2024
Response Filed
Aug 27, 2024
Final Rejection — §103
Oct 30, 2024
Response after Non-Final Action
Dec 02, 2024
Interview Requested
Dec 02, 2024
Request for Continued Examination
Dec 09, 2024
Response after Non-Final Action
Feb 03, 2025
Non-Final Rejection — §103
May 06, 2025
Response Filed
May 29, 2025
Final Rejection — §103
Aug 04, 2025
Response after Non-Final Action
Sep 02, 2025
Request for Continued Examination
Sep 08, 2025
Response after Non-Final Action
Sep 15, 2025
Non-Final Rejection — §103
Nov 17, 2025
Interview Requested
Jan 15, 2026
Applicant Interview (Telephonic)
Jan 20, 2026
Response Filed
Jan 24, 2026
Examiner Interview Summary
Mar 25, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602270
KV-CACHE STREAMING FOR IMPROVED PERFORMANCE AND FAULT TOLERANCE IN GENERATIVE MODEL SERVING
2y 5m to grant Granted Apr 14, 2026
Patent 12596659
METHODS, DEVICES AND SYSTEMS FOR HIGH SPEED TRANSACTIONS WITH NONVOLATILE MEMORY ON A DOUBLE DATA RATE MEMORY BUS
2y 5m to grant Granted Apr 07, 2026
Patent 12579080
OUTPUT METHOD AND DEVICE
2y 5m to grant Granted Mar 17, 2026
Patent 12579089
DATA PROCESSING METHOD, APPARATUS AND SYSTEM BASED ON PARA-VIRTUALIZATION DEVICE
2y 5m to grant Granted Mar 17, 2026
Patent 12554540
EVENT PROCESSING BY HARDWARE ACCELERATOR
2y 5m to grant Granted Feb 17, 2026
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 68%
With Interview: 71% (+3.1%)
Median Time to Grant: 3y 4m
PTA Risk: High
Based on 669 resolved cases by this examiner. Grant probability derived from career allow rate.
