Prosecution Insights
Last updated: April 19, 2026
Application No. 18/408,642

APPARATUS AND METHOD FOR CCIX INTERFACE BASED ON USE OF QoS FIELD

Status: Final Rejection — §103
Filed: Jan 10, 2024
Examiner: LEE, CHUN KUAN
Art Unit: 2181
Tech Center: 2100 — Computer Architecture & Software
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
OA Round: 2 (Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
With Interview: 71%

Examiner Intelligence

Career Allow Rate: 68% (455 granted / 669 resolved) — above average, +13.0% vs TC avg
Interview Lift: +3.1% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 3y 4m (typical timeline)
Career History: 701 total applications across all art units; 32 currently pending

Statute-Specific Performance

§101: 1.7% (-38.3% vs TC avg)
§103: 79.4% (+39.4% vs TC avg)
§102: 3.3% (-36.7% vs TC avg)
§112: 3.5% (-36.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 669 resolved cases
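The per-statute deltas are internally consistent: each statute's rate minus its stated delta recovers the same Tech Center baseline, which suggests the average estimate is a flat 40% across all four statutes. A quick check (the pairing of rates to deltas is taken from the list above; the 40% baseline is an inference from the arithmetic, not a published figure):

```python
# Examiner's per-statute rate (%) paired with its "vs TC avg" delta (%),
# as listed in the Statute-Specific Performance section.
stats = {
    "101": (1.7, -38.3),
    "103": (79.4, +39.4),
    "102": (3.3, -36.7),
    "112": (3.5, -36.5),
}

# Implied TC average = examiner rate - delta; every statute yields 40.0.
implied = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```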

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

RESPONSE TO ARGUMENTS

Applicant's arguments filed 7/23/2025 have been fully considered but they are not persuasive. Applicant argues, with regard to independent claims 1, 9, and 20 rejected under 35 U.S.C. 103(a), that the combination of the references does not teach/suggest the claimed feature "… a priority of a CCIX protocol message is preset based on a type of a command using a QoS field of a CCIX interface format, and the CCIX protocol message is transmitted to or received from the computational accelerator through the at least one CCIX port based on the preset priority …" because none of the cited references teach/suggest the above claimed feature. The examiner respectfully disagrees. To further clarify: by combining Dastidar's operating with the computational accelerator (col. 3, ll. 18-33; and col. 4, ll. 40-50) and Ng's priority of a message preset based on a type of a command using a QoS field, with operation based on the preset priority (Fig. 6-7; and [0026]-[0031]), with YANG's CCIX protocol message based on a message of a CCIX interface format, where the CCIX protocol message is transmitted to or received from the computational element through the at least one CCIX port based on the CCIX interface format (e.g. associated with communication between host computing device (102) and slave computing device (104) via CCIX data bus (103): Fig. 1; [0021]-[0022]), the resulting combination of the references would further teach/suggest the above claimed features.

I. REJECTIONS BASED ON PRIOR ART

Claim Rejections — 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over YANG et al. (US Pub.: 2022/0121383) in view of Dastidar et al. (US Patent 11,074,208) and Ng et al. (US Pub.: 2020/0192842).

As per claim 1, YANG teaches/suggests an apparatus for a cache coherent interconnect for accelerators (CCIX) interface comprising: a host processor operating as part of a CCIX protocol (e.g. associated with host processor (106) communicating via CCIX data bus (103): Fig. 1; [0021]-[0022]); and at least one CCIX port (e.g. associated with port on computing device (102) for CCIX data bus (103): Fig. 1; [0021]-[0022]) for an interface with at least one computational element operating as part of the CCIX protocol (e.g. associated with slave computing device (104) communicating with host computing device (102) via CCIX data bus (103): Fig. 1; [0021]-[0022]), wherein a CCIX protocol message is based on a message of a CCIX interface format, and the CCIX protocol message is transmitted to or received from the computational element through the at least one CCIX port based on the CCIX interface format (e.g. associated with communication between host computing device (102) and slave computing device (104) via CCIX data bus (103): Fig. 1; [0021]-[0022]).

YANG does not teach the apparatus comprising: operating as a Home Agent (HA); and a computational accelerator operating as a Request Agent (RA), having a priority of a message preset based on a type of a command using a QoS field, and operating with the computational accelerator based on the preset priority.

Dastidar teaches/suggests an apparatus comprising: operating as a Home Agent (HA); and a computational accelerator operating accordingly, operating with the computational accelerator (col. 3, ll. 18-33; and col. 4, ll. 40-50).

Ng teaches/suggests an apparatus comprising: operating as a Request Agent (RA) ([0023]), having a priority of a message preset based on a type of a command using a QoS field, and operating based on the preset priority (Fig. 6-7; and [0026]-[0031]).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include Dastidar's HA architecture and Ng's CCIX architecture into YANG's apparatus for the benefit of improving overall performance of the system via a dynamic mapping and remapping scheme (Dastidar, col. 3, ll. 34-47) and providing more efficient communication overhead (Ng, [0035]) to obtain the invention as specified in claim 1.

As per claim 2, YANG, Dastidar, and Ng teach/suggest all the claimed features of claim 1 above, where YANG, Dastidar, and Ng further teach/suggest the apparatus comprising wherein the priority for QoS is in an order of a 'Dataless' command, an 'Atomics' command, a 'Reads' command, and a 'Writes' command (YANG, Fig. 1; [0021]-[0022]; Dastidar, col. 3, ll. 18-33; col. 4, ll. 40-50; and Ng, Fig. 6-7; [0023]; [0026]-[0031]), wherein it would have been an obvious design choice to one of ordinary skill in the art to prioritize different commands accordingly (de Varies et al. (US Pub.: 2024/0143224): [0053]).

As per claim 3, YANG, Dastidar, and Ng teach/suggest all the claimed features of claim 2 above, where YANG, Dastidar, and Ng further teach/suggest the apparatus comprising wherein the priority for QoS is in an order of transfer from the home agent to the request agent and transfer from the request agent to the home agent (YANG, Fig. 1; [0021]-[0022]; Dastidar, col. 3, ll. 18-33; col. 4, ll. 40-50; and Ng, Fig. 6-7; [0023]; [0026]-[0031]), wherein it is obvious and/or well-known that QoS provides an order of transfer between processing elements.

As per claim 4, YANG, Dastidar, and Ng teach/suggest all the claimed features of claim 1 above, where YANG, Dastidar, and Ng further teach/suggest the apparatus comprising wherein: the at least one CCIX port and a CCIX port of the at least one computational accelerator corresponding thereto form two virtual channels therebetween, and the two virtual channels include a first channel for exchanging data through Direct Memory Access (DMA) and a second channel for exchanging the CCIX protocol messages (YANG, Fig. 1; [0021]-[0022]; Dastidar, col. 3, ll. 18-33; col. 4, ll. 40-50; and Ng, Fig. 6-7; [0023]; [0026]-[0031]), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features of virtual channels when conforming to the CCIX protocol (CCIX Base Specification Revision 1.0a Version 1.0 for Evaluation: Section 4.2.1 on pages 122-123) and the use of DMA for transferring data with memory.
As per claim 5, YANG, Dastidar, and Ng teach/suggest all the claimed features of claim 1 above, where YANG, Dastidar, and Ng further teach/suggest the apparatus comprising wherein the host processor includes a System Level Cache (SLC) used as a cache of internal cores, performs synchronization of the SLC, and includes at least one CCIX port corresponding to each of the at least one computational accelerator (YANG, Fig. 1; [0021]-[0022]; Dastidar, col. 3, ll. 18-33; col. 4, ll. 40-50; and Ng, Fig. 6-7; [0023]; [0026]-[0031]), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features when utilizing the CCIX protocol.

As per claim 6, YANG, Dastidar, and Ng teach/suggest all the claimed features of claim 1 above, where YANG, Dastidar, and Ng further teach/suggest the apparatus comprising wherein the at least one computational accelerator includes: shared memory that is a cache for sharing data with the host processor, an address translation service block for determining a physical memory address of the shared memory by receiving a virtual memory address from the host processor through the CCIX port and for writing data to the shared memory using the determined physical memory address, and a hardware accelerator for processing required data by accessing a location of the shared memory at which writing is completed and for writing a processing result value at a preset address location, and the processing result value is transferred to the host processor through a Message Signaled Interrupt (MSI) message (YANG, Fig. 1; [0021]-[0022]; Dastidar, col. 3, ll. 18-33; col. 4, ll. 40-50; and Ng, Fig. 6-7; [0023]; [0026]-[0031]), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features when utilizing the CCIX protocol.
As per claim 7, YANG, Dastidar, and Ng teach/suggest all the claimed features of claim 6 above, where YANG, Dastidar, and Ng further teach/suggest the apparatus comprising wherein the at least one computational accelerator operates with a cache hierarchy including an L1 cache and the shared memory, which is an L2 cache, rather than a single piece of shared memory (YANG, Fig. 1; [0021]-[0022]; Dastidar, col. 3, ll. 18-33; col. 4, ll. 40-50; and Ng, Fig. 6-7; [0023]; [0026]-[0031]), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features when utilizing the CCIX protocol.

As per claim 8, YANG, Dastidar, and Ng teach/suggest all the claimed features of claim 6 above, where YANG, Dastidar, and Ng further teach/suggest the apparatus comprising wherein the host processor generates the virtual memory address and sends the virtual memory address through the CCIX port in order to access the shared memory of the at least one computational accelerator, and updates a system level cache by reading the processing result value from the preset address location of the shared memory when the MSI message including the processing result value is received (YANG, Fig. 1; [0021]-[0022]; Dastidar, col. 3, ll. 18-33; col. 4, ll. 40-50; and Ng, Fig. 6-7; [0023]; [0026]-[0031]), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features when utilizing the CCIX protocol.

As per claim 20, YANG teaches/suggests a method for a cache coherent interconnect for accelerators (CCIX) interface comprising: having a CCIX protocol message based on a CCIX interface format (e.g. associated with communication between host computing device (102) and slave computing device (104) via CCIX data bus (103)); and transmitting or receiving the CCIX protocol message between a host processor (e.g. associated with Fig. 1, ref. 106) of a CCIX protocol and another device (e.g. associated with Fig. 1, ref. 104) connected with the host processor through a CCIX port (e.g. associated with communication between host computing device (102) and slave computing device (104) via CCIX data bus (103)) (Fig. 1; and [0021]-[0022]).

YANG does not teach the method comprising: determining a priority of a message based on a type of a command using a QoS field of the format; and operating based on the determined priority when operating as a Home Agent (HA).

Dastidar teaches/suggests a method comprising: operating as a Home Agent (HA) (col. 3, ll. 18-33; and col. 4, ll. 40-50).

Ng teaches/suggests a method comprising: determining a priority of a message based on a type of a command using a QoS field of the format; and operating based on the determined priority when operating (Fig. 6-7; and [0026]-[0031]).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include Dastidar's HA architecture and Ng's CCIX architecture into YANG's apparatus for the benefit of improving overall performance of the system via a dynamic mapping and remapping scheme (Dastidar, col. 3, ll. 34-47) and providing more efficient communication overhead (Ng, [0035]) to obtain the invention as specified in claim 20.

Claims 9-19 are rejected under 35 U.S.C. 103 as being unpatentable over YANG et al. (US Pub.: 2022/0121383) in view of Dastidar et al. (US Patent 11,074,208), Ng et al. (US Pub.: 2020/0192842), and Wilt et al. (US Patent 9,542,192).

As per claim 9, YANG teaches/suggests an apparatus for a cache coherent interconnect for accelerators (CCIX) interface comprising: a host processor operating as part of a CCIX protocol (e.g. associated with host processor (106) communicating via CCIX data bus (103): Fig. 1; [0021]-[0022]); and a CCIX port (e.g. associated with port on computing device (102) for CCIX data bus (103): Fig. 1; [0021]-[0022]) for an interface with a slave processor operating as part of the CCIX protocol (e.g. associated with slave processor (114) in slave computing device (104) communicating with host computing device (102) via CCIX data bus (103): Fig. 1; [0021]-[0022]), wherein: the host processor and the slave processor are configured to operate accordingly, the apparatus further includes an additional CCIX port for an interface with part of the CCIX protocol (e.g. associated with the host computing device interfaced to more slave computing devices: [0021]), and a CCIX protocol message is based on a message of a CCIX interface format, and the CCIX protocol message is transmitted to or received from a module through the at least one CCIX port based on the CCIX interface format (e.g. associated with communication between host computing device (102) and slave computing device (104) via CCIX data bus (103): Fig. 1; and [0021]-[0022]).

YANG does not teach the apparatus for a cache coherent interconnect for accelerators (CCIX) interface comprising: operating as a Home Agent (HA); operating as a Slave Agent (SA); being configured as Symmetric Multiple Processors (SMP); a computational accelerator to which a workload of the SMP is offloaded by operating as a Request Agent (RA); and a priority of a message preset based on a type of a command using a QoS field of the format, with operation with the computational accelerator based on the preset priority.

Dastidar teaches/suggests an apparatus comprising: operating as a Home Agent (HA); operating as a Slave Agent (SA); and a computational accelerator to operate accordingly, operating with the computational accelerator (col. 3, ll. 18-33; and col. 4, ll. 40-50).

Ng teaches/suggests an apparatus comprising: operating as a Request Agent (RA) ([0023]), and a priority of a message preset based on a type of a command using a QoS field of the format, with operation based on the preset priority (Fig. 6-7; and [0026]-[0031]).

Wilt teaches/suggests an apparatus comprising: being configured as Symmetric Multiple Processors (SMP), to which a workload of the SMP is offloaded by operating accordingly (col. 8, l. 58 to col. 9, l. 5; and col. 10, ll. 18-29).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include Dastidar's HA architecture, Ng's CCIX architecture, and Wilt's offloading into YANG's apparatus for the benefit of improving overall performance of the system via a dynamic mapping and remapping scheme (Dastidar, col. 3, ll. 34-47), providing more efficient communication overhead (Ng, [0035]), and decreasing the amount of time needed to finish execution (Wilt, col. 10, ll. 9-12) to obtain the invention as specified in claim 9.

As per claim 10, YANG, Dastidar, Ng, and Wilt teach/suggest all the claimed features of claim 9 above, where YANG, Dastidar, Ng, and Wilt further teach/suggest the apparatus comprising wherein the priority for QoS is in an order of a 'Dataless' command, an 'Atomics' command, a 'Reads' command, and a 'Writes' command (YANG, Fig. 1; [0021]-[0022]; Dastidar, col. 3, ll. 18-33; col. 4, ll. 40-50; Ng, Fig. 6-7; [0023]; [0026]-[0031]; Wilt, col. 8, l. 58 to col. 9, l. 5; col. 10, ll. 18-29), wherein it would have been an obvious design choice to one of ordinary skill in the art to prioritize different commands accordingly (de Varies et al. (US Pub.: 2024/0143224): [0053]).

As per claim 11, YANG, Dastidar, Ng, and Wilt teach/suggest all the claimed features of claim 10 above, where YANG, Dastidar, Ng, and Wilt further teach/suggest the apparatus comprising wherein the priority for QoS is in an order of transfer from the home agent to the slave agent and transfer from the slave agent to the home agent (YANG, Fig. 1; [0021]-[0022]; Dastidar, col. 3, ll. 18-33; col. 4, ll. 40-50; Ng, Fig. 6-7; [0023]; [0026]-[0031]; Wilt, col. 8, l. 58 to col. 9, l. 5; col. 10, ll. 18-29), wherein it is obvious and/or well-known that QoS provides an order of transfer between processing elements.

As per claim 12, YANG, Dastidar, Ng, and Wilt teach/suggest all the claimed features of claim 9 above, where YANG, Dastidar, Ng, and Wilt further teach/suggest the apparatus comprising wherein: the CCIX port for the interface with the slave processor and a CCIX port of the slave processor form two virtual channels therebetween, the additional CCIX port for the interface with the computational accelerator and a CCIX port of the computational accelerator form two virtual channels therebetween, and the two virtual channels include a first channel for exchanging data through Direct Memory Access (DMA) and a second channel for exchanging the CCIX protocol messages (YANG, Fig. 1; [0021]-[0022]; Dastidar, col. 3, ll. 18-33; col. 4, ll. 40-50; Ng, Fig. 6-7; [0023]; [0026]-[0031]; Wilt, col. 8, l. 58 to col. 9, l. 5; col. 10, ll. 18-29), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features of virtual channels when conforming to the CCIX protocol (CCIX Base Specification Revision 1.0a Version 1.0 for Evaluation: Section 4.2.1 on pages 122-123) and the use of DMA for transferring data with memory.

As per claim 13, YANG, Dastidar, Ng, and Wilt teach/suggest all the claimed features of claim 9 above, where YANG, Dastidar, Ng, and Wilt further teach/suggest the apparatus comprising wherein the host processor includes: local memory, a System Level Cache (SLC) shared by cores of the host processor, a CCIX0 port for a CCIX connection with the slave processor, and a CCIX1 port for a CCIX connection with the computational accelerator (YANG, Fig. 1; [0021]-[0022]; Dastidar, col. 3, ll. 18-33; col. 4, ll. 40-50; Ng, Fig. 6-7; [0023]; [0026]-[0031]; Wilt, col. 8, l. 58 to col. 9, l. 5; col. 10, ll. 18-29), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features when utilizing the CCIX protocol.

As per claim 14, YANG, Dastidar, Ng, and Wilt teach/suggest all the claimed features of claim 9 above, where YANG, Dastidar, Ng, and Wilt further teach/suggest the apparatus comprising wherein the slave processor includes: local memory, a system level cache shared by cores of the slave processor, a CCIX0 port for a CCIX connection with the host processor, and a CCIX1 port for a CCIX connection with the computational accelerator (YANG, Fig. 1; [0021]-[0022]; Dastidar, col. 3, ll. 18-33; col. 4, ll. 40-50; Ng, Fig. 6-7; [0023]; [0026]-[0031]; Wilt, col. 8, l. 58 to col. 9, l. 5; col. 10, ll. 18-29), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features when utilizing the CCIX protocol.

As per claim 15, YANG, Dastidar, Ng, and Wilt teach/suggest all the claimed features of claim 9 above, where YANG, Dastidar, Ng, and Wilt further teach/suggest the apparatus comprising wherein the computational accelerator includes: a CCIX port for a CCIX connection with the host processor, an additional CCIX port for a CCIX connection with the slave processor, shared memory that is an L2 cache for sharing data with the host processor and the slave processor, an L1 cache, an address translation service block for determining a physical memory address of the shared memory by receiving a virtual memory address from the host processor or the slave processor through the CCIX port or the additional CCIX port and for writing data to the shared memory using the determined physical memory address, and a hardware accelerator for processing required data by accessing a location of the shared memory at which writing is completed and for writing a processing result value at a preset address location (YANG, Fig. 1; [0021]-[0022]; Dastidar, col. 3, ll. 18-33; col. 4, ll. 40-50; Ng, Fig. 6-7; [0023]; [0026]-[0031]; Wilt, col. 8, l. 58 to col. 9, l. 5; col. 10, ll. 18-29).

As per claim 16, YANG, Dastidar, Ng, and Wilt teach/suggest all the claimed features of claim 9 above, where YANG, Dastidar, Ng, and Wilt further teach/suggest the apparatus comprising wherein the host processor and the slave processor maintain a cache coherency state between system level caches by being connected through the CCIX port (YANG, Fig. 1; [0021]-[0022]; Dastidar, col. 3, ll. 18-33; col. 4, ll. 40-50; Ng, Fig. 6-7; [0023]; [0026]-[0031]; Wilt, col. 8, l. 58 to col. 9, l. 5; col. 10, ll. 18-29), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features when utilizing the CCIX protocol.

As per claim 17, YANG, Dastidar, Ng, and Wilt teach/suggest all the claimed features of claim 13 above, where YANG, Dastidar, Ng, and Wilt further teach/suggest the apparatus comprising wherein the host processor has all system memory maps, including a memory map thereof, a memory map of the slave processor, and a memory map of the computational accelerator (YANG, Fig. 1; [0021]-[0022]; Dastidar, col. 3, ll. 18-33; col. 4, ll. 40-50; Ng, Fig. 6-7; [0023]; [0026]-[0031]; Wilt, col. 8, l. 58 to col. 9, l. 5; col. 10, ll. 18-29), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features when utilizing the CCIX protocol.

As per claim 18, YANG, Dastidar, Ng, and Wilt teach/suggest all the claimed features of claim 17 above, where YANG, Dastidar, Ng, and Wilt further teach/suggest the apparatus comprising wherein the slave processor performs operation within a memory map assigned by the host processor and notifies the host processor of a change in a memory value that is made as a result of a memory request/response generated within an address range thereof (YANG, Fig. 1; [0021]-[0022]; Dastidar, col. 3, ll. 18-33; col. 4, ll. 40-50; Ng, Fig. 6-7; [0023]; [0026]-[0031]; Wilt, col. 8, l. 58 to col. 9, l. 5; col. 10, ll. 18-29), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features when utilizing the CCIX protocol.

As per claim 19, YANG, Dastidar, Ng, and Wilt teach/suggest all the claimed features of claim 18 above, where YANG, Dastidar, Ng, and Wilt further teach/suggest the apparatus comprising wherein, in response to notification of the change in the memory value from the slave processor, the host processor generates a cache snooping packet, thereby automatically updating the system level cache thereof, a system level cache of the slave processor, and shared memory of the computational accelerator when it is necessary to update cache values due to the change in the memory value (YANG, Fig. 1; [0021]-[0022]; Dastidar, col. 3, ll. 18-33; col. 4, ll. 40-50; Ng, Fig. 6-7; [0023]; [0026]-[0031]; Wilt, col. 8, l. 58 to col. 9, l. 5; col. 10, ll. 18-29), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features when utilizing the CCIX protocol.

II. PERTINENT RELATED PRIOR ART

Shin et al. (US Pub.: 2021/0152180): discloses that data packets communicated over cache coherent interconnect for accelerators (CCIX) interfaces include Quality-of-Service (QoS) information that allows the transmission of data packets to be prioritized.

III. CLOSING COMMENTS

CONCLUSION

STATUS OF CLAIMS IN THE APPLICATION

The following is a summary of the treatment and status of all claims in the application as recommended by M.P.E.P. 707.07(i):

CLAIMS REJECTED IN THE APPLICATION

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

DIRECTION OF FUTURE CORRESPONDENCE

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHUN KUAN LEE, whose telephone number is (571) 272-0671. The examiner can normally be reached Monday-Friday.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Idriss Alrobaye, can be reached at (571) 270-1023. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHUN KUAN LEE/
Primary Examiner, Art Unit 2181
August 19, 2025

Prosecution Timeline

Jan 10, 2024: Application Filed
Apr 19, 2025: Non-Final Rejection — §103
Jul 23, 2025: Response Filed
Aug 20, 2025: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602270: KV-CACHE STREAMING FOR IMPROVED PERFORMANCE AND FAULT TOLERANCE IN GENERATIVE MODEL SERVING
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12596659: METHODS, DEVICES AND SYSTEMS FOR HIGH SPEED TRANSACTIONS WITH NONVOLATILE MEMORY ON A DOUBLE DATA RATE MEMORY BUS
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12579080: OUTPUT METHOD AND DEVICE
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12579089: DATA PROCESSING METHOD, APPARATUS AND SYSTEM BASED ON PARA-VIRTUALIZATION DEVICE
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12554540: EVENT PROCESSING BY HARDWARE ACCELERATOR
Granted Feb 17, 2026 (2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 71% (+3.1%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate
Based on 669 resolved cases by this examiner. Grant probability derived from career allow rate.
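The headline projections are simple derivations from the examiner's career record quoted above; a minimal sketch of the arithmetic (the rounding convention is an assumption):

```python
granted, resolved = 455, 669  # career record from the examiner profile above

grant_probability = granted / resolved * 100      # career allow rate, in percent
interview_lift = 3.1                              # observed lift from interviews, in percent

print(round(grant_probability))                   # 68
print(round(grant_probability + interview_lift))  # 71
```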
