Prosecution Insights
Last updated: April 19, 2026
Application No. 18/639,698

MULTI-PLANE, MULTI-PROTOCOL MEMORY SWITCH FABRIC WITH CONFIGURABLE TRANSPORT

Final Rejection §DP
Filed: Apr 18, 2024
Examiner: ZAMAN, FAISAL M
Art Unit: 2175
Tech Center: 2100 — Computer Architecture & Software
Assignee: Enfabrica Corporation
OA Round: 2 (Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
Grant Probability With Interview: 81%

Examiner Intelligence

Career Allow Rate: 67% (614 granted / 917 resolved; +12.0% vs Tech Center average; above average)
Interview Lift: +14.3% among resolved cases with an interview (moderate lift)
Typical Timeline: 2y 10m average prosecution; 43 applications currently pending
Career History: 960 total applications across all art units

Statute-Specific Performance

§101: 1.9% (-38.1% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§103: 63.4% (+23.4% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 917 resolved cases.

Office Action

§DP
DETAILED ACTION

Response to Amendment

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Applicant is advised that this Double Patenting rejection will not be held in abeyance. See MPEP § 804(I)(B)(1); 37 CFR § 1.111(b).

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-11, 13-15, 17, and 18 of U.S. Patent No. 11,995,017. Although the claims at issue are not identical, they are not patentably distinct from each other because all of the features of the instant claims can be found in the conflicting claims, and thus are anticipated by those claims. The claim chart below pairs each instant claim with the corresponding claims of U.S. Patent No. 11,995,017 (the ’017 patent).

Instant claim 1:
A memory switch comprising: a first plurality of switch ports configured to be connected to one or more root complex (RC) devices; a second plurality of switch ports configured to be connected to a set of endpoints; and a bulk data transfer engine configured to facilitate data-exchange between a pair of endpoints in the set of endpoints, wherein to facilitate the data-exchange the bulk data transfer engine is configured to: identify a destination media access control (MAC) address for the pair of endpoints; in response to the destination MAC address being within a range of forwarding addresses, transmit a plurality of data packets for the pair of endpoints, to the destination MAC address via peer-to-peer bulk data transfer (fabQ) communication; and in response to the destination MAC address being outside the range of forwarding addresses, transmit the plurality of data packets for the pair of endpoints to a network port.

’017 claim 1: A multi-plane, multi-protocol memory switch comprising: a plurality of switch ports, the memory switch connectable to one or more root complex (RC) devices through one or more respective switch ports of the plurality of switch ports, and the memory switch connectable to a set of endpoints through a set of other switch ports of the plurality of switch ports, wherein the set includes zero or multiple endpoints; a cacheline exchange engine configured to provide a data-exchange path between two endpoints and to map an address space of one endpoint to an address space of another endpoint, wherein the cacheline exchange engine is configured to: receive a data payload from a source device; write the data payload in a first address space of the source device; map the first address space into a second address space of a destination device, wherein mapping is performed through internal endpoint memory space; and write the data payload in the second address space of the destination device; and a bulk data transfer engine configured to facilitate
data-exchange between two endpoints as a source-destination data stream, one endpoint being designated a source address and another endpoint being designated a destination address, wherein facilitating the data-exchange includes bulk data transfer based on a comparison between a forwarding packet header generated based on the source and destination addresses for the data stream and a range of forwarding addresses reserved for the memory switch.

’017 claim 2: The memory switch of claim 1, wherein the bulk data transfer engine communicates with one or more internal endpoints, wherein the bulk data transfer engine is configured to: receive first data from the source device; generate a first transmit header for the first data to form a first data packet, the first transmit header including a first forwarding header comprising a destination media access control (MAC) address; determine whether the destination MAC address included in the first forwarding header of the first data packet is within the range of forwarding addresses reserved for the memory switch; responsive to the destination MAC address being within the range of forwarding addresses, identify a target identifier based on the destination MAC address and the range of forwarding addresses; and transmit the first data packet to the destination device represented by the target identifier via peer-to-peer bulk data transfer (fabQ) communication; and responsive to the destination MAC address being outside the range of forwarding addresses, forward the first data packet to a network port for transmission.

Instant claim 2: The memory switch of claim 1, wherein to facilitate the data-exchange the bulk data transfer engine is further configured to use a comparator circuit based on a contiguous address space reserved for the memory switch.

’017 claim 1:
A multi-plane, multi-protocol memory switch comprising: a plurality of switch ports, the memory switch connectable to one or more root complex (RC) devices through one or more respective switch ports of the plurality of switch ports, and the memory switch connectable to a set of endpoints through a set of other switch ports of the plurality of switch ports, wherein the set includes zero or multiple endpoints; a cacheline exchange engine configured to provide a data-exchange path between two endpoints and to map an address space of one endpoint to an address space of another endpoint, wherein the cacheline exchange engine is configured to: receive a data payload from a source device; write the data payload in a first address space of the source device; map the first address space into a second address space of a destination device, wherein mapping is performed through internal endpoint memory space; and write the data payload in the second address space of the destination device; and a bulk data transfer engine configured to facilitate data-exchange between two endpoints as a source-destination data stream, one endpoint being designated a source address and another endpoint being designated a destination address, wherein facilitating the data-exchange includes bulk data transfer based on a comparison between a forwarding packet header generated based on the source and destination addresses for the data stream and a range of forwarding addresses reserved for the memory switch.

’017 claim 4: The memory switch of claim 2, wherein the bulk data transfer engine is further configured to: assign the range of forwarding addresses for a plurality of data flows in a contiguous address space reserved for the memory switch, wherein a base address of the contiguous address space is configurable, and wherein each data flow is associated with a specific source and destination pair.

Instant claim 3:
The memory switch of claim 2, wherein the bulk data transfer engine is further configured to: receive first data from a source device; create a first data packet from the first data by associating a first forwarding header with the first data packet, the first forwarding header comprising a destination media access control (MAC) address; transmit the first data packet to a destination device based on the first forwarding header via peer-to-peer bulk data transfer (fabQ) communication when the destination MAC address included in the first forwarding header is within a range of forwarding addresses; and forward the first data packet to a network port for transmission when the destination MAC address is outside the range of forwarding addresses.

’017 claim 2: The memory switch of claim 1, wherein the bulk data transfer engine communicates with one or more internal endpoints, wherein the bulk data transfer engine is configured to: receive first data from the source device; generate a first transmit header for the first data to form a first data packet, the first transmit header including a first forwarding header comprising a destination media access control (MAC) address; determine whether the destination MAC address included in the first forwarding header of the first data packet is within the range of forwarding addresses reserved for the memory switch; responsive to the destination MAC address being within the range of forwarding addresses, identify a target identifier based on the destination MAC address and the range of forwarding addresses; and transmit the first data packet to the destination device represented by the target identifier via peer-to-peer bulk data transfer (fabQ) communication; and responsive to the destination MAC address being outside the range of forwarding addresses, forward the first data packet to a network port for transmission.

Instant claim 4:
The memory switch of claim 3, wherein the bulk data transfer engine is further configured to assign the range of forwarding addresses to a plurality of data flows in the contiguous address space reserved for the memory switch, wherein a base address of the contiguous address space is configurable, and wherein each data flow is associated with a respective source-destination pair.

’017 claim 4: The memory switch of claim 2, wherein the bulk data transfer engine is further configured to: assign the range of forwarding addresses for a plurality of data flows in a contiguous address space reserved for the memory switch, wherein a base address of the contiguous address space is configurable, and wherein each data flow is associated with a specific source and destination pair.

Instant claim 5: The memory switch of claim 4, wherein, to transmit the first data packet to the destination device via peer-to-peer fabQ communication, the bulk data transfer engine is further configured to: identify a target identifier representing the destination device by subtracting the base address from the destination MAC address; and transmit the first data packet to the destination device using the target identifier.

’017 claim 5: The memory switch of claim 4, wherein, to identify the target identifier, the bulk data transfer engine is further configured to subtract the base address from the destination MAC address.

Instant claim 6: The memory switch of claim 3, wherein the bulk data transfer engine is further configured to: receive second data; generate a second data packet from the second data; associate the first forwarding header with the second data packet for transmitting the second data packet when the second data packet and the first data packet are from the same source device and have the same destination device.

’017 claim 6:
The memory switch of claim 2, wherein the bulk data transfer engine is further configured to: receive second data subsequent to receiving the first data; generate a second data packet to include the second data; determine whether to generate a new forwarding header for the second data packet; and responsive to the second data packet and the first data packet being from the same source device and to the same destination device, associate the first forwarding header with the second data packet without generating the new forwarding header.

Instant claim 7: The memory switch of claim 3, wherein, to transmit the first data packet to the destination device via peer-to-peer fabQ communication, the bulk data transfer engine is further configured to classify and filter the first data packet using ternary content addressable memory (TCAM) to prevent unauthorized access.

’017 claim 7: The memory switch of claim 2, wherein, responsive to transmitting the first data packet to the destination device via peer-to-peer fabQ communication, the bulk data transfer engine is further configured to: classify and filter the first data packet based on ternary content addressable memory (TCAM) to prevent unauthorized access.

Instant claim 8: The memory switch of claim 3, wherein, to transmit the first data packet to the destination device via peer-to-peer fabQ communication, the bulk data transfer engine is further configured to apply one or more network reliability attributes to the first data packet.

’017 claim 8: The memory switch of claim 2, wherein, responsive to transmitting the first data packet to the destination device via peer-to-peer fabQ communication, the bulk data transfer engine is further configured to: apply one or more network reliability attributes to the first data packet.

Instant claim 9: The memory switch of claim 3, wherein the peer-to-peer fabQ communication uses a peripheral component interconnect express/compute express link (PCIe/CXL) communication.

’017 claim 3:
The memory switch of claim 2, wherein the peer-to-peer fabQ communication uses a peripheral component interconnect express/compute express link (PCIe/CXL) communication.

Instant claim 10: The memory switch of claim 1, wherein the bulk data transfer engine is configured to communicate with one or more internal endpoints, wherein the one or more internal endpoints are configured as PCIe peers to one or more respective endpoints from the set of endpoints, and wherein the one or more internal endpoints are not electrically attached to one or more respective switch ports from the second plurality of switch ports.

’017 claim 9: The memory switch of claim 2, wherein the one or more internal endpoints are configured as PCIe peers to the one or more endpoints connected to the memory switch through the switch ports of the memory switch, while the one or more internal endpoints are not electrically attached to a switch port of the memory switch.

Instant claim 11: The memory switch of claim 3, wherein the peer-to-peer fabQ communication is based on one or more of a submission queue, a scatter-gather list (SGL), or a completion queue.

’017 claim 10: The memory switch of claim 2, wherein the peer-to-peer fabQ communication is based on a submission queue, a scatter-gather list (SGL) and/or a completion queue.

Instant claim 12: The memory switch of claim 3, wherein the network port used to forward the first data packet is an Ethernet network port when the destination MAC address is outside the range of forwarding addresses.

’017 claim 11: The memory switch of claim 2, wherein the network port used to forward the first data packet is an Ethernet network port when the destination MAC address is outside the range of forwarding addresses.

Instant claim 13: The memory switch of claim 1, further comprising a cacheline exchange engine configured to: provide a data-exchange path between a pair of endpoints in the set of endpoints; and map an address space of one endpoint to an address space of another endpoint.

’017 claim 1:
A multi-plane, multi-protocol memory switch comprising: a plurality of switch ports, the memory switch connectable to one or more root complex (RC) devices through one or more respective switch ports of the plurality of switch ports, and the memory switch connectable to a set of endpoints through a set of other switch ports of the plurality of switch ports, wherein the set includes zero or multiple endpoints; a cacheline exchange engine configured to provide a data-exchange path between two endpoints and to map an address space of one endpoint to an address space of another endpoint, wherein the cacheline exchange engine is configured to: receive a data payload from a source device; write the data payload in a first address space of the source device; map the first address space into a second address space of a destination device, wherein mapping is performed through internal endpoint memory space; and write the data payload in the second address space of the destination device; and a bulk data transfer engine configured to facilitate data-exchange between two endpoints as a source-destination data stream, one endpoint being designated a source address and another endpoint being designated a destination address, wherein facilitating the data-exchange includes bulk data transfer based on a comparison between a forwarding packet header generated based on the source and destination addresses for the data stream and a range of forwarding addresses reserved for the memory switch.

Instant claim 14: The memory switch of claim 13, wherein the cacheline exchange engine is further configured to: receive a data payload from a source device; write the data payload in a first address space of the source device; map the first address space into a second address space of a destination device, wherein mapping is performed through internal endpoint memory space; and write the data payload in the second address space of the destination device.

’017 claim 1:
A multi-plane, multi-protocol memory switch comprising: a plurality of switch ports, the memory switch connectable to one or more root complex (RC) devices through one or more respective switch ports of the plurality of switch ports, and the memory switch connectable to a set of endpoints through a set of other switch ports of the plurality of switch ports, wherein the set includes zero or multiple endpoints; a cacheline exchange engine configured to provide a data-exchange path between two endpoints and to map an address space of one endpoint to an address space of another endpoint, wherein the cacheline exchange engine is configured to: receive a data payload from a source device; write the data payload in a first address space of the source device; map the first address space into a second address space of a destination device, wherein mapping is performed through internal endpoint memory space; and write the data payload in the second address space of the destination device; and a bulk data transfer engine configured to facilitate data-exchange between two endpoints as a source-destination data stream, one endpoint being designated a source address and another endpoint being designated a destination address, wherein facilitating the data-exchange includes bulk data transfer based on a comparison between a forwarding packet header generated based on the source and destination addresses for the data stream and a range of forwarding addresses reserved for the memory switch.

Instant claim 15: The memory switch of claim 14, wherein the internal endpoint memory space includes at least one of a base address register (BAR) memory window of an internal endpoint in PCIe communication or a CXL memory window of an internal endpoint in CXL communication.

’017 claim 13:
The memory switch of claim 1, wherein the internal endpoint memory space includes at least one of a base address register (BAR) memory window of an internal endpoint in PCIe communication or a CXL memory window of an internal endpoint in CXL communication.

Instant claim 16: The memory switch of claim 14, wherein the cacheline exchange engine is further configured to pair the first address space of the source device to the second address space of the destination space via a BAR memory window of the internal endpoint.

’017 claim 14: The memory switch of claim 1, wherein the cacheline exchange engine is further configured to pair the first address space of the source device to the second address space of the destination space via a BAR memory window of the internal endpoint.

Instant claim 17: The memory switch of claim 14, wherein, to map the first address space into the second address space, the cacheline exchange engine is further configured to: perform a cacheline exchange (fabX) write operation to write the data payload in the first address space to a fabX BAR region; generate a data packet from the data payload; tunnel the data packet to a remote switch over a network connection; and transmit, by the remote switch, the data packet to the destination device.

’017 claim 15: The memory switch of claim 1, wherein, to map the first address space into the second address space, the cacheline exchange engine is further configured to: perform a cacheline exchange (fabX) write operation to write the data payload in the first address space to a fabX BAR region; generate a data packet from the data payload; tunnel the data packet to a remote switch over a network connection; and transmit, by the remote switch, the data packet to the destination device.

Instant claim 18:
A method for switching data packets comprising: providing a memory switch comprising: a first plurality of switch ports configured to be connected to one or more root complex (RC) devices; a second plurality of switch ports configured to be connected to a set of endpoints; and a bulk data transfer engine configured to facilitate data-exchange between a pair of endpoints in the set of endpoints; and facilitating, by the bulk data transfer engine, data-exchange between a pair of endpoints by: identifying a destination media access control (MAC) address for the pair of endpoints; in response to the destination MAC address being within a range of forwarding addresses, transmitting a plurality of data packets for the pair of endpoints to the destination MAC address via peer-to-peer bulk data transfer (fabQ) communication; and in response to the destination MAC address being outside the range of forwarding addresses, transmitting the plurality of data packets for the pair of endpoints to a network port.

’017 claim 17:
A method for switching data packets comprising: providing, by a cacheline exchange engine, a data-exchange path between two endpoints and to map an address space of one endpoint to an address space of another endpoint, wherein one or more root complex (RC) devices connect to a memory switch through one or more respective switch ports of a plurality of switch ports, and a set of endpoints connect to the memory switch through a set of other switch ports of the plurality of switch ports, wherein the set includes zero or multiple endpoints, wherein providing the data-exchange path comprises: receiving a data payload from a source device; writing the data payload in a first address space of the source device; mapping the first address space into a second address space of a destination device, wherein mapping is performed through internal endpoint memory space; and writing the data payload in the second address space of the destination device; and facilitating, by a bulk data transfer engine, data-exchange between two endpoints as a source-destination data stream, one endpoint being designated a source address and another endpoint being designated a destination address, wherein facilitating the data-exchange includes bulk data transfer based on a comparison between a forwarding packet header generated based on the source and destination addresses for the data stream and a range of forwarding addresses reserved for the memory switch.

’017 claim 18:
The method of claim 17, wherein, to facilitating, by the bulk data transfer engine, data-exchange, the method comprises: receiving first data from the source device; generating a first transmit header for the first data to form a first data packet, the first transmit header including a first forwarding header comprising a destination media access control (MAC) address; determining whether the destination MAC address included in the first forwarding header of the first data packet is within the range of forwarding addresses reserved for a memory switch; responsive to the destination MAC address being within the range of forwarding addresses, identifying a target identifier based on the destination MAC address and the range of forwarding addresses; and transmitting the first data packet to the destination device represented by the target identifier via peer-to-peer bulk data transfer (fabQ) communication; and responsive to the destination MAC address being outside the range of forwarding addresses, forwarding the first data packet to a network port for transmission.

Instant claim 19: The method of claim 18, wherein facilitating the data-exchange comprises: receiving first data from a source device; creating a first data packet from the first data by associating a first forwarding header with the first data packet, the first forwarding header comprising a destination media access control (MAC) address; transmitting the first data packet to a destination device based on the first forwarding header via peer-to-peer bulk data transfer (fabQ) communication when the destination MAC address included in the first forwarding header is within a range of forwarding addresses; and forwarding the first data packet to a network port for transmission when the destination MAC address is outside the range of forwarding addresses.

’017 claim 18:
The method of claim 17, wherein, to facilitating, by the bulk data transfer engine, data-exchange, the method comprises: receiving first data from the source device; generating a first transmit header for the first data to form a first data packet, the first transmit header including a first forwarding header comprising a destination media access control (MAC) address; determining whether the destination MAC address included in the first forwarding header of the first data packet is within the range of forwarding addresses reserved for a memory switch; responsive to the destination MAC address being within the range of forwarding addresses, identifying a target identifier based on the destination MAC address and the range of forwarding addresses; and transmitting the first data packet to the destination device represented by the target identifier via peer-to-peer bulk data transfer (fabQ) communication; and responsive to the destination MAC address being outside the range of forwarding addresses, forwarding the first data packet to a network port for transmission.

Instant claim 20: The method of claim 19, further comprising: receiving second data; generating a second data packet from the second data; and associating the first forwarding header with the second data packet for transmitting the second data packet when the second data packet and the first data packet are from the same source device and have the same destination device.

’017 claim 6: The memory switch of claim 2, wherein the bulk data transfer engine is further configured to: receive second data subsequent to receiving the first data; generate a second data packet to include the second data; determine whether to generate a new forwarding header for the second data packet; and responsive to the second data packet and the first data packet being from the same source device and to the same destination device, associate the first forwarding header with the second data packet without generating the new forwarding header.
Allowable Subject Matter

Claims 1-20 would be allowable if the Double Patenting rejection discussed above was overcome. The following is a statement of reasons for the indication of allowable subject matter: Applicant’s arguments, see pages 8-9, filed 1/28/26, with respect to Claims 1 and 18 have been fully considered and are persuasive. The rejection of 10/28/25 has been withdrawn. All claims that are not specifically addressed are allowable due to a dependency.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure because each reference discloses methods for transmitting data between endpoints.

Response to Arguments

Applicant’s arguments, see pages 8-9, filed 1/28/26, with respect to Claims 1 and 18 have been fully considered and are persuasive. The rejection of 10/28/25 has been withdrawn.

With regards to the Double Patenting rejection, Applicant states “Applicant respectfully submits a terminal disclaimer herewith in compliance with 37 C.F.R. § 1.321(c).” Response, page 8. However, no Terminal Disclaimer can be found in the instant application file. Accordingly, the Double Patenting rejection remains.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FAISAL M ZAMAN whose telephone number is (571)272-6495. The examiner can normally be reached Monday - Friday, 8 am - 5 pm, alternate Fridays.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew J. Jung, can be reached at 571-270-3779. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FAISAL M ZAMAN/
Primary Examiner, Art Unit 2175
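The bulk-data-transfer routing recited in the instant claims (claims 1, 5, and 6 quoted in the office action above) amounts to an address-range check with base-address subtraction and forwarding-header reuse. The sketch below is a hypothetical illustration of that claimed logic, not Enfabrica's actual implementation: the constants `BASE` and `RANGE_SIZE`, the function names, and the header representation are all assumptions made for clarity.

```python
# Hypothetical model of the claimed fabQ forwarding decision. BASE and
# RANGE_SIZE are illustrative; the claims only require a configurable base
# address and a contiguous reserved range of forwarding addresses.
BASE = 0x020000000000   # configurable base of the reserved forwarding range
RANGE_SIZE = 4096       # illustrative size of the reserved range

def route(dest_mac):
    """Decide peer-to-peer (fabQ) vs. network-port forwarding.

    If the destination MAC falls inside the reserved range, derive the
    target identifier by subtracting the base address (per claim 5) and
    send via fabQ; otherwise hand the packet to a network port.
    """
    if BASE <= dest_mac < BASE + RANGE_SIZE:
        return ("fabQ", dest_mac - BASE)
    return ("network_port", None)

def forwarding_header(src, dst, cache):
    """Header reuse per claim 6: a second packet from the same source to
    the same destination reuses the first packet's forwarding header
    instead of generating a new one."""
    key = (src, dst)
    if key not in cache:
        cache[key] = {"src": src, "dst": dst}  # new pair: build a header
    return cache[key]
```

For example, a destination MAC of `BASE + 7` would yield target identifier 7 over fabQ, while any address outside the reserved range would be forwarded to a network port.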

Prosecution Timeline

Apr 18, 2024 — Application Filed
Oct 24, 2025 — Non-Final Rejection (§DP)
Jan 28, 2026 — Response Filed
Feb 13, 2026 — Final Rejection (§DP) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578780: CIRCUIT SLEEP METHOD AND SLEEP CIRCUIT (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572490: LINKS FOR PLANARIZED DEVICES (granted Mar 10, 2026; 2y 5m to grant)
Patent 12560993: POWER MANAGEMENT OF DEVICES WITH DIFFERENTIATED POWER SCALING BASED ON RELATIVE POWER BENEFIT ESTIMATION (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561267: MULTIPLE INDEPENDENT ON-CHIP INTERCONNECT (granted Feb 24, 2026; 2y 5m to grant)
Patent 12562599: CONTACTLESS POWER FEEDER (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 81% (+14.3%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 917 resolved cases by this examiner. Grant probability derived from career allow rate.
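The headline percentages can be reproduced directly from the examiner's career counts shown above. This is a back-of-envelope check, assuming (as the figures suggest) that the dashboard adds the interview lift as percentage points on top of the career allow rate:

```python
# Reproduce the dashboard's headline figures from the raw career counts.
granted, resolved = 614, 917           # career grants / resolved cases
allow_rate = 100 * granted / resolved  # career allow rate, in percent
interview_lift = 14.3                  # percentage points, per the dashboard

print(round(allow_rate))                   # 67 -> "Grant Probability: 67%"
print(round(allow_rate + interview_lift))  # 81 -> "With Interview: 81%"
```

Note that additive lift in percentage points is an assumption about the dashboard's arithmetic; it happens to match the displayed 67% and 81% exactly.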
