Prosecution Insights
Last updated: April 20, 2026
Application No. 17/852,093

Techniques For Sharing Memory Interface Circuits Between Integrated Circuit Dies

Final Rejection §103
Filed: Jun 28, 2022
Examiner: LEWIS-TAYLOR, DAYTON A.
Art Unit: 2181
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 2 (Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 81% — above average (568 granted / 701 resolved; +26.0% vs TC avg)
Interview Lift: +3.4% — minimal lift, across resolved cases with interview
Avg Prosecution: 2y 7m typical timeline; 24 currently pending
Total Applications: 725 across all art units (career history)
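The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of how they appear to be derived (the function name is illustrative, not from any real analytics API, and the additive reading of the "+26.0% vs TC avg" delta is an assumption):

```python
# Hypothetical helper mirroring the examiner-intelligence card above.
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

rate = allow_rate(568, 701)
print(round(rate))  # 81, matching the displayed 81% career allow rate

# If the "+26.0% vs TC avg" delta is additive percentage points, the
# implied Tech Center average is about 55%.
tc_avg = round(rate, 1) - 26.0
print(tc_avg)  # 55.0
```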

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 22.1% (-17.9% vs TC avg)
§112: 13.7% (-26.3% vs TC avg)
Deltas are relative to an estimated Tech Center average • Based on career data from 701 resolved cases
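The per-statute deltas above are internally consistent: assuming they are additive percentage points, every row implies the same Tech Center average estimate. A quick check:

```python
# Per-statute allowance rates and deltas vs the Tech Center average,
# as listed above (additive percentage-point deltas assumed).
stats = {
    "§101": (5.7, -34.3),
    "§103": (50.3, +10.3),
    "§102": (22.1, -17.9),
    "§112": (13.7, -26.3),
}

for statute, (rate, delta) in stats.items():
    implied = round(rate - delta, 1)
    print(f"{statute}: implied TC avg = {implied}%")
# Every statute implies the same 40.0% Tech Center average estimate.
```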

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. Claims 1-20 are pending.

3. This office action is in response to the Applicant’s communication filed 02/11/2026 in response to the PTO Office Action mailed 12/31/2025. The Applicant’s remarks and amendments to the claims and/or the specification were considered, with the results that follow.

Response to Arguments

4. Applicant’s arguments with respect to the amended independent claims have been fully considered but are not persuasive. Applicant’s arguments are summarized as:

1) “Claim 1 of the present application recites in part "a second integrated circuit die comprising a second die-to-die interface circuit and a compute circuit that performs computations for the processing integrated circuit die, wherein the first and the second die-to-die interface circuits are coupled together, and wherein the compute circuit is coupled to exchange information with the first memory interface circuit through the first and the second die-to-die interface circuits." At least these features of claim 1 of the present application are not disclosed in or rendered obvious by the cited references, either taken alone or in combination.”

2) “Independent claim 16 of the present application recites in part "wherein the accelerator integrated circuit die comprises a second die-to-die interface circuit that is coupled to the first die-to-die interface circuit, and wherein the accelerator integrated circuit die is coupled to exchange information with the first and the second memory interface circuits through the first and the second die-to-die interface circuits." At least these features of claim 16 of the present application are not disclosed in or rendered obvious by the cited references, either taken alone or in combination.”

As per argument 1, in response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, Leong’s IC die 203′ is a mere duplication of parts, in this case IC die 203, that with the combination of Gutala’s teachings of a coprocessor IC die that performs computations (Gutala – par. [0074]) would result in the ability to implement the claimed limitations.

As per argument 2, in response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007).
In this case, Leong’s IC die 203′ is a mere duplication of parts, in this case IC die 203, that with the combination of Gutala’s teachings of a coprocessor IC die that performs computations as part of a function being accelerated (Gutala – par. [0074]) would result in the ability to implement the claimed limitations.

Claim Rejections - 35 USC § 103

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

6. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

7. Claims 1-3, 5-7, 9-13, 15-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Leong (US Pub. No. 2019/0181865 A1, hereinafter “Leong”) in view of Gutala et al. (US Pub. No. 2019/0012116 A1, hereinafter “Gutala”).

Referring to claim 1, Leong discloses a circuit system (Leong – Fig. 2 & par. [0024] disclose a multichip package 200.) comprising: a processing integrated circuit die comprising a first die-to-die interface circuit and a first memory interface circuit (Leong – Fig. 2 & par. [0024-0026] disclose an IC die 203, which could be a CPU. Par. [0030] discloses that external input-output (IO) blocks may include an HSI block 312 that supports communications with other dies within package 200 (e.g., IC die 203′). As shown in FIG. 2, a communications bus such as communications bus 313 may couple HSI block 312 in IC die 203 to IC die 203′. Par. [0027] discloses that IC die 203 may also include on-package interconnect circuitry such as universal interface block (UIB) 204 for communicating with on-package components such as memory die 206 via bus 205.); and a second integrated circuit die comprising a second die-to-die interface circuit, wherein the first and the second die-to-die interface circuits are coupled together (Leong – Fig. 2 & par. [0024-0026, 0030] disclose IC die 203′. Par. [0030] discloses that external input-output (IO) blocks may include an HSI block 312 that supports communications with other dies within package 200 (e.g., IC die 203′). As shown in FIG. 2, a communications bus such as communications bus 313 may couple HSI block 312 in IC die 203 to IC die 203′. In this illustrative example, external IO block 312 may support GPIO, LVDS, or other suitable interfaces to communicate with IC die 203′.), and wherein the compute circuit is coupled to exchange information with the first memory interface circuit through the first and the second die-to-die interface circuits (Leong – Fig. 2 & par. [0024-0026, 0030] disclose IC die 203′, which could be a GPU, coupled to exchange information with the UIB 204 through HSI 312 and communication bus 313.).

Leong fails to explicitly disclose a second integrated circuit die comprising a compute circuit that performs computations for the processing integrated circuit die.

Gutala discloses an integrated circuit die comprising a compute circuit that performs computations for the processing integrated circuit die (Gutala – Par. [0074] discloses that in operation 1101, logic circuitry in one or more logic sectors in a coprocessor IC die (e.g., one or more of logic sectors 410) generates an intermediate result of a multi-part computation performed as part of a function being accelerated for a host processor (e.g., host processor 302).).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include Gutala’s teachings with Leong’s techniques for the benefit of configuring or reconfiguring the programmable integrated circuit as an accelerator circuit to efficiently perform parallel processing tasks (Gutala – Par. [0025]).

Referring to claim 2, Leong and Gutala disclose the circuit system of claim 1, wherein the second integrated circuit die further comprises an accelerator circuit that accelerates functions for the processing integrated circuit die, and wherein the accelerator circuit (Gutala – Par. [0074] discloses that in operation 1101, logic circuitry in one or more logic sectors in a coprocessor IC die (e.g., one or more of logic sectors 410) generates an intermediate result of a multi-part computation performed as part of a function being accelerated for a host processor (e.g., host processor 302). Par. [0037] discloses that, as shown in FIG. 3, host processor 302 may be coupled to a coprocessor circuit 310 via path 312. Coprocessor circuit 310 is in an integrated circuit (IC) die and is also referred to herein as coprocessor IC die 310. Coprocessor circuit 310 may be, for example, a programmable integrated circuit such as IC 10 of FIG. 1. Alternatively, multiple coprocessor or accelerator circuits may be in a programmable integrated circuit. Host processor 302 is in an integrated circuit die that is separate from the coprocessor integrated circuit die 310. Coprocessor circuit 310 functions as an accelerator circuit for host processor 302.
As an accelerator circuit, coprocessor circuit 310 may include various processing nodes (e.g., processing cores, processor cores) such as cores P1-P4 to help accelerate the performance of host processor 302.) is coupled to exchange information with the first memory interface circuit through the first and the second die-to-die interface circuits (Leong – Fig. 2 & par. [0024-0026, 0030] disclose IC die 203′, which could be a GPU, coupled to exchange information with the UIB 204 through HSI 312 and communication bus 313.).

Referring to claim 3, Leong and Gutala disclose the circuit system of claim 1 further comprising: an accelerator integrated circuit die comprising a third die-to-die interface circuit, wherein the processing integrated circuit die further comprises a fourth die-to-die interface circuit, wherein the third and the fourth die-to-die interface circuits are coupled together, and wherein the accelerator integrated circuit die (Gutala – Par. [0074] discloses that in operation 1101, logic circuitry in one or more logic sectors in a coprocessor IC die (e.g., one or more of logic sectors 410) generates an intermediate result of a multi-part computation performed as part of a function being accelerated for a host processor (e.g., host processor 302). Par. [0037] discloses that, as shown in FIG. 3, host processor 302 may be coupled to a coprocessor circuit 310 via path 312. Coprocessor circuit 310 is in an integrated circuit (IC) die and is also referred to herein as coprocessor IC die 310. Coprocessor circuit 310 may be, for example, a programmable integrated circuit such as IC 10 of FIG. 1. Alternatively, multiple coprocessor or accelerator circuits may be in a programmable integrated circuit. Host processor 302 is in an integrated circuit die that is separate from the coprocessor integrated circuit die 310. Coprocessor circuit 310 functions as an accelerator circuit for host processor 302. As an accelerator circuit, coprocessor circuit 310 may include various processing nodes (e.g., processing cores, processor cores) such as cores P1-P4 to help accelerate the performance of host processor 302.) is coupled to exchange information with the first memory interface circuit through the third and the fourth die-to-die interface circuits (Leong – Fig. 2 & par. [0024-0026, 0030] disclose IC die 203′, which could be a GPU, coupled to exchange information with the UIB 204 through HSI 312 and communication bus 313.).

Referring to claim 5, Leong and Gutala disclose the circuit system of claim 1, wherein the processing integrated circuit die further comprises a second memory interface circuit, and wherein the compute circuit is coupled to exchange information with the second memory interface circuit through the first and the second die-to-die interface circuits (Leong – Fig. 2 & par. [0024-0026, 0030] disclose IC die 203′, which could be a GPU, coupled to exchange information with the UIB 204 through HSI 312 and communication bus 313. Par. [0037] discloses one or more of blocks 204, 208, 312 in IC die 203 in FIG. 2.).

Referring to claim 6, Leong and Gutala disclose the circuit system of claim 1, wherein the first memory interface circuit is configured to exchange signals with a memory device external to the processing integrated circuit die (Leong – Fig. 2 shows a UIB 204 to exchange signals with a high bandwidth memory 206 external to the IC die 203.).

Referring to claim 7, Leong and Gutala disclose the circuit system of claim 1, wherein the processing integrated circuit die is a programmable logic integrated circuit (Leong – Fig. 2 & par. [0024-0026] disclose an IC die 203, which could be an FPGA.).
Referring to claim 9, Leong and Gutala disclose the circuit system of claim 1, wherein the first and the second die-to-die interface circuits are coupled together through interconnects in one of an interposer, a package substrate, or an interconnection bridge in the circuit system (Gutala – Par. [0056] discloses that IC die 522 is coupled to coprocessor IC die 310 through conductors in interconnection bridge 508.).

Referring to claim 10, note the rejection of claim 1 above. The instant claim recites substantially the same limitations as the above-rejected claim and is therefore rejected under the same prior-art teachings.

Referring to claim 11, note the rejection of claim 2 above. The instant claim recites substantially the same limitations as the above-rejected claim and is therefore rejected under the same prior-art teachings.

Referring to claim 12, note the rejection of claim 3 above. The instant claim recites substantially the same limitations as the above-rejected claim and is therefore rejected under the same prior-art teachings.

Referring to claim 13, note the rejections of claims 5 and 6 above. The instant claim recites substantially the same limitations as the above-rejected claims and is therefore rejected under the same prior-art teachings.

Referring to claim 15, note the rejections of claims 3 and 6 above. The instant claim recites substantially the same limitations as the above-rejected claims and is therefore rejected under the same prior-art teachings.

Referring to claim 16, Leong discloses a circuit system (Leong – Fig. 2 & par. [0024] disclose a multichip package 200.) comprising: a processing integrated circuit die comprising a first die-to-die interface circuit, a first memory interface circuit, and a second memory interface circuit, wherein the first and the second memory interface circuits are adjacent to opposite sides of the processing integrated circuit die (Leong – Fig. 2 & par. [0024-0026] disclose an IC die 203, which could be a CPU. Par. [0030] discloses that external input-output (IO) blocks may include an HSI block 312 that supports communications with other dies within package 200 (e.g., IC die 203′). As shown in FIG. 2, a communications bus such as communications bus 313 may couple HSI block 312 in IC die 203 to IC die 203′. Par. [0027] discloses that IC die 203 may also include on-package interconnect circuitry such as universal interface block (UIB) 204 for communicating with on-package components such as memory die 206 via bus 205. Par. [0037] discloses one or more of blocks 204 (universal interface block (UIB)), 208, 312 in IC die 203 in FIG. 2.); and an accelerator integrated circuit die (Leong – Fig. 2 & par. [0024-0026] disclose an IC die 203′, which could be a GPU.), wherein the accelerator integrated circuit die comprises a second die-to-die interface circuit that is coupled to the first die-to-die interface circuit, and wherein the accelerator integrated circuit die is coupled to exchange information with the first and the second memory interface circuits through the first and the second die-to-die interface circuits (Leong – Fig. 2 & par. [0024-0026, 0030] disclose that external input-output (IO) blocks may include an HSI block 312 that supports communications with other dies within package 200 (e.g., IC die 203′). As shown in FIG. 2, a communications bus such as communications bus 313 may couple HSI block 312 in IC die 203 to IC die 203′. IC die 203′, which could be a GPU, is coupled to exchange information with the UIB 204 through HSI 312 and communication bus 313.).

Leong fails to explicitly disclose an accelerator integrated circuit die that accelerates functions for the processing integrated circuit die.

Gutala discloses an accelerator integrated circuit die that accelerates functions for the processing integrated circuit die (Gutala – Par. [0074] discloses that in operation 1101, logic circuitry in one or more logic sectors in a coprocessor IC die (e.g., one or more of logic sectors 410) generates an intermediate result of a multi-part computation performed as part of a function being accelerated for a host processor (e.g., host processor 302).).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include Gutala’s teachings with Leong’s techniques for the benefit of configuring or reconfiguring the programmable integrated circuit as an accelerator circuit to efficiently perform parallel processing tasks (Gutala – Par. [0025]).

Referring to claim 17, Leong and Gutala disclose the circuit system of claim 16, wherein the accelerator integrated circuit die and the processing integrated circuit die are mounted side-by-side in the circuit system (Leong – Fig. 2 discloses that IC die 203′ and IC die 203 are mounted side-by-side.).

Referring to claim 20, Leong and Gutala disclose the circuit system of claim 16, wherein each of the first and the second die-to-die interface circuits comprises input driver circuits and output driver circuits configured to transmit signals according to an interconnect protocol (Leong – Fig. 2 & par. [0030] disclose that external input-output (IO) blocks may include an HSI block 312 that supports communications with other dies within package 200 (e.g., IC die 203′). As shown in FIG. 2, a communications bus such as communications bus 313 may couple HSI block 312 in IC die 203 to IC die 203′. In this illustrative example, external IO block 312 may support GPIO, LVDS, or other suitable interfaces to communicate with IC die 203′.).

8. Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Leong in view of Gutala, and further in view of Sano (US Pub. No. 2003/0097467 A1, hereinafter “Sano”).
Referring to claim 4, Leong and Gutala disclose the circuit system of claim 1; however, they fail to explicitly disclose wherein the compute circuit is configured to set up routing tables for connections for routing packets of data within the circuit system.

Sano discloses a compute circuit that is configured to set up routing tables for connections for routing packets of data within the circuit system (Sano – Par. [0079] discloses that one or more coprocessors 312A-312B may be configured to perform the lookup packet processing function (in which the packet is looked up in various routing tables that may be programmed into the line card 302A).).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include Sano’s teachings with Leong and Gutala’s techniques for the benefit of a coprocessor having circuitry designed to perform a specified function on an input to produce an output (Sano – Par. [0079]).

Referring to claim 14, note the rejection of claim 4 above. The instant claim recites substantially the same limitations as the above-rejected claim and is therefore rejected under the same prior-art teachings.

9. Claims 8 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Leong in view of Gutala, and further in view of Huang et al. (US Pub. No. 2019/0129874 A1, hereinafter “Huang”).

Referring to claim 8, Leong and Gutala disclose the circuit system of claim 1, wherein the processing integrated circuit die is configured (Leong – Fig. 2 & par. [0024-0026] disclose an IC die 203, which could be a CPU.); however, they fail to explicitly disclose that the processing integrated circuit die is configured to accelerate storage virtualization functions and network virtualization functions.

Huang discloses accelerating storage virtualization functions and network virtualization functions (Huang – Par. [0095] discloses that the NFVI 130 includes computing hardware 112, storage hardware 114, network hardware 116, acceleration hardware 115, a virtualization layer, virtual computing 110, virtual storage 118, a virtual network 120, and virtual acceleration 123. The NFV management and orchestration system 101 is configured to perform monitoring and management on the virtualized network function 108 and the NFV infrastructure layer 130.).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include Huang’s teachings with Leong and Gutala’s techniques for the benefit of the acceleration resource being selected according to the service acceleration resource scheduling policy, so that a specific requirement such as latency sensitivity of a service can be met, thereby reducing service latency and improving service performance (Huang – Abstract).

Referring to claim 19, note the rejection of claim 8 above. The instant claim recites substantially the same limitations as the above-rejected claim and is therefore rejected under the same prior-art teachings.

10. Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Leong in view of Gutala, and further in view of Pappu et al. (US Pub. No. 2019/0033368 A1, hereinafter “Pappu”).

Referring to claim 18, Leong and Gutala disclose the circuit system of claim 16; however, they fail to explicitly disclose wherein the accelerator integrated circuit die and the processing integrated circuit die are vertically stacked in the circuit system.

Pappu discloses that the accelerator integrated circuit die and the processing integrated circuit die are vertically stacked in the circuit system (Pappu – Fig. 1 & par. [0023] disclose that computing die 110 couples to graphics accelerator die 160 via an interconnect 155. In an embodiment, interconnect 155 may be implemented as an in-package interconnect, such as an embedded interconnect bridge, a hyper-chip technology to couple stacked die.).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include Pappu’s teachings with Leong and Gutala’s techniques for the benefit of enabling functional and debug testing of integrated circuits (ICs) to readily identify specific locations/components that suffer failures, errors and so forth (Pappu – Par. [0010]).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAYTON LEWIS-TAYLOR, whose telephone number is (571) 270-7754. The examiner can normally be reached Monday through Thursday, 8 AM to 4 PM, Eastern Time. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Idriss Alrobaye, can be reached at 571-270-1023. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR.
Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAYTON LEWIS-TAYLOR/
Examiner, Art Unit 2181

/Farley Abad/
Primary Examiner, Art Unit 2181

Prosecution Timeline

Jun 28, 2022 — Application Filed
Aug 15, 2022 — Response after Non-Final Action
Dec 26, 2025 — Non-Final Rejection — §103
Feb 11, 2026 — Response Filed
Mar 07, 2026 — Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585491 — PROCESSING OF INTERRUPTS
Granted Mar 24, 2026 · 2y 5m to grant
Patent 12585610 — COMPUTING SYSTEM, PCI DEVICE MANAGER AND INITIALIZATION METHOD THEREOF
Granted Mar 24, 2026 · 2y 5m to grant
Patent 12578901 — CLOCK DOMAIN CROSSING
Granted Mar 17, 2026 · 2y 5m to grant
Patent 12572496 — HOST FABRIC ADAPTER WITH FABRIC SWITCH
Granted Mar 10, 2026 · 2y 5m to grant
Patent 12572497 — DETECTION OF A STUCK DATA LINE OF A SERIAL DATA BUS
Granted Mar 10, 2026 · 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 81%
With Interview: 84% (+3.4%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate
Based on 701 resolved cases by this examiner. Grant probability derived from career allow rate.
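The interview-adjusted figure is consistent with a simple additive lift. A sketch of that arithmetic (the additive model is an assumption; the page does not state how the lift is applied):

```python
def with_interview(base_pct: float, lift_pts: float) -> int:
    """Interview-adjusted grant probability, assuming the lift is
    additive in percentage points (an assumption, not stated on the page)."""
    return round(base_pct + lift_pts)

print(with_interview(81.0, 3.4))  # 84, matching the displayed 84%
```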
