Prosecution Insights
Last updated: April 19, 2026
Application No. 18/829,179

DEVICES USING CHIPLET BASED STORAGE ARCHITECTURES

Non-Final OA §103
Filed: Sep 09, 2024
Examiner: LEE, CHUN KUAN
Art Unit: 2181
Tech Center: 2100 (Computer Architecture & Software)
Assignee: SK Hynix Inc.
OA Round: 1 (Non-Final)

Grant Probability: 68% (Favorable)
OA Rounds: 1-2
To Grant: 3y 4m
With Interview: 71%

Examiner Intelligence

Grants 68% of cases, above average for the Tech Center.

Career Allow Rate: 68% (455 granted / 669 resolved; +13.0% vs TC avg)
Interview Lift: +3.1% (minimal, roughly +3%), measured on resolved cases with an interview
Typical Timeline: 3y 4m average prosecution; 32 applications currently pending
Career History: 701 total applications across all art units
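
A quick way to sanity-check the headline figures in this panel is to recompute them from the raw counts shown above. The sketch below is illustrative only: the Tech Center baseline is back-calculated from the reported "+13.0% vs TC avg" delta, and the with-interview rate is simply the base rate plus the reported lift, so neither is an independent data point.

```python
# Minimal sketch: reproduce the Examiner Intelligence headline figures from the counts above.
granted, resolved = 455, 669

career_allow_rate = granted / resolved                      # ~0.680 -> the 68% career allow rate
tc_average = career_allow_rate - 0.130                      # ~0.550, implied by "+13.0% vs TC avg" (assumption)

interview_lift = 0.031                                      # reported lift for resolved cases with an interview
rate_with_interview = career_allow_rate + interview_lift    # ~0.711 -> the 71% "With Interview" figure

print(f"allow rate {career_allow_rate:.1%}, "
      f"vs TC avg {career_allow_rate - tc_average:+.1%}, "
      f"with interview {rate_with_interview:.1%}")
```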

Statute-Specific Performance

§101: 1.7% (-38.3% vs TC avg)
§103: 79.4% (+39.4% vs TC avg)
§102: 3.3% (-36.7% vs TC avg)
§112: 3.5% (-36.5% vs TC avg)
Comparisons are against a Tech Center average estimate. Based on career data from 669 resolved cases.
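The "vs TC avg" deltas are plain differences against the Tech Center baseline, and as reported they are all consistent with a single baseline of about 40%. A minimal sketch of that arithmetic follows; the 40% baseline is inferred from the reported deltas, and since the underlying metric (share of rejections by statute versus an outcome rate) is not specified here, the numbers should be read as illustrative only.

```python
# Per-statute figures from the panel above; baseline inferred from the reported deltas (assumption).
examiner_rate = {"§101": 0.017, "§103": 0.794, "§102": 0.033, "§112": 0.035}
tc_baseline = 0.40  # every reported delta equals (rate - 0.40)

for statute, rate in examiner_rate.items():
    delta = rate - tc_baseline
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```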

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

I. REJECTIONS BASED ON PRIOR ART

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1-12 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chiang et al. (US Pub.: 2013/0191576) in view of Burger et al. (US Pub.: 2017/0147624) and Bhagavat et al. (US Pub.: 2020/0185367).

As per claim 1, Chiang teaches/suggests a storage architecture comprising: a plurality of memory devices (e.g. associated with Fig. 1, ref. 204; Fig. 4, ref. 204); a front-end (e.g. associated with Fig. 1, ref. 104A) configured to perform communication with a host device (e.g. associated with Fig. 1, ref. 10A, 10B); and a plurality of back-ends (e.g. associated with Fig. 1, ref. 200A and Fig. 4, ref. 200A) configured to perform communication with the front-end and control at least a part of the plurality of memory devices, the plurality of back-ends coupled to each other in series based on a daisy chain scheme (e.g. associated with the daisy chain architecture of Fig. 4, ref. 200A) (Fig. 1; Fig. 4; [0028]-[0044]; and [0055]-[0058]).

Chiang does not teach the storage architecture comprising: an accelerator module being on a package substrate; a chip located on the package substrate; and chips configured to perform communication with the chip on the package substrate and operate with the accelerator module, the chips coupled accordingly.

Burger teaches/suggests an architecture comprising: an accelerator module (e.g. a module associated with a hardware accelerator such as an FPGA: [0020]; [0128]); and chips configured to perform communication with a chip and operate with the accelerator module (e.g. by combining communication between a front-end processor ASIC and a back-end processor ASIC with Chiang's front end and plurality of back ends, the resulting combination of the references would further teach/suggest the above claimed features: [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]) ([0020]-[0022]; [0035]-[0038]; [0045]-[0058]; and [0128]).

Bhagavat teaches/suggests an architecture comprising: being on a package substrate (e.g. associated with Fig. 2, ref. 140); a chip (e.g. associated with Fig. 2, ref. 114, 116) located on the package substrate (e.g. associated with Fig. 2, ref. 140); and operating on the package substrate, with the chips coupled accordingly (Fig. 1-2; and [0019]-[0024]).
It would have been obvious to one of ordinary skill in this art, before the effective filing date of the claimed invention, to include Burger's chip interconnecting architecture and Bhagavat's packaging architecture in Chiang's storage architecture for the benefit of implementing a robust architecture that reduces compression time while maintaining compression quality (Burger, [0019]) and reduces warpage of an encapsulated integrated circuit module (Bhagavat, [0029]), to obtain the invention as specified in claim 1.

As per claim 2, Chiang, Burger, and Bhagavat teach/suggest all the claimed features of claim 1 above, where Chiang, Burger, and Bhagavat teach/suggest the storage architecture comprising: wherein each of the plurality of back-end chips comprises: a back-end link configured to communicate with the front-end chip; and at least one sub back-end link configured to communicate with at least one other back-end chip among the plurality of back-end chips (Chiang, Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0058]; Burger, [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]; and Bhagavat, Fig. 1-2; [0019]-[0024]).

As per claim 3, Chiang, Burger, and Bhagavat teach/suggest all the claimed features of claim 2 above, where Chiang, Burger, and Bhagavat teach/suggest the storage architecture comprising: wherein the back-end link included in one of the plurality of back-end chips communicates with the front-end chip, and back-end links included in the rest of the plurality of back-end chips are not connected to the front-end chip (Chiang, Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0058]; Burger, [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]; and Bhagavat, Fig. 1-2; [0019]-[0024]).

As per claim 4, Chiang, Burger, and Bhagavat teach/suggest all the claimed features of claim 1 above, where Chiang, Burger, and Bhagavat teach/suggest the storage architecture comprising: wherein each of the plurality of back-end chips comprises: an operating buffer memory circuit configured to store data associated with data arithmetic by the plurality of accelerator memory devices (Chiang, Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0058]; Burger, [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]; and Bhagavat, Fig. 1-2; [0019]-[0024]).

As per claim 5, Chiang, Burger, and Bhagavat teach/suggest all the claimed features of claim 1 above, where Chiang, Burger, and Bhagavat teach/suggest the storage architecture comprising: a host interface configured to communicate with the host device; and a plurality of front-end links configured to communicate with at least a part of the plurality of back-end chips, and wherein at least one of the plurality of front-end links is in a disabled state (Chiang, Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0060]; Burger, [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]; and Bhagavat, Fig. 1-2; [0019]-[0024]).

As per claim 6, Chiang, Burger, and Bhagavat teach/suggest all the claimed features of claim 1 above, where Chiang, Burger, and Bhagavat teach/suggest the storage architecture comprising: wherein the plurality of back-end chips is disposed to surround the front-end chip, and located between the front-end chip and the plurality of accelerator memory devices (Chiang, Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0060]; Burger, [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]; and Bhagavat, Fig. 1-2; [0019]-[0024]), wherein it would have been an obvious design choice to one of ordinary skill in the art to further implement the above architecture.
As per claim 7, Chiang, Burger, and Bhagavat teach/suggest all the claimed features of claim 1 above, where Chiang, Burger, and Bhagavat teach/suggest the storage architecture comprising: wherein the plurality of back-end chips is disposed to surround the front-end chip, and the plurality of accelerator memory devices is disposed to surround the plurality of back-end chips (Chiang, Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0060]; Burger, [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]; and Bhagavat, Fig. 1-2; [0019]-[0024]), wherein it would have been an obvious design choice to one of ordinary skill in the art to further implement the above architecture.

As per claim 8, Chiang, Burger, and Bhagavat teach/suggest all the claimed features of claim 1 above, where Chiang, Burger, and Bhagavat teach/suggest the storage architecture further comprising: an interconnect chip located on the package substrate, and configured to communicate with a back-end chip corresponding to a last node of a daisy chain formed by the plurality of back-end chips (Chiang, Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0060]; Burger, [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]; and Bhagavat, Fig. 1-2; [0019]-[0024]), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features.

As per claim 9, Chiang, Burger, and Bhagavat teach/suggest all the claimed features of claim 8 above, where Chiang, Burger, and Bhagavat teach/suggest the storage architecture comprising: wherein the interconnect chip is electrically connected to at least one of a plurality of solder balls located on a lower surface of the package substrate (Chiang, Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0060]; Burger, [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]; and Bhagavat, Fig. 1-2; [0019]-[0024]), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features.

As per claim 10, Chiang, Burger, and Bhagavat teach/suggest all the claimed features of claim 9 above, where Chiang, Burger, and Bhagavat teach/suggest the storage architecture comprising: wherein the plurality of accelerator memory devices is electrically disconnected from the plurality of solder balls (Chiang, Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0060]; Burger, [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]; and Bhagavat, Fig. 1-2; [0019]-[0024]), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features.

As per claim 11, Chiang, Burger, and Bhagavat teach/suggest all the claimed features of claim 8 above, where Chiang, Burger, and Bhagavat teach/suggest the storage architecture comprising: wherein the interconnect chip is configured to communicate with another interconnect chip which is included in a semiconductor package located outside of the storage architecture (Chiang, Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0060]; Burger, [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]; and Bhagavat, Fig. 1-2; [0019]-[0024]), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features.

As per claim 12, Chiang, Burger, and Bhagavat teach/suggest all the claimed features of claim 8 above, where Chiang, Burger, and Bhagavat teach/suggest the storage architecture comprising: wherein the interconnect chip is not directly connected to the plurality of accelerator memory devices (Chiang, Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0060]; Burger, [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]; and Bhagavat, Fig. 1-2; [0019]-[0024]), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features.

As per claim 16, Chiang teaches/suggests a storage architecture comprising: a front-end (e.g. associated with Fig. 1, ref. 104A) configured to perform communication with a host device (e.g. associated with Fig. 1, ref. 10A, 10B); a plurality of back-ends (e.g. associated with Fig. 1, ref. 200A and Fig. 4, ref. 200A) configured to perform communication with the front-end, the plurality of back-ends coupled to each other in series based on a daisy chain scheme (e.g. associated with the daisy chain architecture of Fig. 4, ref. 200A); and a plurality of memory devices (e.g. associated with Fig. 1, ref. 204; Fig. 4, ref. 204) (Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0058]).

Chiang does not teach the storage architecture comprising: a chip located on a package substrate; chips disposed on the package substrate and communicating with the chip, the chips operating accordingly; and an accelerator module located outside the package substrate, operating with the chips.

Burger teaches/suggests an architecture comprising: chips configured to perform communication with a chip and operate with the chips accordingly (e.g. by combining communication between a front-end processor ASIC and a back-end processor ASIC with Chiang's front end and plurality of back ends, the resulting combination of the references would further teach/suggest the above claimed features: [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]); and an accelerator module (e.g. a module associated with a hardware accelerator such as an FPGA: [0020]; [0128]) operating with the chips (e.g. by combining communication between a front-end processor ASIC and a back-end processor ASIC with Chiang's front end and plurality of back ends, the resulting combination of the references would further teach/suggest the above claimed features) ([0020]-[0022]; [0035]-[0038]; [0045]-[0058]; and [0128]).

Bhagavat teaches/suggests an architecture comprising: a chip (e.g. associated with Fig. 2, ref. 114, 116) located on the package substrate (e.g. associated with Fig. 2, ref. 140); and a module located outside the package substrate (e.g. associated with Fig. 2, ref. 140) (Fig. 1-2; [0019]-[0024]).

It would have been obvious to one of ordinary skill in this art, before the effective filing date of the claimed invention, to include Burger's chip interconnecting architecture and Bhagavat's packaging architecture in Chiang's storage architecture for the benefit of implementing a robust architecture that reduces compression time while maintaining compression quality (Burger, [0019]) and reduces warpage of an encapsulated integrated circuit module (Bhagavat, [0029]), to obtain the invention as specified in claim 16.
As per claim 17, Chiang, Burger, and Bhagavat teach/suggest all the claimed features of claim 16 above, where Chiang, Burger, and Bhagavat teach/suggest the storage architecture comprising: wherein each of the plurality of back-end chips comprises: a first controller configured to control an operation of the back-end chip; a second controller configured to control at least a part of the plurality of accelerator memory devices; a first buffer memory circuit configured to store data according to an operation of the first controller; and a second buffer memory circuit configured to store data associated with data arithmetic by the plurality of accelerator memory devices in response to an operation of the second controller (Chiang, Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0058]; Burger, [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]; and Bhagavat, Fig. 1-2; [0019]-[0024]), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features.

As per claim 18, Chiang, Burger, and Bhagavat teach/suggest all the claimed features of claim 16 above, where Chiang, Burger, and Bhagavat teach/suggest the storage architecture comprising: wherein the front-end chip comprises a first front-end link configured to communicate with at least one of the plurality of back-end chips and a second front-end link that is in a disabled state (Chiang, Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0058]; Burger, [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]; and Bhagavat, Fig. 1-2; [0019]-[0024]), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features.

As per claim 19, Chiang, Burger, and Bhagavat teach/suggest all the claimed features of claim 18 above, where Chiang, Burger, and Bhagavat teach/suggest the storage architecture comprising: wherein one of the plurality of back-end chips comprises a back-end link connected to the front-end chip, and each of the rest of the plurality of back-end chips comprises a back-end link disconnected from the front-end chip (Chiang, Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0058]; Burger, [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]; and Bhagavat, Fig. 1-2; [0019]-[0024]), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features.

As per claim 20, Chiang, Burger, and Bhagavat teach/suggest all the claimed features of claim 16 above, where Chiang, Burger, and Bhagavat teach/suggest the storage architecture further comprising: an interconnect chip located on the package substrate, and configured to communicate with a back-end chip corresponding to a last node of a daisy chain formed by the plurality of back-end chips (Chiang, Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0058]; Burger, [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]; and Bhagavat, Fig. 1-2; [0019]-[0024]), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features.

Claims 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Chiang et al. (US Pub.: 2013/0191576) in view of Burger et al. (US Pub.: 2017/0147624), Bhagavat et al. (US Pub.: 2020/0185367), and Burnham (US Patent 6,597,232).

As per claim 13, Chiang teaches/suggests a storage architecture comprising: a plurality of memory devices (e.g. associated with Fig. 1, ref. 204; Fig. 4, ref. 204); a front-end (e.g. associated with Fig. 1, ref. 104A) configured to perform communication with a host device (e.g. associated with Fig. 1, ref. 10A, 10B); and a plurality of back-ends (e.g. associated with Fig. 1, ref. 200A and Fig. 4, ref. 200A) configured to communicate with the front-end and control at least a part of the plurality of memory devices (Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0058]).

Chiang does not teach the storage architecture comprising: an accelerator module being on a package substrate; a chip located on the package substrate; a bridge chip disposed on the package substrate and configured to communicate with the front-end chip; and chips located on the package substrate, and configured to communicate with the chip through the bridge chip and operate with the accelerator module.

Burger teaches/suggests an architecture comprising: an accelerator module (e.g. a module associated with a hardware accelerator such as an FPGA: [0020]; [0128]); and chips configured to perform communication with a chip and operate with the accelerator module (e.g. by combining communication between a front-end processor ASIC and a back-end processor ASIC with Chiang's front end and plurality of back ends, the resulting combination of the references would further teach/suggest the above claimed features: [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]) ([0020]-[0022]; [0035]-[0038]; [0045]-[0058]; and [0128]).

Bhagavat teaches/suggests an architecture comprising: being on a package substrate (e.g. associated with Fig. 2, ref. 140); a chip (e.g. associated with Fig. 2, ref. 114, 116) located on the package substrate (e.g. associated with Fig. 2, ref. 140) and configured to operate with a chip (e.g. associated with Fig. 2, ref. 116, 114); a chip disposed on the package substrate (e.g. associated with Fig. 2, ref. 140) and configured to operate with a chip (e.g. associated with Fig. 2, ref. 114, 116); and being located on the package substrate (e.g. associated with Fig. 2, ref. 140) and configured to operate with a chip (Fig. 1-2; [0019]-[0024]).

Burnham teaches/suggests an architecture comprising: a bridge (e.g. associated with Fig. 2, ref. 240) that communicates with the front-end (e.g. associated with Fig. 2, ref. 1801-18032); and elements configured to communicate through the bridge (Fig. 2; col. 4, l. 42 to col. 7, l. 60).

It would have been obvious to one of ordinary skill in this art, before the effective filing date of the claimed invention, to include Burger's chip interconnecting architecture, Bhagavat's packaging architecture, and Burnham's bridge architecture in Chiang's storage architecture for the benefit of implementing a robust architecture that reduces compression time while maintaining compression quality (Burger, [0019]), reduces warpage of an encapsulated integrated circuit module (Bhagavat, [0029]), and increases the operation bandwidth (Burnham, col. 5, ll. 43-45), to obtain the invention as specified in claim 13.

As per claim 14, Chiang, Burger, Bhagavat, and Burnham teach/suggest all the claimed features of claim 13 above, where Chiang, Burger, Bhagavat, and Burnham teach/suggest the storage architecture comprising: wherein the bridge chip comprises: a first bridge link configured to communicate with the front-end chip; and a plurality of second bridge links configured to communicate with the plurality of back-end chips (Chiang, Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0058]; Burger, [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]; Bhagavat, Fig. 1-2; [0019]-[0024]; and Burnham, Fig. 2; col. 4, l. 42 to col. 7, l. 60), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features.

As per claim 15, Chiang, Burger, Bhagavat, and Burnham teach/suggest all the claimed features of claim 13 above, where Chiang, Burger, Bhagavat, and Burnham teach/suggest the storage architecture further comprising: an interconnect chip configured to communicate with the bridge chip, and not directly connected to the plurality of back-end chips (Chiang, Fig. 1; Fig. 4; [0028]-[0044]; [0055]-[0058]; Burger, [0020]-[0022]; [0035]-[0038]; [0045]-[0058]; [0128]; Bhagavat, Fig. 1-2; [0019]-[0024]; and Burnham, Fig. 2; col. 4, l. 42 to col. 7, l. 60), wherein it would have been obvious to one of ordinary skill in the art to further implement the above claimed features.

II. CLOSING COMMENTS

CONCLUSION

STATUS OF CLAIMS IN THE APPLICATION
The following is a summary of the treatment and status of all claims in the application as recommended by M.P.E.P. 707.07(i):

CLAIMS REJECTED IN THE APPLICATION
Per the instant Office action, claims 1-20 have received a first action on the merits and are the subject of a first-action non-final rejection.

DIRECTION OF FUTURE CORRESPONDENCE
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHUN KUAN LEE, whose telephone number is (571) 272-0671. The examiner can normally be reached Monday-Friday.

IMPORTANT NOTE
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Idriss Alrobaye, can be reached at (571) 270-1023. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHUN KUAN LEE/
Primary Examiner, Art Unit 2181
March 17, 2026
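
When drafting a response, it can help to have the claim 1 arrangement that the rejection maps against laid out concretely. The sketch below is only an illustrative data model of the topology as characterized in the rejection (a front-end chip communicating with the host, back-end chips coupled in series in a daisy chain, each controlling part of the memory devices, and an accelerator module on the package substrate); every class and field name is invented for illustration and nothing here is taken from the application's specification.

```python
# Illustrative model of the claim 1 topology as characterized in the rejection above.
# All names are hypothetical; they are not taken from the specification or the cited references.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MemoryDevice:
    device_id: int

@dataclass
class BackEndChip:
    chip_id: int
    memory: List[MemoryDevice]                     # controls at least a part of the memory devices
    next_in_chain: Optional["BackEndChip"] = None  # series coupling per the daisy chain scheme

@dataclass
class FrontEndChip:
    host_link: str = "host"                        # front-end communicates with the host device
    first_back_end: Optional[BackEndChip] = None   # only one back-end link reaches the front-end (cf. claim 3)

@dataclass
class PackageSubstrate:
    front_end: FrontEndChip
    back_ends: List[BackEndChip]
    accelerator_module: str = "accelerator"        # accelerator module on the package substrate

# Build a three-chip daisy chain: front-end -> BE0 -> BE1 -> BE2
back_ends = [BackEndChip(i, [MemoryDevice(10 * i + j) for j in range(2)]) for i in range(3)]
for upstream, downstream in zip(back_ends, back_ends[1:]):
    upstream.next_in_chain = downstream
package = PackageSubstrate(FrontEndChip(first_back_end=back_ends[0]), back_ends)
```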

Prosecution Timeline

Sep 09, 2024: Application Filed
Mar 17, 2026: Non-Final Rejection under §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602270: KV-CACHE STREAMING FOR IMPROVED PERFORMANCE AND FAULT TOLERANCE IN GENERATIVE MODEL SERVING (2y 5m to grant; granted Apr 14, 2026)
Patent 12596659: METHODS, DEVICES AND SYSTEMS FOR HIGH SPEED TRANSACTIONS WITH NONVOLATILE MEMORY ON A DOUBLE DATA RATE MEMORY BUS (2y 5m to grant; granted Apr 07, 2026)
Patent 12579080: OUTPUT METHOD AND DEVICE (2y 5m to grant; granted Mar 17, 2026)
Patent 12579089: DATA PROCESSING METHOD, APPARATUS AND SYSTEM BASED ON PARA-VIRTUALIZATION DEVICE (2y 5m to grant; granted Mar 17, 2026)
Patent 12554540: EVENT PROCESSING BY HARDWARE ACCELERATOR (2y 5m to grant; granted Feb 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 68%
With Interview: 71% (+3.1%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 669 resolved cases by this examiner. Grant probability is derived from the career allow rate.
