Prosecution Insights
Last updated: April 19, 2026
Application No. 18/457,171

COMPUTATIONAL STORAGE FOR AN ENERGY-EFFICIENT DEEP NEURAL NETWORK TRAINING SYSTEM

Non-Final OA: §101, §103
Filed
Aug 28, 2023
Examiner
MAMILLAPALLI, PAVAN
Art Unit
2159
Tech Center
2100 — Computer Architecture & Software
Assignee
SK Hynix Inc.
OA Round
1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 80% (597 granted / 743 resolved; +25.3% vs TC avg, above average)
Interview Lift: +17.2% among resolved cases with interview (strong)
Typical Timeline: 3y 3m average prosecution (21 currently pending)
Career History: 764 total applications across all art units

Statute-Specific Performance

§101: 24.1% (-15.9% vs TC avg)
§103: 51.7% (+11.7% vs TC avg)
§102: 8.0% (-32.0% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 743 resolved cases.
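The four "vs TC avg" deltas are internally consistent: read as percentage points, each statute implies the same Tech Center baseline. A quick consistency check (an illustrative sketch; the dashboard's exact methodology is not stated):

```python
# Subtracting each stated delta (in percentage points) from the
# examiner's per-statute rate recovers the implied Tech Center average.
rates  = {"101": 24.1, "103": 51.7, "102": 8.0, "112": 8.5}
deltas = {"101": -15.9, "103": 11.7, "102": -32.0, "112": -31.5}
implied = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied)  # every statute implies a 40.0% TC average
```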

Office Action

§101 §103
DETAILED ACTION

This Office Action is in response to Application No. 18/457,171, filed on August 28, 2023, in which claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Provisional Application No. 63/415,476 was filed on October 12, 2022.

Status of Claims

Claims 1-20 are pending. Claims 1-20 are rejected under 35 U.S.C. 101, and claims 1-20 are also rejected under 35 U.S.C. 103.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claims are directed to an abstract idea, and because the claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more than the abstract idea; see Alice Corporation Pty. Ltd. v. CLS Bank International, 573 U.S. (2014). In determining whether the claims are subject matter eligible, the Examiner applies the 2019 USPTO Patent Eligibility Guidelines (2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, Jan. 7, 2019).

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes—claims 1-20 recite a system and a method, respectively. The analysis of claims 1 and 11 is as follows:

Step 2A, prong one: Do claims 1 and 11 recite an abstract idea, law of nature, or natural phenomenon?
Yes—the limitations of "a dynamic random access memory (DRAM) configured to buffer training data; a central processing unit (CPU) coupled to the DRAM and configured to downsample the training data and provide the DRAM with the downsampled training data; a computational storage consisting of a solid-state drive (SSD) and field-programmable gate array (FPGA) and configured to perform dimensionality reduction on the downsampled training data to generate training data batches; and a graphic processing unit (GPU) configured to perform training on the training data batches," as drafted, are mental steps based on various processes that can be performed in the human mind, namely creating data into batches for training by arranging hardware (acts of thinking and decision making). These limitations therefore fall within the mental processes grouping and can be performed with pen and paper.

Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application?

No—the judicial exception is not integrated into a practical application, as just stated, as related to the technical field of computer science. Although the claim recites that the functionality includes a "method" and "system," these computer components are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components.
In addition, the claim recites "a dynamic random access memory (DRAM) configured to buffer training data; a central processing unit (CPU) coupled to the DRAM and configured to downsample the training data and provide the DRAM with the downsampled training data; a computational storage consisting of a solid-state drive (SSD) and field-programmable gate array (FPGA) and configured to perform dimensionality reduction on the downsampled training data to generate training data batches; and a graphic processing unit (GPU) configured to perform training on the training data batches," which amounts to mere gathering of data and applying process steps (i.e., creating data for training); the computers that perform those functions and the mental steps are recited at a high level of generality that does not impose a meaningful limitation on the judicial exception and is insufficient to integrate the mental steps into a practical application. Although the claim recites the additional functionality "perform dimensionality reduction on the downsampled training data to generate training data batches," the gathering and determining are also recited at a high level of generality and merely generally link to respective technological environments (e.g., a training model), and therefore likewise amount to no more than mere instructions to apply the exception using generic computer components and are insufficient to integrate the steps into a practical application.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

No—the recitation in the preamble is insufficient to transform a judicial exception into a patentable invention because the preamble elements are recited at a high level of generality that simply links to a field of use; see MPEP 2106.05(h).
The claimed extra-solution activity of performing reduction in dimensionality of the data into batches is acknowledged to be well-understood, routine, conventional activity (see, e.g., the court-recognized WURC examples in MPEP 2106.05(d)(II)(i)). Similarly, the gathering and generating are also recited at a high level of generality and merely generally link to respective technological environments. The claim thus recites computing components only at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation.

For the reasons above, claims 1 and 11 are rejected as being directed to non-patentable subject matter under §101.

The analysis of claims 2-10 and 12-20 is as follows:

Step 2A, prong one: Do claims 2-10 and 12-20 recite an abstract idea, law of nature, or natural phenomenon?

Yes—the limitations are as follows. Claims 2 and 12 recite wherein the dimensionality reduction includes random projection. Claims 3 and 13 recite wherein the computational storage provides the GPU with the training data batches through a peer-to-peer direct memory access (P2P-DMA) operation.
Claims 4 and 14 recite wherein the computational storage includes multiple computing units, each computing unit including: buffer blocks configured to store input tiles of the downsampled training data and an output tile of the training data batches; and a digital signal processing (DSP) unit configured to multiply and add the input tiles to generate the output tile. Claims 5 and 15 recite wherein the buffer blocks store two of the input tiles. Claims 6 and 16 recite wherein the input tiles are double-buffered simultaneously by the buffer blocks. Claims 7 and 17 recite wherein a data access pattern of the two input tiles is sequential. Claims 8 and 18 recite wherein the input tiles have a tiled data format, which are reordered from a row-major layout to a data layout for input matrices where the input tiles are in a contiguous region of memory. Claims 9 and 19 recite wherein the downsampled training data include data processed through image resize, data argumentation and/or dimension reshape for the training data. Claims 10 and 20 recite wherein the training data is partitioned and then buffered in the DRAM. As drafted, these are mental steps based on various processes that can be performed in the human mind, namely creating data into batches for training by arranging hardware (acts of thinking and decision making). These limitations therefore fall within the mental processes grouping and can be performed with pen and paper.

Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application?

No—the judicial exception is not integrated into a practical application, as just stated, as related to the technical field of computer science. Although the claim recites that the functionality includes a "method" and "system," these computer components are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components.
In addition, the claims recite the limitations of claims 2-10 and 12-20 quoted above, which amount to mere gathering of data and applying process steps (i.e., creating data for training); the computers that perform those functions and the mental steps are recited at a high level of generality that does not impose a meaningful limitation on the judicial exception and is insufficient to integrate the mental steps into a practical application.
Although the claim recites the additional functionality "perform dimensionality reduction on the downsampled training data to generate training data batches," the gathering and determining are also recited at a high level of generality and merely generally link to respective technological environments (e.g., a training model), and therefore likewise amount to no more than mere instructions to apply the exception using generic computer components and are insufficient to integrate the steps into a practical application.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

No—the recitation in the preamble is insufficient to transform a judicial exception into a patentable invention because the preamble elements are recited at a high level of generality that simply links to a field of use; see MPEP 2106.05(h). The claimed extra-solution activity of performing reduction in dimensionality of the data into batches is acknowledged to be well-understood, routine, conventional activity (see, e.g., the court-recognized WURC examples in MPEP 2106.05(d)(II)(i)). Similarly, the gathering and generating are also recited at a high level of generality and merely generally link to respective technological environments. The claim thus recites computing components only at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually.
There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. For the reasons above, claims 2-10 and 12-20 are rejected as being directed to non-patentable subject matter under §101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C.
102(a)(2) prior art against the later invention.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Heaton et al., US 11,875,247 B1 (hereinafter "Heaton"), in view of Theodorou et al., US 2024/0105292 A1 (hereinafter "Theodorou").

As per claim 1, Heaton discloses: A training system comprising (Heaton: Col 8, Lines 52-53: disclose training a neural network model in which the weight values): a dynamic random access memory (DRAM) configured to buffer training data (Heaton: Col 11, Lines 36-38: disclose DRAM, which may provide additional storage capacity for the neural network acceleration engine; and Col 16, Lines 65-66 and Fig. 7: disclose that input data and/or program code for the accelerators 702a-702n can be stored in the DRAM); a central processing unit (CPU) coupled to the DRAM and configured to downsample the training data and provide the DRAM with the downsampled training data (Heaton: Fig. 7 and Col 18, Lines 14-20: disclose chip interconnect 720, which primarily includes wiring for routing data between the components of the acceleration engine 700. In some cases, the chip interconnect 720 can include a minimal amount of logic, such as multiplexors to control the direction of data, flip-flops for handling clock domain crossings, and timing logic. The Examiner equates the chip interconnect to a CPU, as the chip interconnect has a built-in amount of logic); and a graphic processing unit (GPU) (Heaton: Col 3, Lines 32-35: disclose that the acceleration engine 112 can be a graphics processing unit (GPU), and may be optimized to perform the computations needed for graphics rendering) configured to perform training on the training data batches (Heaton: Col 11, Lines 12-14: disclose neural network computations such as matrix multiplication, and that the batches of input data can be tensors or feature maps).
It is noted, however, that Heaton did not specifically detail the aspects of a computational storage consisting of a solid-state drive (SSD) and field-programmable gate array (FPGA) and configured to perform dimensionality reduction on the downsampled training data to generate training data batches, as recited in claim 1. On the other hand, Theodorou achieved the aforementioned limitations by providing mechanisms of a computational storage consisting of a solid-state drive (SSD) (Theodorou: paragraph 0209: discloses a Solid-State Drive (SSD)) and field-programmable gate array (FPGA) (Theodorou: paragraph 0177: discloses a field programmable gate array (FPGA)) and configured to perform dimensionality reduction on the downsampled training data to generate training data batches (Theodorou: paragraph 0046: discloses that GANs struggle with high-dimensional, sparse data, causing existing synthetic EHR approaches to produce relatively low-dimensional data through the aggregation of visits, combination of codes, removal of rare codes, and/or other dimensionality reductions).

Heaton and Theodorou are analogous art because they are from the same field of endeavor and the same problem-solving area; namely, both are from the field of machine learning systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the systems of Heaton and Theodorou because both are directed to machine learning systems and both are from the same field of endeavor. The skilled person would therefore regard it as a normal option to include the features of Theodorou with the method described by Heaton in order to solve the problem posed.
The motivation for doing so would have been that, to provide the necessary amount of utility for real-world use, there is a need for a generative AI model that can produce suitable high-dimensional synthesized EHR data (Theodorou: paragraph 0016). Therefore, it would have been obvious to combine Theodorou with Heaton to obtain the invention as specified in instant claim 1.

As per claim 2, most of the limitations of this claim have been noted in the rejection of claim 1 above. It is noted, however, that Heaton did not specifically detail the aspects of wherein the dimensionality reduction includes random projection, as recited in claim 2. On the other hand, Theodorou achieved the aforementioned limitations by providing mechanisms of wherein the dimensionality reduction includes random projection (Theodorou: paragraph 0097: discloses that random or pseudo-random sampling throughout the generation process may be used to add diversity, as sampling the same sequence from the same starting point will yield different results).

As per claim 3, most of the limitations of this claim have been noted in the rejection of claim 1 above. In addition, Heaton discloses wherein the computational storage provides the GPU with the training data batches through a peer-to-peer direct memory access (P2P-DMA) operation (Heaton: Fig. 7, Elements 746a and 746d: disclose a peer-to-peer DMA operation).

As per claim 4, most of the limitations of this claim have been noted in the rejection of claim 1 above. In addition, Heaton discloses wherein the computational storage includes multiple computing units, each computing unit including: buffer blocks configured to store input tiles of the downsampled training data and an output tile of the training data batches; and a digital signal processing (DSP) unit configured to multiply and add the input tiles to generate the output tile (Heaton: Fig. 7: discloses multiple computing units and I/O Device Element 732 for output).
As per claim 5, most of the limitations of this claim have been noted in the rejection of claims 1 and 4 above. In addition, Heaton discloses wherein the buffer blocks store two of the input tiles (Heaton: Col 5, Lines 49-52: disclose multiple processing engines arranged in a matrix of rows and columns to perform computations used in neural networks, such as integration, convolution, correlation, and/or matrix multiplication).

As per claim 6, most of the limitations of this claim have been noted in the rejection of claims 1, 4 and 5 above. In addition, Heaton discloses wherein the input tiles are double-buffered simultaneously by the buffer blocks (Heaton: Col 5, Lines 52-54: disclose that a state buffer can be used to temporarily store data for loading into the processing engine array).

As per claim 7, most of the limitations of this claim have been noted in the rejection of claims 1 and 4 above. In addition, Heaton discloses wherein a data access pattern of the two input tiles is sequential (Heaton: Col 10, Lines 59-60: disclose that a set of shared data can be similarly loaded into subsequent accelerators in a sequence).

As per claim 8, most of the limitations of this claim have been noted in the rejection of claims 1 and 4 above. In addition, Heaton discloses wherein the input tiles have a tiled data format, which are reordered from a row-major layout to a data layout for input matrices where the input tiles are in a contiguous region of memory (Heaton: Col 5, Lines 49-52: disclose multiple processing engines arranged in a matrix of rows and columns to perform computations used in neural networks, such as integration, convolution, correlation, and/or matrix multiplication).

As per claim 9, most of the limitations of this claim have been noted in the rejection of claims 1 and 4 above.
It is noted, however, that Heaton did not specifically detail the aspects of wherein the downsampled training data include data processed through image resize, data argumentation and/or dimension reshape for the training data, as recited in claim 9. On the other hand, Theodorou achieved the aforementioned limitations by providing mechanisms of wherein the downsampled training data include data processed through image resize, data argumentation and/or dimension reshape for the training data (Theodorou: paragraph 0046: discloses that GANs struggle with high-dimensional, sparse data, causing existing synthetic EHR approaches to produce relatively low-dimensional data through the aggregation of visits, combination of codes, removal of rare codes, and/or other dimensionality reductions).

As per claim 10, most of the limitations of this claim have been noted in the rejection of claims 1 and 4 above. It is noted, however, that Heaton did not specifically detail the aspects of wherein the training data is partitioned and then buffered in the DRAM, as recited in claim 10. On the other hand, Theodorou achieved the aforementioned limitations by providing mechanisms of wherein the training data is partitioned and then buffered in the DRAM (Theodorou: paragraph 0102: discloses that, to generate discrete versions of continuous variables, such as lab values and temporal gaps, the range of each variable is divided into a plurality of "buckets").

As per claim 11, Heaton discloses a method for operating a training system (Heaton: Col 8, Lines 52-53: disclose training a neural network model in which the weight values), the method comprising: the remaining limitations of claim 11 are similar to the limitations of claim 1. Therefore, the examiner rejects these remaining limitations under the same rationale as the limitations rejected under claim 1. As per claim 12, the limitations of this claim are similar to claim 2.
Therefore, the examiner rejects the claim 12 limitations under the same rationale as claim 2. As per claims 13-20, the limitations of these claims are similar to claims 3-10, respectively; therefore, the examiner rejects claims 13-20 under the same rationale as claims 3-10.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US Pub. 2022/0405005 A1, "Three Dimensional Circuit Systems And Methods Having Memory Hierarchies"; and US Pub. 2015/0370697 A1, "Memory Switching Protocol When Switching Optically-Connected Memory."

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAVAN MAMILLAPALLI, whose telephone number is (571) 270-3836. The examiner can normally be reached M-F, 8am-4pm EST.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ann J Lo, can be reached at (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/PAVAN MAMILLAPALLI/
Primary Examiner, Art Unit 2159
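For technical context on the claims at issue: the random projection recited in claims 2 and 12 reduces data dimensionality by multiplying each sample with a random matrix. A minimal pure-Python sketch (illustrative only; the dimensions, seed, and Gaussian construction are assumptions, not taken from the application or the cited art):

```python
import random

# Illustrative random projection: map a d-dimensional downsampled
# sample to k dimensions (k << d) via a random Gaussian matrix
# scaled by 1/sqrt(k). All sizes here are arbitrary examples.
random.seed(0)
d, k = 8, 3
x = [random.gauss(0, 1) for _ in range(d)]                     # one sample
R = [[random.gauss(0, 1) / k ** 0.5 for _ in range(k)] for _ in range(d)]
y = [sum(x[i] * R[i][j] for i in range(d)) for j in range(k)]  # projected
assert len(y) == k
```

Projections of this form approximately preserve pairwise distances (the Johnson-Lindenstrauss lemma), which is why random projection is a common low-cost dimensionality-reduction step ahead of training.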

Prosecution Timeline

Aug 28, 2023
Application Filed
Mar 15, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602389: RECOMMENDATION WORD DETERMINATION METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND STORAGE MEDIUM (2y 5m to grant; granted Apr 14, 2026)
Patent 12603155: METHODS FOR COMPRESSION OF MOLECULAR TAGGED NUCLEIC ACID SEQUENCE DATA (2y 5m to grant; granted Apr 14, 2026)
Patent 12601597: GENERATING, FROM DATA OF FIRST LOCATION ON SURFACE, DATA FOR ALTERNATE BUT EQUIVALENT SECOND LOCATION ON THE SURFACE (2y 5m to grant; granted Apr 14, 2026)
Patent 12602503: GENERATING, FROM DATA OF FIRST LOCATION ON SURFACE, DATA FOR ALTERNATE BUT EQUIVALENT SECOND LOCATION ON THE SURFACE (2y 5m to grant; granted Apr 14, 2026)
Patent 12591580: CONFIDENCE FABRIC ENHANCED PRIVACY-PRESERVING DATA AGGREGATION (2y 5m to grant; granted Mar 31, 2026)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 98% (+17.2%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 743 resolved cases by this examiner. Grant probability derived from career allow rate.
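The headline figures follow from the career counts above. A small sketch, assuming the interview lift is additive in percentage points (how the dashboard rounds the with-interview figure to 98% is not stated):

```python
# Career allow rate from the examiner's resolved cases, plus the
# stated +17.2 percentage-point interview lift.
granted, resolved, lift = 597, 743, 17.2
base = granted / resolved * 100
print(round(base, 1))         # 80.3, reported as 80%
print(round(base + lift, 1))  # 97.5, reported as ~98% with interview
```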
