Prosecution Insights
Last updated: April 19, 2026
Application No. 17/812,461

STORAGE DEVICE AND METHOD OF OPERATING THE SAME

Status: Non-Final Office Action (§103)
Filed: Jul 14, 2022
Examiner: RIGOL, YAIMA
Art Unit: 2135
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 2m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 75% (above average; 464 granted / 619 resolved; +20.0% vs Tech Center average)
Interview Lift: +17.5% across resolved cases with an interview (a strong lift)
Typical Timeline: 3y 2m average prosecution; 18 applications currently pending
Career History: 637 total applications across all art units
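The figures above reduce to simple ratios over the examiner's resolved cases. As a minimal, hypothetical sketch of that arithmetic (the record fields and per-case data are assumptions, not the dashboard's actual schema), the allow rate and interview lift could be computed like this:

```python
# Hypothetical sketch: the allow rate (464/619 = 75%) and the interview
# lift (+17.5%) shown above. The ResolvedCase fields are assumed.
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Percentage of resolved cases that were granted."""
    return 100.0 * sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Allow rate with an interview minus allow rate without one."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)
```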

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§102: 9.2% (-30.8% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§112: 17.5% (-22.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 619 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

As per the instant application having Application No. 17/812,461: Applicant's election without traverse of Group I (claims 1-10 and 16-20) in the reply filed on 1/6/2026 is acknowledged. Claims 11-15 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected invention, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 1/6/2026. The restriction requirement is deemed proper and is therefore made final. Claims 1-10 and 16-20 are ready for examination.

The specification has not been checked to the extent necessary to determine the presence of all possible minor errors. In the response to this Office action, the Examiner respectfully requests that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the Examiner in prosecuting this application.

The Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

INFORMATION CONCERNING DRAWINGS

The applicant's drawings as submitted are acceptable for examination purposes.

STATUS OF CLAIM FOR PRIORITY IN THE APPLICATION

The instant Application No. 17/812,461, filed 07/14/2022, claims foreign priority to 10-2021-0118826, filed 09/07/2021. All certified copies of the priority documents have been received.

ACKNOWLEDGEMENT OF REFERENCES CITED BY APPLICANT

As required by M.P.E.P. 609(C), the applicant's submission of the Information Disclosure Statement(s) dated 7/14/2022 is/are acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending. As required by M.P.E.P. 609(C)(2), a copy (copies) of the PTOL-1449(s) initialed and dated by the examiner is/are attached to the instant office action.

CLAIM CONSTRUCTION

The present application contains contingent limitations. Applicant is reminded that "the broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met." See MPEP 2111.04(II). See Ex parte Schulhauser, Appeal No. 2013-007847, 2016 WL 6277792, at *9 (PTAB, Apr. 28, 2016) (precedential) (holding "The Examiner did not need to present evidence of the obviousness of the remaining method steps of the claim that are not required to be performed under a broadest reasonable interpretation of the claim"); see also Ex parte Katz, Appeal No. 2010-006083, 2011 WL 514314, at *4-5 (BPAI Jan. 27, 2011) (Board Decision pages 5-6, emphasis in original). It is suggested that the conditional statements be removed.
Alternatively, the conditions precedent may be claimed affirmatively in order to give the claims their proper weight. For example, the limitation "when a number of iterations is not greater than a predetermined value, increasing…" (in claim 16) may be amended to read "determining that the number of iterations is not greater than a predetermined value, and in response to the determining, increasing…".

REJECTIONS BASED ON PRIOR ART

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8, 10, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kale et al. (US 2021/0255799) in view of OH (US 2019/0114078) and Zalivaka et al. (US 2022/0326876).

1. A method of operating a storage device, comprising: receiving a learning request for learning a new parameter value for a parameter; [Kale teaches "the ANN can be trained using a supervised learning technique to refine or establish a prediction model." (par. 0045) "[0046] For example, the current operating parameters of the vehicle, applications, and/or the data storage device can be provided as input to the ANN to derive the predicted workload for the subsequent time period, the preferred cache scheme, the optimized background maintenance schedule, and the preferred performance throttling within the time period. Subsequent changes in the performance and temperature of the data storage device can be measured as a result of changing in caching/buffering aspects, in the timing and frequency of background maintenance processes, and/or in the performance throttling. The measurements of performance, temperature, and the implemented parameters of caching/buffering, background maintenance processes, and the performance throttling pattern within in period of time can be used input data in machine learning to improve the predictive capability of the ANN.", but does not expressly disclose the storage device receiving a learning request];

evaluating a performance of a workload using a current parameter value of the parameter to generate performance metrics; [Kale teaches "[0064]… the workload of the data storage device (112) can be determined from the patterns in the input/output data streams (103 and 109). The operating condition can be used to predict the optimized parameters and configurations of buffering/caching (106) and the optimized timing and frequency of background maintenance operations (e.g., 107 and 108)."];

performing machine learning in response to the learning request using a plurality of learning models to infer relational expressions between the parameter and the performance metrics, using performance evaluation information according to a performance evaluation of the workload; [Kale teaches "[0050] The data storage device (112) stores a model of an Artificial Neural Network (ANN) (125). The inference engine (101) uses the ANN (125) to predict parameters and configurations of operations of the data storage device (112), such as buffering/caching (106), garbage collection (107), wear leveling (108), predicted operations in the queue (110), etc. to optimize the measured performance of the data storage device (112) and to keep the temperature as measured by the temperature sensor (102) within a predetermined range." (see pars. 0054, 0181, 0195), where the learning operations are performed based on the storage device workload (see the par. 0064 citation above), but does not expressly disclose the learning using a plurality of learning models];

deriving the new parameter value using the inferred relational expressions; and applying the new parameter value to a firmware algorithm [Kale teaches "[0054] Further, the controller (151) can perform background maintenance operations, such as garbage collection (107), wear leveling (108), etc. The timing and frequency of the background maintenance operations can impact the performance of the data storage device (112). The inference engine (101) uses the ANN (125) to determine the timing and frequency of the maintenance operations (e.g., 107, 108) to optimize the performance measured for the data storage device (112), based on the patterns in the input data stream (103) and/or the output data stream (109)."], where the new parameters or the timing and frequency for maintenance operations are applied to the software or firmware operating the memory device; note that Kale teaches ["memory (135) storing firmware (or software) (147)," (par. 0077) "hardwired circuitry may be used in combination with software instructions to implement the techniques." (par. 0222)], but Kale does not expressly refer to applying the new parameter value to a firmware algorithm.

With respect to the limitation of the storage device receiving a learning request… learning… using a plurality of learning models, OH teaches ["[0026]… The model classifier 160 sends a model selection request MSR indicating the selected model to the storage device 200." "[0027] The storage device 200 includes a model selection module 234, a model execution processor 240 (e.g., a central processing unit, a digital signal processor, etc.), and a nonvolatile memory device 280. The model selection module 234 selects a model depending on the model selection request MSR. In an embodiment, the host device 100 sends a signal to the storage device 200 including the model selection request MSR. The model selection module 234 may load model data MOD of the selected model on the model execution processor 240. In an embodiment, the model selection module 234 is implemented as a computer program that is executed by a processor of the storage device 234. In an embodiment, the model selection module 234 is implemented by a logic circuit, a memory or registers storing the model data MOD of each of a plurality of models, and a multiplexer input with signals indicative of the models, where the model selection request MSR is applied as a control signal to the multiplexer to cause output of one of the signals to the logic circuit, and the logic circuit loads the corresponding model data MOD onto the model execution processor 240."].

Kale and OH are analogous art because they are from the same field of endeavor of memory access and control. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Kale to include the storage device receiving a learning request… learning… using a plurality of learning models, as taught by OH, since doing so would provide the benefits of ["a storage device that provides an optimum operating performance to each user" (par. 0006)].

The combination of Kale and OH does not expressly disclose applying the new parameter value to a firmware algorithm; however, regarding these limitations, Zalivaka teaches ["[0112] The recurrent neural network coder 500 may operate in a training mode or an inference mode. In an initial stage (i.e., the training mode), the recurrent neural network coder 500 may be trained using a data set of possible workload types covering typical drive operation scenarios. Then weight matrices of the model associated with the recurrent neural network coder 500 and compact representations of typical workloads may be stored in a storage (e.g., a DRAM) or a memory device (e.g., NAND). The training mode may be performed offline using an external compute engine. In the inference mode, the recurrent neural network coder 500 may process input workload using the weight matrices. Essential FW parameters (e.g., garbage collection algorithm, read voltage thresholds, error correction schemes, etc.) may be changed based on compact workload representation and FW state (e.g., the value of counters, used over-provisioning memory, the number of bad blocks, etc.). The self-testing algorithm may be based on the generation of compact workload vectors and transforming them into the commands internally in the controller. As a result, the low-dimensional space of the compact workloads representation may be covered with a better diversity."].

Kale, OH and Zalivaka are analogous art because they are from the same field of endeavor of memory access and control. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the combination of Kale and OH to include applying the new parameter value to a firmware algorithm, as taught by Zalivaka, which may include maintenance operation parameters such as those taught by Kale, since doing so would provide the benefits of ["In an embodiment, a memory controller may be capable of being aware of input workloads based on a compact representation for input workloads in a memory system (e.g., SSD such as NAND flash memory devices) and may perform an operation (i.e., tuning of firmware parameters) in order to optimize its performance." (par. 0142)]. Therefore, it would have been obvious to combine Kale, OH and Zalivaka for the benefit of creating a storage system/method to obtain the invention as specified in claim 1.
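For readers mapping the rejection onto the claim, the following is a minimal, hypothetical sketch of the claim 1 flow as paraphrased above: evaluate the workload at the current parameter value, infer a relational expression between the parameter and each performance metric with a separate learning model, and derive a new value from those relations. The least-squares line fit, the metric names, and the candidate search are illustrative assumptions only; they are not drawn from Kale, OH, or Zalivaka.

```python
# Hypothetical sketch of the claim 1 parameter-learning flow. The linear
# "relational expression" and all names are illustrative assumptions.

def fit_line(samples):
    """Least-squares line y = a*x + b: one simple relational expression."""
    n = len(samples)
    sx = sum(x for x, _ in samples); sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples); sxy = sum(x * y for x, y in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def derive_new_value(history, candidates):
    """history: rows of (parameter_value, {metric: score}), cf. claim 7's table."""
    metrics = history[0][1].keys()
    # One learning model (here, a line fit) per metric: claim 1's "plurality
    # of learning models" inferring parameter-to-metric relations.
    relations = {m: fit_line([(v, row[m]) for v, row in history]) for m in metrics}
    def predicted_score(v):
        return sum(a * v + b for a, b in relations.values())
    return max(candidates, key=predicted_score)  # new value to apply to firmware

# Example: two evaluated parameter values and their measured metrics.
history = [(10.0, {"throughput": 0.62, "write_qos": 0.55}),
           (20.0, {"throughput": 0.70, "write_qos": 0.51})]
print(derive_new_value(history, candidates=[5.0, 10.0, 15.0, 20.0, 25.0]))
```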
2. The method of claim 1, wherein the parameter is one of a write throttling latency and a garbage collection to write ratio [Kale teaches "For example, the ANN can be configured to predict the combination of a caching/buffering implementation and the timing and frequency of garbage collection and wear leveling such that the performance of the data storage device is optimized for a subsequent time period of operations without the temperature of the data storage device reaching a threshold." (par. 0042; see pars. 0054, 0181, 0203). OH teaches that scheduled tasks may include garbage collection (par. 0109), and Zalivaka teaches "Essential FW parameters (e.g., garbage collection algorithm, read voltage thresholds, error correction schemes, etc.) may be changed based on compact workload representation and FW state (e.g., the value of counters, used over-provisioning memory, the number of bad blocks, etc.)." (par. 0112) "[0128] Referring back to FIG. 11, the controller 100 may provide tuning of FW parameters (e.g., garbage collection intensity) and drive self-testing using the encoder 1120 and the decoder 1130, respectively… [0130] Two typical workloads may be read-intensive and write-intensive. For these types of workloads, the flash translation layer 1110 may store two compact 5-dimensional (d=5) vectors: R_1 = (0.0617, −0.0981, 0.1380, 0.0215, 0.2057) and R_2 = (0.0029, −0.0038, −0.0013, −0.0014, 0.0052) correspondingly. For example, in the case of read-intensive workloads, garbage collection (GC) intensity may be set to the maximal level, whereas in the case of write-intensive workloads, the GC intensity may be set to the minimal level."].

3. The method of claim 1, further comprising entering a learning mode in response to the learning request [OH teaches "[0026]… The model classifier 160 sends a model selection request MSR indicating the selected model to the storage device 200." "[0027] The storage device 200 includes a model selection module 234, a model execution processor 240 (e.g., a central processing unit, a digital signal processor, etc.), and a nonvolatile memory device 280. The model selection module 234 selects a model depending on the model selection request MSR. In an embodiment, the host device 100 sends a signal to the storage device 200 including the model selection request MSR."].

4. The method of claim 1, wherein the plurality of learning models include at least two of a throughput-related model, a write Quality of Service (QoS)-related model, a read QoS-related model, and a reliability-related model [Kale teaches "[0042] An Artificial Neuron Network (ANN) (e.g., Spiking Neural Network (SNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN)) can be configured to predict, for the current operating condition of the data storage device, the configurations of the caching/buffering and background maintenance processes to optimize the performance of the data storage device while keeping the temperature of the data storage device within a safe range." OH teaches "[0096] As described above, the storage device 200 may select and execute a model in response to a request of the host device 100. The storage device 200 may schedule requests transmitted from the host device 100, background operations, or foreground operations depending on the selected model. Since scheduling of tasks is performed on the basis of the machine learning and since a machine learning model is changed according to a preference of the user and an environment, performance, power consumption, and reliability are optimized according to preferences of users." Zalivaka teaches "As such, it is necessary to provide a scheme to make an FTL aware of input workloads and optimize its performance based on the awareness of input workloads. Accordingly, embodiments provide a scheme for a compact representation of input workloads in a memory system (e.g., SSD such as NAND flash memory devices) and a memory controller capable of being aware of input workloads based on a compact representation for input workloads. Thus, embodiments may optimize performance and/or reliability of a memory system." (par. 0065)].

5. The method of claim 1, wherein the workload is a combination of host-queue-depth and read-write-mixed ratio [Kale teaches "[0034] For example, an artificial neural network (e.g., a spiking neural network) can be used to monitor various aspects of the data storage device, such as temperature, pending operations in the queue of the data storage device, operation conditions of the data storage device that can be indicative of the workload of the data storage device in a subsequent time period." Zalivaka teaches "[0101] The data set has 9 types (100 samples each) of workloads which are generated based on two parameters: queue depth (QD) and read/write ratio (RWR) which represents the ratio of read and write commands in the workload. All workloads are random and 9 workload types are shown in List2:"].

6. The method of claim 1, further comprising storing the performance evaluation information according to a performance evaluation of the workload [Kale teaches "[0064]… the workload of the data storage device (112) can be determined from the patterns in the input/output data streams (103 and 109). The operating condition can be used to predict the optimized parameters and configurations of buffering/caching (106) and the optimized timing and frequency of background maintenance operations (e.g., 107 and 108)."].

8. The method of claim 6, wherein the performing of the machine learning includes inferring the relational expressions using the plurality of learning models and the performance evaluation information [Kale teaches "[0073] A portion of the ANN (125) responsible for the processing of input from the sensors (122) can be configured in the data storage device (112). The inference engine (101) of the data storage device (112) processes the inputs from the sensors (122) to generate the inference results transmitted from the data storage device (112) to the ADAS (128). Based on the input from the sensors (122) and/or the inference results to the ADAS (128), the inference engine (101) data storage device (112) can generate inference results to optimize the performance of the data storage device (112) in processing the input data stream (103) and the output data stream (109), by adjusting the operations of buffering/caching (106), garbage collection (107), wear leveling (108), etc." (see pars. 0050, 0054, 0056). OH teaches "[0054] As shown in table 5, the model classifier 160 may classify probability of performance centered and balance the highest. The model classifier 160 may send the first model selection request MSR1 to the device interface 170 so as to select a model corresponding to the performance centered and balance. For example, a balanced expected load may mean that the number of reads is expected to be equal to the number of writes or that the amount of resources used to perform the reads is expected to be the same as the amount of resources used to perform the writes." Zalivaka teaches "[0142] As described above, embodiments provide a scheme to use a compact representation vector associated with input workloads. In an embodiment, a memory controller may be capable of being aware of input workloads based on a compact representation for input workloads in a memory system (e.g., SSD such as NAND flash memory devices) and may perform an operation (i.e., tuning of firmware parameters) in order to optimize its performance. In another embodiment, a test of a memory system (e.g., SSD test) may be performed with a much smaller test vector space."].

10. The method of claim 1, wherein the deriving of the new parameter value is repeated a predetermined number of times [Kale teaches "[0038] For example, an Artificial Neuron Network (ANN) (e.g., a Spiking Neural Network (SNN), a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), or any combination thereof) can be configured to predict data changes and/or movements to be implemented in the data storage device and thus predict future power/temperature based on access patterns (e.g., read/write), access frequency, address locations, chunk sizes, operation conditions/environment, etc. Intelligent throttling of data storage activities can improve user experiences by avoiding rigidly-forced throttling of performance of the data storage device, which can be a result of temperature exceeding a threshold." (see par. 0064). Zalivaka teaches "[0089] The model of the RNN coder in FIG. 7 may be trained using a data set containing M workloads, which may have different characteristics. The training process may tune weighting matrices W^e_X, W^e_h, W^e_Y, W^d_X, W^d_h, W^d_Y such that the difference between the source workload (C_1, C_2, …, C_N) and the recovered workload (Ĉ_1, Ĉ_2, …, Ĉ_N) is minimized. Different optimization algorithms such as Gradient descent, RMSProp, Adam, etc. may be used in the training process. The model may have two hyperparameters N and d. N represents the number of RBs in the encoder and decoder and d represents the dimension of the target compact workload representation vector R." "[0112] The recurrent neural network coder 500 may operate in a training mode or an inference mode. In an initial stage (i.e., the training mode), the recurrent neural network coder 500 may be trained using a data set of possible workload types covering typical drive operation scenarios. Then weight matrices of the model associated with the recurrent neural network coder 500 and compact representations of typical workloads may be stored in a storage (e.g., a DRAM) or a memory device (e.g., NAND). The training mode may be performed offline using an external compute engine. In the inference mode, the recurrent neural network coder 500 may process input workload using the weight matrices. Essential FW parameters (e.g., garbage collection algorithm, read voltage thresholds, error correction schemes, etc.) may be changed based on compact workload representation and FW state (e.g., the value of counters, used over-provisioning memory, the number of bad blocks, etc.)."; thus, repeating the deriving of parameter values for each of the workloads or a predetermined number of times].

16. A method of operating a storage device, comprising: receiving a learning request for learning a new parameter value for a parameter; [Kale teaches "the ANN can be trained using a supervised learning technique to refine or establish a prediction model." (par. 0045) "[0046] For example, the current operating parameters of the vehicle, applications, and/or the data storage device can be provided as input to the ANN to derive the predicted workload for the subsequent time period, the preferred cache scheme, the optimized background maintenance schedule, and the preferred performance throttling within the time period. Subsequent changes in the performance and temperature of the data storage device can be measured as a result of changing in caching/buffering aspects, in the timing and frequency of background maintenance processes, and/or in the performance throttling. The measurements of performance, temperature, and the implemented parameters of caching/buffering, background maintenance processes, and the performance throttling pattern within in period of time can be used input data in machine learning to improve the predictive capability of the ANN.", but does not expressly disclose the storage device receiving a learning request];

evaluating a workload performance for a current value of the parameter to generate performance metrics; [Kale teaches "[0039] The data storage device can have various configurable parameters and operations that have different impacts on the performance of the data storage device under various conditions. Performance of the data storage can be measured based on the latency (response time) for read/write (input/output) requests, and/or the number of read/write (input/output) requests that the data storage device processes per unit of time." "[0064]… the workload of the data storage device (112) can be determined from the patterns in the input/output data streams (103 and 109). The operating condition can be used to predict the optimized parameters and configurations of buffering/caching (106) and the optimized timing and frequency of background maintenance operations (e.g., 107 and 108)."];

storing the performance metrics; [Kale teaches "[0201] For example, the data storage device can search the operation schedule (235) to optimize performance of data storage device by: generating different candidate operation schedules to control the operations of the data storage device; predicting temperatures of the data storage device in executing operations according to the different operation schedules; and selecting the operation schedule from the different operation schedules based on performance levels of operation schedules and predicted temperatures of the operation schedules… [0203] For example, during a training period, the data storage device (112) can generate different operation schedules and perform operations according to different operation schedules to select operation schedules that does not cause the measurement of the temperature sensor (102) to exceed the threshold. The average performance of the operation schedules that keep the temperature of the data storage device (112) under the threshold can be measured by the data storage device (112) (e.g., the form of an average latency for commands received in a period of a predetermine length). The training data generated in the training period can be used to train the ANN (125) to predict an optimized operation schedule for the operating condition represented by the operating parameters (233).", where the operation schedule in the storage device also contains performance level information];

inferring relational expressions between the parameter and the workload performance using the performance metrics; [Kale teaches "[0050] The data storage device (112) stores a model of an Artificial Neural Network (ANN) (125). The inference engine (101) uses the ANN (125) to predict parameters and configurations of operations of the data storage device (112), such as buffering/caching (106), garbage collection (107), wear leveling (108), predicted operations in the queue (110), etc. to optimize the measured performance of the data storage device (112) and to keep the temperature as measured by the temperature sensor (102) within a predetermined range." (see pars. 0054, 0181, 0195), where the learning operations are performed based on the storage device workload (see the par. 0064 citation above)];

deriving a new value of the parameter using the relational expressions; incorporating the new value of the parameter into a firmware algorithm; and [Kale teaches "[0054] Further, the controller (151) can perform background maintenance operations, such as garbage collection (107), wear leveling (108), etc. The timing and frequency of the background maintenance operations can impact the performance of the data storage device (112). The inference engine (101) uses the ANN (125) to determine the timing and frequency of the maintenance operations (e.g., 107, 108) to optimize the performance measured for the data storage device (112), based on the patterns in the input data stream (103) and/or the output data stream (109)."], where the new parameters or the timing and frequency for maintenance operations are applied to the software or firmware operating the memory device; note that Kale teaches ["memory (135) storing firmware (or software) (147)," (par. 0077) "hardwired circuitry may be used in combination with software instructions to implement the techniques." (par. 0222)], but Kale does not expressly refer to applying the new parameter value to a firmware algorithm;

when a number of iterations is not greater than a predetermined value, increasing the number of iterations by 1 and re-performing the evaluating of the workload performance [Note that the "when…" limitations are contingent limitations and, as such, the condition precedent "when…" may never be reached within the scope of the claim under the broadest reasonable interpretation. Applicant is reminded that "the broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met." See MPEP 2111.04(II) and the Claim Construction section above].

With respect to the limitation of the storage device receiving a learning request, OH teaches ["[0026]… The model classifier 160 sends a model selection request MSR indicating the selected model to the storage device 200." "[0027] The storage device 200 includes a model selection module 234, a model execution processor 240 (e.g., a central processing unit, a digital signal processor, etc.), and a nonvolatile memory device 280. The model selection module 234 selects a model depending on the model selection request MSR. In an embodiment, the host device 100 sends a signal to the storage device 200 including the model selection request MSR. The model selection module 234 may load model data MOD of the selected model on the model execution processor 240. In an embodiment, the model selection module 234 is implemented as a computer program that is executed by a processor of the storage device 234. In an embodiment, the model selection module 234 is implemented by a logic circuit, a memory or registers storing the model data MOD of each of a plurality of models, and a multiplexer input with signals indicative of the models, where the model selection request MSR is applied as a control signal to the multiplexer to cause output of one of the signals to the logic circuit, and the logic circuit loads the corresponding model data MOD onto the model execution processor 240."].

Kale and OH are analogous art because they are from the same field of endeavor of memory access and control. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Kale to include the storage device receiving a learning request, as taught by OH, since doing so would provide the benefits of ["a storage device that provides an optimum operating performance to each user" (par. 0006)].

The combination of Kale and OH does not expressly disclose applying the new parameter value to a firmware algorithm; however, regarding these limitations, Zalivaka teaches ["[0112] The recurrent neural network coder 500 may operate in a training mode or an inference mode. In an initial stage (i.e., the training mode), the recurrent neural network coder 500 may be trained using a data set of possible workload types covering typical drive operation scenarios. Then weight matrices of the model associated with the recurrent neural network coder 500 and compact representations of typical workloads may be stored in a storage (e.g., a DRAM) or a memory device (e.g., NAND). The training mode may be performed offline using an external compute engine. In the inference mode, the recurrent neural network coder 500 may process input workload using the weight matrices. Essential FW parameters (e.g., garbage collection algorithm, read voltage thresholds, error correction schemes, etc.) may be changed based on compact workload representation and FW state (e.g., the value of counters, used over-provisioning memory, the number of bad blocks, etc.). The self-testing algorithm may be based on the generation of compact workload vectors and transforming them into the commands internally in the controller. As a result, the low-dimensional space of the compact workloads representation may be covered with a better diversity."].

Kale, OH and Zalivaka are analogous art because they are from the same field of endeavor of memory access and control. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the combination of Kale and OH to include applying the new parameter value to a firmware algorithm, as taught by Zalivaka, which may include maintenance operation parameters such as those taught by Kale, since doing so would provide the benefits of ["In an embodiment, a memory controller may be capable of being aware of input workloads based on a compact representation for input workloads in a memory system (e.g., SSD such as NAND flash memory devices) and may perform an operation (i.e., tuning of firmware parameters) in order to optimize its performance." (par. 0142)]. Therefore, it would have been obvious to combine Kale, OH and Zalivaka for the benefit of creating a storage system/method to obtain the invention as specified in claim 16.

17. The method of claim 16, wherein the inferring of the relational expressions includes performing machine learning using each of a plurality of learning models to infer the relational expressions [The rationale in the rejection of claims 1 and 8 is herein incorporated].

18. The method of claim 16, wherein at least one of the performance metrics includes a predetermined percentile latency of a write latency [Kale teaches "[0039] The data storage device can have various configurable parameters and operations that have different impacts on the performance of the data storage device under various conditions. Performance of the data storage can be measured based on the latency (response time) for read/write (input/output) requests, and/or the number of read/write (input/output) requests that the data storage device processes per unit of time."].

19. The method of claim 16, further comprising selecting the performance metrics related to the parameter [The rationale in the rejection of claim 6 is herein incorporated].

20. The method of claim 16, wherein the performance metrics include measures related to throughput, write QoS, read QoS, or reliability [The rationale in the rejection of claim 4 is herein incorporated].
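The contingent "when…" step that the Claim Construction section addresses can be seen in a small, hypothetical sketch of the claim 16 loop (all names and the threshold are illustrative assumptions, not the applicant's implementation). Under Schulhauser, the broadest reasonable interpretation does not require the re-performing step when the condition precedent is unmet, which is why the Office Action suggests reciting the determination affirmatively.

```python
# Hypothetical sketch of claim 16's iteration structure. The "if" branch is
# the contingent limitation: under the broadest reasonable interpretation
# (MPEP 2111.04(II), Ex parte Schulhauser), the branch need not be shown to
# be performed when the condition precedent is never met.
PREDETERMINED_VALUE = 3  # the claimed "predetermined value" (assumed)

def tune_parameter(evaluate, derive, apply_to_firmware, value):
    iterations = 0
    while True:
        metrics = evaluate(value)         # evaluating the workload performance
        value = derive(value, metrics)    # deriving a new value of the parameter
        apply_to_firmware(value)          # incorporating it into a firmware algorithm
        if iterations <= PREDETERMINED_VALUE:  # "when a number of iterations is
            iterations += 1                    #  not greater than a predetermined
            continue                           #  value" -> re-perform the evaluating
        return value
```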
Claims 7 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Kale et al. (US 2021/0255799) in view of OH (US 2019/0114078) and Zalivaka et al. (US 2022/0326876), as applied in the rejection of claim 1 above, and further in view of Idicula et al. (US 11,061,902).

7. The method of claim 6, wherein the storing of the performance evaluation information includes storing the firmware algorithm, a parameter set, and the performance metrics in a form of a table [Kale teaches "[0054] Further, the controller (151) can perform background maintenance operations, such as garbage collection (107), wear leveling (108), etc. The timing and frequency of the background maintenance operations can impact the performance of the data storage device (112). The inference engine (101) uses the ANN (125) to determine the timing and frequency of the maintenance operations (e.g., 107, 108) to optimize the performance measured for the data storage device (112), based on the patterns in the input data stream (103) and/or the output data stream (109)… [0055] Further, the controller (151) can throttle performance of the operations requested in the queue (110) and/or the background maintenance operations (e.g., 107, 108). For example, the controller (151) can perform operations at a reduced clock to spread the heat generated by operations over a longer period of time. For example, the controller (151) can periodically enter an idle state to such that operations are performed over a period of time longer than the duration of performing the operations without entering the idle state. For example, the controller (151) can idle a period of time to cool down and then perform the operations with high performance without idling. The ANN (125) can be used to predict a preferred throttling scheme to keep the temperature measured by the temperature sensor (102) within a predefined range, while maximizing the average performance of the data storage device (112) over a period of time… [0064]… the workload of the data storage device (112) can be determined from the patterns in the input/output data streams (103 and 109). The operating condition can be used to predict the optimized parameters and configurations of buffering/caching (106) and the optimized timing and frequency of background maintenance operations (e.g., 107 and 108)." Zalivaka teaches "[0142] As described above, embodiments provide a scheme to use a compact representation vector associated with input workloads. In an embodiment, a memory controller may be capable of being aware of input workloads based on a compact representation for input workloads in a memory system (e.g., SSD such as NAND flash memory devices) and may perform an operation (i.e., tuning of firmware parameters) in order to optimize its performance. In another embodiment, a test of a memory system (e.g., SSD test) may be performed with a much smaller test vector space." (see par. 0112)], thus teaching a firmware algorithm having different configurations and parameter sets, as well as their performance; but the combination of Kale, OH and Zalivaka does not expressly disclose storing these values in a table. However, regarding these limitations, Idicula teaches ["Thus, according to an embodiment, prior to training the AC-ML models, ML service 150 uses machine learning techniques to identify a set of impactful configuration parameters, which affect workload performance metrics. Specifically, ML service 150 trains one or more configuration parameter evaluation machine learning (CPE-ML) models, over a similar (or the same) training corpus as is used to train the AC-ML models, to identify which configuration parameters affect one or more performance metrics. According to an embodiment, once ML service 150 performs inference over the trained CPE-ML models for every possible configuration parameter to determine which configuration parameters affect performance metrics, ML service 150 maintains the information identifying the impactful configuration parameters, e.g., in a database table." (col. 9, line 63 – col. 10, line 10)].

Kale, OH, Zalivaka and Idicula are analogous art because they are from the same field of endeavor of memory access and control. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the combination of Kale, OH and Zalivaka to include storing the information for the learning system, such as firmware parameters, a parameter set, and performance metrics, in a table, as taught by Idicula, since doing so would provide the benefits of facilitating access to the learned information as well as facilitating optimization strategies for machine learning that result in optimal predicted throughput performance for a workload (col. 7, lines 49-58). Therefore, it would have been obvious to combine Kale, OH, Zalivaka and Idicula for the benefit of creating a storage system/method to obtain the invention as specified in claim 7.

9. The combination of Kale, OH and Zalivaka teaches the method of claim 1, but does not expressly disclose wherein the deriving of the new parameter value includes deriving the new parameter value from the inferred relational expressions using a Bayesian optimization scheme. However, regarding these limitations, Idicula teaches ["Different optimization strategies can be used to converge to an optimal set of CSFs for workload 160 based on (a) changing CSFs used to identify predicted performance metrics from the AC-ML models, and (b) observing the resulting changes to the predicted throughput performance metrics. For example, random search, grid search, and Bayesian optimization are all candidate optimization strategies that can be used by ML service 150 to converge to a set of CSFs that result in optimal predicted throughput performance for workload 160." (col. 7, lines 49-58). "Classes of problems that machine learning excels at include clustering, classification, regression, anomaly detection, prediction, and dimensionality reduction (i.e. simplification). Examples of machine learning algorithms include decision trees, support vector machines (SVM), Bayesian networks, stochastic algorithms such as genetic algorithms (GA), and connectionist topologies such as artificial neural networks (ANN)." (col. 16, lines 13-20)].

Kale, OH, Zalivaka and Idicula are analogous art because they are from the same field of endeavor of memory access and control. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the combination of Kale, OH and Zalivaka to include deriving the new parameter value from the inferred relational expressions using a Bayesian optimization scheme, as taught by Idicula, since doing so would provide the benefits of facilitating optimization strategies for machine learning that result in optimal predicted throughput performance for a workload (col. 7, lines 49-58). Therefore, it would have been obvious to combine Kale, OH, Zalivaka and Idicula for the benefit of creating a storage system/method to obtain the invention as specified in claim 9.
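Since claim 9 names a Bayesian optimization scheme, a compact, hypothetical sketch may help: fit a Gaussian-process surrogate to previously evaluated (parameter, performance) pairs, then choose the next parameter value by expected improvement. The RBF kernel, the length scale, and the sample data are illustrative assumptions, not taken from Idicula.

```python
# Hypothetical sketch of a Bayesian optimization step (cf. claim 9): a GP
# surrogate over evaluated (parameter, score) pairs, with the candidate
# maximizing expected improvement chosen as the new value. Requires numpy.
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, length_scale=10.0):
    """Radial basis function kernel matrix between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale**2)

def gp_posterior(x_obs, y_obs, x_new, noise=1e-6):
    """Posterior mean/variance of a zero-mean GP at x_new (prior variance 1)."""
    K_inv = np.linalg.inv(rbf(x_obs, x_obs) + noise * np.eye(len(x_obs)))
    K_s = rbf(x_obs, x_new)
    mean = K_s.T @ K_inv @ y_obs
    var = 1.0 - np.einsum("ij,ik,kj->j", K_s, K_inv, K_s)
    return mean, np.clip(var, 1e-12, None)

def expected_improvement(mean, var, best):
    """EI acquisition: expected amount by which each candidate beats `best`."""
    sigma = np.sqrt(var)
    z = (mean - best) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))
    pdf = np.exp(-0.5 * z**2) / sqrt(2.0 * pi)
    return (mean - best) * cdf + sigma * pdf

# Previously evaluated parameter values and their performance scores (assumed).
x_obs = np.array([10.0, 20.0, 40.0])
y_obs = np.array([0.55, 0.72, 0.60])
candidates = np.linspace(5.0, 50.0, 10)
mean, var = gp_posterior(x_obs, y_obs, candidates)
print(candidates[np.argmax(expected_improvement(mean, var, y_obs.max()))])
```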
RELEVANT ART CITED BY THE EXAMINER

The following prior art, made of record and not relied upon, is cited to establish the level of skill in the applicant's art and those arts considered reasonably pertinent to the applicant's disclosure. See MPEP 707.05(c). Liu et al. (US 2020/0133758) teaches "[0055] In general, a threshold TPR or threshold FPR may be configured during model training, and then, a specific configuration, hyperparameters, a training solution (for example, the number of times of iterations of training, use of training data, a convergence objective, a training algorithm and the like), and the like, of the machine learning model, may be selected based on the threshold TPR or threshold FPR. The model parameters are optimized constantly by training, such that the machine learning model 220 meets a predetermined performance metric."

CLOSING COMMENTS

a. STATUS OF CLAIMS IN THE APPLICATION

a(1) CLAIMS REJECTED IN THE APPLICATION: Per the instant office action, claims 1-10 and 16-20 have received a first action on the merits and are the subject of a first-action non-final rejection.

a(2) CLAIMS NO LONGER UNDER CONSIDERATION: Claims 11-15 have been withdrawn from consideration.

b. DIRECTION OF FUTURE CORRESPONDENCE

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YAIMA RIGOL, whose telephone number is (571) 272-1232. The examiner can normally be reached Monday-Friday, 9:00 AM-5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jared I. Rutz, can be reached at (571) 272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

February 12, 2026
/YAIMA RIGOL/
Primary Examiner, Art Unit 2135

Prosecution Timeline

Jul 14, 2022: Application Filed
Feb 12, 2026: Non-Final Rejection — §103
Mar 26, 2026: Examiner Interview Summary
Mar 26, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this examiner involving similar technology, based on the 5 most recent grants

Patent 12591522
COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN MEMORY ACCESS CONTROL PROGRAM, MEMORY ACCESS CONTROL METHOD, AND INFORMATION PROCESSING APPARATUS
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12585581
MEMORY MODULE HAVING VOLATILE AND NON-VOLATILE MEMORY SUBSYSTEMS AND METHOD OF OPERATION
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579073
APPARATUS AND METHOD FOR INTELLIGENT MEMORY PAGE MANAGEMENT
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12578899
MEMORY DEVICE, MEMORY SYSTEM, MEMORY CONTROLLER, AND OPERATION METHOD
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12566716
SYSTEMS AND METHODS FOR TIMESTEP SHARED MEMORY MULTIPROCESSING BASED ON TRACKING TABLE MECHANISMS
Granted Mar 03, 2026 (2y 5m to grant)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 92% (+17.5% lift)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 619 resolved cases by this examiner. Grant probability is derived from the career allow rate.
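The with-interview figure is consistent with simply adding the interview lift to the career allow rate and capping at 100%. A minimal sketch of that arithmetic follows; the additive model and the cap are assumptions about the dashboard's methodology, not documented behavior.

```python
# Hypothetical sketch of the projection arithmetic above: base career allow
# rate plus interview lift, capped at 100%. The additive model is assumed.
def grant_probability(base_rate: float, interview_lift: float,
                      with_interview: bool) -> float:
    """Combine a career allow rate with an interview lift, capped at 100%."""
    p = base_rate + (interview_lift if with_interview else 0.0)
    return min(p, 100.0)

print(grant_probability(75.0, 17.5, with_interview=False))  # 75.0
print(grant_probability(75.0, 17.5, with_interview=True))   # 92.5 (shown as 92%)
```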
