Prosecution Insights
Last updated: April 19, 2026
Application No. 18/784,231

MODIFYING MACHINE LEARNING PARAMETERS IN MEMORY SYSTEMS

Final Rejection §103
Filed: Jul 25, 2024
Examiner: WARREN, TRACY A
Art Unit: 2137
Tech Center: 2100 — Computer Architecture & Software
Assignee: Micron Technology, Inc.
OA Round: 2 (Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 82% (344 granted / 422 resolved), +26.5% vs TC avg (above average)
Interview Lift: +6.0% (moderate) among resolved cases with interview
Avg Prosecution: 2y 6m (typical timeline)
Currently Pending: 22
Total Applications: 444 (career history, across all art units)

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§102: 17.6% (-22.4% vs TC avg)
§112: 19.2% (-20.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 422 resolved cases.

Office Action

§103
DETAILED ACTION

Response to Amendment

The Amendment filed December 15, 2025 has been entered. Claims 1-7, 27-31, and 35-47 remain pending in the application. Applicant's amendments to the claims have overcome the 35 U.S.C. 103 rejections previously set forth in the Non-Final Office Action mailed September 26, 2025.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6, 10-12, 14-17, 27-28, 30-31, 35-37, 39, 42-45, and 47 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 2024/0319871), Hari et al. (US 2025/0332725), and Liu et al. (US 2021/0279574).

Regarding claim 1, Kim discloses: A system, comprising: one or more memory devices (FIG. 3 DRAM Package 330); and a memory module controller (FIG.
3 CXL Device 220) comprising: a memory subsystem interface (FIG. 3 DRAM IF Circuit 260); and a controller (FIG. 3 AXL1 310) configured to: obtain, from one or more host systems (FIG. 3 Host 210), a command indicating that one or more first parameters associated with a full precision dataset are to be modified (FIG. 9 step 910 Determine to operate based on first mode in which only first accelerator is activated; [0044] CXL controller 250 may receive an instruction and a configuration from the host 210…; [0084] the CXL device 220 may determine to activate only the first accelerator 310 in order to lighten parameters of a neural network. For another example, the CXL device 220 may determine to activate only the first accelerator 310 in advance to convert sparse data into dense data…Specifically, a mode setting circuit 330 of the CXL device 220 may determine to operate in the first mode in which only the first accelerator 310 is activated, and generate a control signal for instructing the first mode. The mode setting circuit 330 may provide the control signal to each of the first accelerator 310) from a first format to a second format ([0057] first to Nth operation circuits 311 to 31N included in the first accelerator 310 may include operation circuits for performing various lightening methods of reducing the size of the deep learning model. 
For example, the Nth operation circuit 31N may correspond to an operation circuit for performing quantization that converts a weight (i.e., parameter) expressed as a 32 bit floating point (i.e., first format) into a 16 bit floating point or an 8 bit integer type (i.e., second format))… obtain, based on obtaining the command, the one or more first parameters from the one or more source addresses, the one or more first parameters having the first format ([0050] first accelerator 310 may receive data from the plurality of DRAMs 231, 233, 235, and 237 and perform a primary operation based on the received data; [0057] first to Nth operation circuits 311 to 31N included in the first accelerator 310 may include operation circuits for performing various lightening methods of reducing the size of the deep learning model. For example, the Nth operation circuit 31N may correspond to an operation circuit for performing quantization that converts a weight (i.e., parameter) expressed as a 32 bit floating point (i.e., first format) into a 16 bit floating point or an 8 bit integer type (i.e., second format)); generate, based on the one or more first parameters, one or more second parameters associated with the full precision dataset, the one or more second parameters having the second format (FIG. 9 step 920 Perform acceleration on requested data by using first accelerator inside DRAM package; [0057]); and Kim does not appear to explicitly teach “the command indicating one or more source addresses and one or more destination addresses, wherein the command includes an indication of the first format and an indication of the second format…store the one or more second parameters to the one or more destination addresses.” However, Hari et al. 
disclose: the command indicating one or more source addresses and one or more destination addresses, wherein the command includes an indication of the first format and an indication of the second format ([0021] instruction 102 comprises an operation code (opcode) and (optionally, depending on the opcode) one or more operands. The opcode specifies the operation for the execution unit 108 of processor 104 to carry out. One or more of the operands may specify source locations of data to operate on, or control settings or characteristics (e.g., formatting) of the data to operate on. The source operands may be applied to a fetch unit 106 of the processor 104 for retrieval of values from registers or other locations in machine memory (cache, bulk main memory, etc.). One or more of the operands may specify a destination location for returning results on executing the opcode operation on the source operands; The operation specified by the opcode is an indication of the second format that the data is to be transformed to. In this case the data is being transformed to Kim’s 8 bit format. The operand for the formatting of the data to operate on corresponds to Kim’s 32 bit format (i.e., the first format).) Kim and Hari et al. are analogous art because Kim teaches converting sparse data into dense data in a memory system and Hari et al. teach computer instructions. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kim and Hari et al. before him/her, to modify the teachings of Kim with the Hari et al. teachings of the use of an opcode, because including operands for both source and destination addresses informs the controller of where to find the data and where to store it, and including an operand for the format of the data to be operated on informs the controller of the type of data. Kim and Hari et al.
do not appear to explicitly teach “store the one or more second parameters to the one or more destination addresses.” However, Liu et al. disclose: store the one or more second parameters to the one or more destination addresses ([0061] the storage step S640, the storage unit 540 stores the quantized neural network). Kim, Hari et al., and Liu et al. are analogous art because Kim teaches converting sparse data into dense data in a memory system; Hari et al. teach computer instructions; and Liu et al. teach generating a quantized neural network. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kim, Hari et al., and Liu et al. before him/her, to modify the teachings of Kim and Hari et al. with the Liu et al. teachings of storing the quantized data because doing so would prevent the loss of the data.

Regarding claim 2, Kim further discloses: The system of claim 1, wherein the controller is further configured to: …store the one or more first parameters to the one or more memory devices, wherein obtaining the command indicating that the one or more first parameters are to be modified is based on storing the one or more first parameters ([0079] second DRAM 620 to the fourth DRAM 640 may provide the sparse data to the first DRAM 610 through a bonding wire, and the first accelerator 650 may perform the primary operation on the sparse data received from the first DRAM 610 to the fourth DRAM 640 (i.e., the parameters have been stored in the memory devices)). Kim and Hari et al. do not appear to explicitly teach “receive the one or more first parameters from the one or more host systems.” However, Liu et al. further disclose: receive the one or more first parameters from the one or more host systems ([0031] the user (i.e., host) can input for example neural networks to be quantized, specific task processing information (e.g.
object detection task), etc., via the input device 150, wherein the neural networks to be quantized include for example various weights (e.g. floating-point weights)); and

Regarding claim 3, Kim further discloses: The system of claim 1, wherein the controller is further configured to: provide, based on storing the one or more second parameters, the one or more second parameters to the one or more host systems (FIG. 9 step S930 Bypass second accelerator inside CXL controller and provide result of performing acceleration operation to host).

Regarding claim 6, Hari et al. further disclose: The system of claim 1, wherein the one or more source addresses comprise one or more physical source addresses and the one or more destination addresses comprise one or more physical destination addresses ([0021] One or more of the operands may specify source locations of data to operate on, or control settings or characteristics (e.g., formatting) of the data to operate on. The source operands may be applied to a fetch unit 106 of the processor 104 for retrieval of values from registers or other locations in machine memory (cache, bulk main memory, etc.). One or more of the operands may specify a destination location for returning results on executing the opcode operation on the source operands).

Regarding claim 10, Kim further discloses: The system of claim 1, wherein, to generate the one or more second parameters, the controller is configured to: apply one or more quantization functions to the one or more first parameters to calculate the one or more second parameters ([0057] the Nth operation circuit 31N may correspond to an operation circuit for performing quantization that converts a weight expressed as a 32 bit floating point into a 16 bit floating point or an 8 bit integer type).
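The quantization operation the rejection cites throughout (Kim [0057]: converting a weight expressed as a 32-bit floating point into a 16-bit floating point or an 8-bit integer type) can be sketched minimally. This is an illustrative sketch only, not code from any cited reference; the function name and the symmetric scaling scheme are assumptions.

```python
import numpy as np

def quantize_weights(weights_fp32, target="int8"):
    """Convert full-precision (32-bit float) parameters to a smaller format.

    Hypothetical illustration of the kind of "lightening" described in
    Kim [0057]: fp32 -> fp16 is a plain format cast; fp32 -> int8 uses a
    simple symmetric scale so values map into the signed 8-bit range.
    """
    w = np.asarray(weights_fp32, dtype=np.float32)
    if target == "fp16":
        return w.astype(np.float16), None            # 16-bit floating point
    if target == "int8":
        scale = float(np.max(np.abs(w))) / 127.0 or 1.0   # avoid zero scale
        q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
        return q, scale                              # 8-bit integer + scale
    raise ValueError(f"unsupported target format: {target}")

# Each 4-byte fp32 value shrinks to 1 byte as int8 (the "second format").
w = np.array([0.5, -1.25, 0.03125], dtype=np.float32)
q, scale = quantize_weights(w, "int8")
w16, _ = quantize_weights(w, "fp16")
```

The scale factor must be stored alongside the int8 values so the original magnitudes can be approximately recovered, which is why the sketch returns both.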
Regarding claim 11, Kim further discloses: The system of claim 10, wherein the controller is further configured to: obtain, from the one or more host systems, the one or more quantization functions ([0044] the CXL controller 250 may receive an instruction and a configuration from the host 210… The command generator 255 may generate a command suitable for DRAM based on the decoding information of the instruction and transmit the generated command to the DRAM interface circuit 260…[0045] the DRAM interface circuit 260 may provide a control signal for controlling the plurality of DRAM packages 230 to the plurality of DRAM packages 230 based on the command generated by the command generator 255; [0057]).

Regarding claim 12, Kim further discloses: The system of claim 10, wherein the command indicates the one or more quantization functions ([0044]-[0045]; [0057] the Nth operation circuit 31N may correspond to an operation circuit for performing quantization that converts a weight expressed as a 32 bit floating point into a 16 bit floating point or an 8 bit integer type).

Regarding claim 14, Kim further discloses: The system of claim 1, wherein the command indicates at least one of the first format or the second format ([0057] the Nth operation circuit 31N may correspond to an operation circuit for performing quantization that converts a weight expressed as a 32 bit floating point into a 16 bit floating point or an 8 bit integer type).
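The command structure the rejection attributes to Hari [0021], an opcode plus operands for source/destination locations and data formatting, might be pictured as follows. All field names here are invented for illustration and are not drawn from Hari or from the claims.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class ModifyParamsCommand:
    """Hypothetical sketch of a command carrying the indications discussed
    in the rejection: an opcode, source/destination addresses, and the
    first and second formats. Illustrative only."""
    opcode: str            # operation to perform, e.g. "QUANTIZE"
    src_addrs: List[int]   # where the first (full-precision) parameters live
    dst_addrs: List[int]   # where the second (reduced-format) parameters go
    src_format: str        # indication of the first format, e.g. "fp32"
    dst_format: str        # indication of the second format, e.g. "int8"

cmd = ModifyParamsCommand(
    opcode="QUANTIZE",
    src_addrs=[0x1000, 0x2000],
    dst_addrs=[0x3000, 0x4000],
    src_format="fp32",
    dst_format="int8",
)
```

In this picture, the opcode itself can imply the target format (as the rejection reads Hari), while a formatting operand describes the source data, so a controller receiving the command knows where to fetch, how to interpret, and where to store.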
Regarding claim 15, Kim further discloses: The system of claim 1, wherein the first format corresponds to a first quantity of bits for a first parameter of the one or more first parameters and the second format corresponds to a second quantity of bits for a second parameter of the one or more second parameters, the second quantity of bits less than the first quantity of bits ([0057] the Nth operation circuit 31N may correspond to an operation circuit for performing quantization that converts a weight expressed as a 32 bit floating point into a 16 bit floating point or an 8 bit integer type).

Regarding claim 16, Kim further discloses: The system of claim 1, wherein the controller is a near-memory computing (NMC) controller ([0077] When the first accelerator 750 is disposed on a separate die from the first DRAM 710 to fourth DRAM 740 that are three-dimensionally stacked, the first accelerator 750 may be referred to as a process-near-memory (PNM)).

Regarding claim 17, Kim further discloses: The system of claim 1, wherein the one or more first parameters and the one or more second parameters are neural network parameters associated with the full precision dataset ([0057]; [0085] When the requested data corresponds to sparse data, the acceleration operation is for parameter lightening, may include at least pruning, zeroing, and quantization, and may correspond to coarse acceleration).

Regarding claim 27, Kim discloses: A method, comprising: obtaining, by a memory apparatus (FIG. 3 CXL Device 220) and from one or more host systems (FIG. 3 Host 210), a first command indicating that one or more first parameters associated with a full precision dataset are to be modified (FIG.
9 step 910 Determine to operate based on first mode in which only first accelerator is activated; [0044] CXL controller 250 may receive an instruction and a configuration from the host 210…; [0084] the CXL device 220 may determine to activate only the first accelerator 310 in order to lighten parameters of a neural network. For another example, the CXL device 220 may determine to activate only the first accelerator 310 in advance to convert sparse data into dense data…Specifically, a mode setting circuit 330 of the CXL device 220 may determine to operate in the first mode in which only the first accelerator 310 is activated, and generate a control signal for instructing the first mode. The mode setting circuit 330 may provide the control signal to each of the first accelerator 310) from a first format to a second format ([0057] first to Nth operation circuits 311 to 31N included in the first accelerator 310 may include operation circuits for performing various lightening methods of reducing the size of the deep learning model. For example, the Nth operation circuit 31N may correspond to an operation circuit for performing quantization that converts a weight (i.e., parameter) expressed as a 32 bit floating point (i.e., first format) into a 16 bit floating point or an 8 bit integer type (i.e., second format)),… obtaining, by the memory apparatus and from the one or more host systems, a second command ([0044] CXL controller 250 may receive an instruction and a configuration from the host 210…) indicating that the one or more first parameters are to be modified from the first format to a third format ([0057] first to Nth operation circuits 311 to 31N included in the first accelerator 310 may include operation circuits for performing various lightening methods of reducing the size of the deep learning model. 
For example, the Nth operation circuit 31N may correspond to an operation circuit for performing quantization that converts a weight (i.e., parameter) expressed as a 32 bit floating point (i.e., first format) into a 16 bit floating point (i.e., third format) or an 8 bit integer type); generating, by the memory apparatus and based on the one or more first parameters, one or more second parameters associated with the full precision dataset, the one or more second parameters having the second format (FIG. 9 step 920 Perform acceleration on requested data by using first accelerator inside DRAM package; [0057]); generating, by the memory apparatus and based on the one or more first parameters, one or more third parameters associated with the full precision dataset, the one or more second parameters having the third format (FIG. 9 step 920 Perform acceleration on requested data by using first accelerator inside DRAM package; [0057]); and Kim does not appear to explicitly teach “wherein the command includes an indication of the first format and an indication of the second format…storing, by the memory apparatus, the one or more second parameters and the one or more third parameters.” However, Hari et al. disclose: wherein the command includes an indication of the first format and an indication of the second format ([0021] instruction 102 comprises an operation code (opcode) and (optionally, depending on the opcode) one or more operands. The opcode specifies the operation for the execution unit 108 of processor 104 to carry out. One or more of the operands may specify source locations of data to operate on, or control settings or characteristics (e.g., formatting) of the data to operate on. The source operands may be applied to a fetch unit 106 of the processor 104 for retrieval of values from registers or other locations in machine memory (cache, bulk main memory, etc.). 
One or more of the operands may specify a destination location for returning results on executing the opcode operation on the source operands; The operation specified by the opcode is an indication of the second format that the data is to be transformed to. In this case the data is being transformed to Kim’s 8 bit format. The operand for the formatting of the data to operate on corresponds to Kim’s 32 bit format (i.e., the first format).); The motivation for combining is based on the same rationale presented for rejection of independent claim 1. Kim and Hari et al. do not appear to explicitly teach “storing, by the memory apparatus, the one or more second parameters and the one or more third parameters.” However, Liu et al. disclose: storing, by the memory apparatus, the one or more second parameters and the one or more third parameters ([0061] the storage step S640, the storage unit 540 stores the quantized neural network). The motivation for combining is based on the same rationale presented for rejection of independent claim 1.

Regarding claim 28, Kim, Hari et al., and Liu et al. do not appear to explicitly teach "prioritizing generating the one or more second parameters over generating the one or more third parameters based on obtaining the first command before obtaining the second command." However, one of ordinary skill in the art before the effective filing date would prioritize a first command received before a second command. Such a procedure is similar to a first-in, first-out procedure. Additionally, there could be a significant time delay between receiving the two commands and it would be inefficient to wait for the second command. Therefore, the combination of Kim, Hari et al., and Liu et al. discloses: The method of claim 27, further comprising: prioritizing generating the one or more second parameters over generating the one or more third parameters based on obtaining the first command before obtaining the second command.
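The examiner's reasoning for claim 28 rests on ordinary first-in, first-out handling: a command received first is serviced first. That ordering assumption can be sketched in a few lines (illustrative only; the names are invented).

```python
from collections import deque

def process_commands(commands):
    """Service commands in arrival order, i.e. first in, first out.

    Minimal sketch of the FIFO behavior the rejection reasons about:
    the earlier-received command is always dequeued (and thus its
    parameters generated) before any later-received command.
    """
    queue = deque(commands)      # deque preserves arrival order
    completed = []
    while queue:
        cmd = queue.popleft()    # earliest command has priority
        completed.append(cmd)
    return completed

order = process_commands(["first_command", "second_command"])
```

Under this model the first command's output (the "second parameters") is generated before the later command's output (the "third parameters"), which is the prioritization the claim recites.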
Regarding claim 30, Kim further discloses: The method of claim 27, further comprising: …storing the one or more first parameters to the memory apparatus ([0079] second DRAM 620 to the fourth DRAM 640 may provide the sparse data to the first DRAM 610 through a bonding wire, and the first accelerator 650 may perform the primary operation on the sparse data received from the first DRAM 610 to the fourth DRAM 640 (i.e., the parameters have been stored in the memory devices)). Kim does not appear to explicitly teach “receiving the one or more first parameters from the one or more host systems.” However, Liu et al. further disclose: receiving the one or more first parameters from the one or more host systems ([0031] the user (i.e., host) can input for example neural networks to be quantized, specific task processing information (e.g. object detection task), etc., via the input device 150, wherein the neural networks to be quantized include for example various weights (e.g. floating-point weights));

Regarding claim 31, Kim further discloses: The method of claim 27, further comprising: providing, based on storing the one or more second parameters, the one or more second parameters to the one or more host systems (FIG. 9 step S930 Bypass second accelerator inside CXL controller and provide result of performing acceleration operation to host); and providing, based on storing the one or more third parameters, the one or more third parameters to the one or more host systems (FIG. 9 step S930 Bypass second accelerator inside CXL controller and provide result of performing acceleration operation to host).

Regarding claim 35, Kim discloses: A method, comprising: obtaining, by a memory apparatus (FIG. 3 CXL Device 220) and from one or more host systems (FIG. 3 Host 210), a command indicating that one or more first parameters associated with a full precision dataset are to be modified (FIG.
9 step 910 Determine to operate based on first mode in which only first accelerator is activated; [0044] CXL controller 250 may receive an instruction and a configuration from the host 210…; [0084] the CXL device 220 may determine to activate only the first accelerator 310 in order to lighten parameters of a neural network. For another example, the CXL device 220 may determine to activate only the first accelerator 310 in advance to convert sparse data into dense data…Specifically, a mode setting circuit 330 of the CXL device 220 may determine to operate in the first mode in which only the first accelerator 310 is activated, and generate a control signal for instructing the first mode. The mode setting circuit 330 may provide the control signal to each of the first accelerator 310) from a first format to a second format ([0057] first to Nth operation circuits 311 to 31N included in the first accelerator 310 may include operation circuits for performing various lightening methods of reducing the size of the deep learning model. For example, the Nth operation circuit 31N may correspond to an operation circuit for performing quantization that converts a weight (i.e., parameter) expressed as a 32 bit floating point (i.e., first format) into a 16 bit floating point or an 8 bit integer type (i.e., second format)),… obtaining, by the memory apparatus and based on obtaining the command, the one or more first parameters from the one or more source addresses, the one or more first parameters having the first format ([0050] first accelerator 310 may receive data from the plurality of DRAMs 231, 233, 235, and 237 and perform a primary operation based on the received data; [0057] first to Nth operation circuits 311 to 31N included in the first accelerator 310 may include operation circuits for performing various lightening methods of reducing the size of the deep learning model. 
For example, the Nth operation circuit 31N may correspond to an operation circuit for performing quantization that converts a weight (i.e., parameter) expressed as a 32 bit floating point (i.e., first format) into a 16 bit floating point or an 8 bit integer type (i.e., second format)); generating, by the memory apparatus and based on the one or more first parameters, one or more second parameters associated with the full precision dataset, the one or more second parameters having the second format (FIG. 9 step 920 Perform acceleration on requested data by using first accelerator inside DRAM package; [0057]); and Kim does not appear to explicitly teach “the command indicating one or more source addresses and one or more destination addresses, wherein the command includes an indication of the first format and an indication of the second format…storing, by the memory apparatus, the one or more second parameters to the one or more destination addresses.” However, Hari et al. disclose: the command indicating one or more source addresses and one or more destination addresses, wherein the command includes an indication of the first format and an indication of the second format ([0021] instruction 102 comprises an operation code (opcode) and (optionally, depending on the opcode) one or more operands. The opcode specifies the operation for the execution unit 108 of processor 104 to carry out. One or more of the operands may specify source locations of data to operate on, or control settings or characteristics (e.g., formatting) of the data to operate on. The source operands may be applied to a fetch unit 106 of the processor 104 for retrieval of values from registers or other locations in machine memory (cache, bulk main memory, etc.). 
One or more of the operands may specify a destination location for returning results on executing the opcode operation on the source operands; The operation specified by the opcode is an indication of the second format that the data is to be transformed to. In this case the data is being transformed to Kim’s 8 bit format. The operand for the formatting of the data to operate on corresponds to Kim’s 32 bit format (i.e., the first format).); The motivation for combining is based on the same rationale presented for rejection of independent claim 1. Kim and Hari et al. do not appear to explicitly teach “storing, by the memory apparatus, the one or more second parameters to the one or more destination addresses.” However, Liu et al. disclose: storing, by the memory apparatus, the one or more second parameters to the one or more destination addresses ([0061] the storage step S640, the storage unit 540 stores the quantized neural network). The motivation for combining is based on the same rationale presented for rejection of independent claim 1.

Regarding claim 36, Kim further discloses: The method of claim 35, further comprising: storing the one or more first parameters to the memory apparatus, wherein obtaining the command indicating that the one or more first parameters are to be modified is based on storing the one or more first parameters ([0079] second DRAM 620 to the fourth DRAM 640 may provide the sparse data to the first DRAM 610 through a bonding wire, and the first accelerator 650 may perform the primary operation on the sparse data received from the first DRAM 610 to the fourth DRAM 640 (i.e., the parameters have been stored in the memory devices)). Kim does not appear to explicitly teach “receiving the one or more first parameters from the one or more host systems.” However, Liu et al.
further disclose: receiving the one or more first parameters from the one or more host systems ([0031] the user (i.e., host) can input for example neural networks to be quantized, specific task processing information (e.g. object detection task), etc., via the input device 150, wherein the neural networks to be quantized include for example various weights (e.g. floating-point weights)); and

Regarding claim 37, Kim further discloses: The method of claim 35, further comprising: providing, based on storing the one or more second parameters, the one or more second parameters to the one or more host systems (FIG. 9 step S930 Bypass second accelerator inside CXL controller and provide result of performing acceleration operation to host).

Regarding claim 39, Kim discloses: A system, comprising: one or more memory devices (FIG. 3 DRAM Package 330); and a memory module controller (FIG. 3 CXL Device 220) comprising: a memory subsystem interface (FIG. 3 DRAM IF Circuit 260); and a controller (FIG. 3 AXL1 310) configured to: obtain, from one or more host systems (FIG. 3 Host 210), a first command indicating that one or more first parameters associated with a full precision dataset are to be modified (FIG. 9 step 910 Determine to operate based on first mode in which only first accelerator is activated; [0044] CXL controller 250 may receive an instruction and a configuration from the host 210…; [0084] the CXL device 220 may determine to activate only the first accelerator 310 in order to lighten parameters of a neural network. For another example, the CXL device 220 may determine to activate only the first accelerator 310 in advance to convert sparse data into dense data…Specifically, a mode setting circuit 330 of the CXL device 220 may determine to operate in the first mode in which only the first accelerator 310 is activated, and generate a control signal for instructing the first mode.
The mode setting circuit 330 may provide the control signal to each of the first accelerator 310) from a first format to a second format ([0057] first to Nth operation circuits 311 to 31N included in the first accelerator 310 may include operation circuits for performing various lightening methods of reducing the size of the deep learning model. For example, the Nth operation circuit 31N may correspond to an operation circuit for performing quantization that converts a weight (i.e., parameter) expressed as a 32 bit floating point (i.e., first format) into a 16 bit floating point or an 8 bit integer type (i.e., second format)),… obtain, from the one or more host systems, a second command ([0044] CXL controller 250 may receive an instruction and a configuration from the host 210…) indicating that the one or more first parameters are to be modified from the first format to a third format ([0057] first to Nth operation circuits 311 to 31N included in the first accelerator 310 may include operation circuits for performing various lightening methods of reducing the size of the deep learning model. For example, the Nth operation circuit 31N may correspond to an operation circuit for performing quantization that converts a weight (i.e., parameter) expressed as a 32 bit floating point (i.e., first format) into a 16 bit floating point (i.e., third format) or an 8 bit integer type); generate, based on the one or more first parameters, one or more second parameters associated with the full precision dataset, the one or more second parameters having the second format (FIG. 9 step 920 Perform acceleration on requested data by using first accelerator inside DRAM package; [0057]); generate, based on the one or more first parameters, one or more third parameters associated with the full precision dataset, the one or more second parameters having the third format (FIG. 
9 step 920 Perform acceleration on requested data by using first accelerator inside DRAM package; [0057]); and Kim does not appear to explicitly teach “wherein the command includes an indication of the first format and an indication of the second format…store the one or more second parameters and the one or more third parameters.” However, Hari et al. disclose: wherein the command includes an indication of the first format and an indication of the second format ([0021] instruction 102 comprises an operation code (opcode) and (optionally, depending on the opcode) one or more operands. The opcode specifies the operation for the execution unit 108 of processor 104 to carry out. One or more of the operands may specify source locations of data to operate on, or control settings or characteristics (e.g., formatting) of the data to operate on. The source operands may be applied to a fetch unit 106 of the processor 104 for retrieval of values from registers or other locations in machine memory (cache, bulk main memory, etc.). One or more of the operands may specify a destination location for returning results on executing the opcode operation on the source operands; The operation specified by the opcode is an indication of the second format that the data is to be transformed to. In this case the data is being transformed to Kim’s 8 bit format. The operand for the formatting of the data to operate on corresponds to Kim’s 32 bit format (i.e., the first format).); The motivation for combining is based on the same rationale presented for rejection of independent claim 1. Kim and Hari et al. do not appear to explicitly teach “store the one or more second parameters and the one or more third parameters.” However, Liu et al. disclose: store the one or more second parameters and the one or more third parameters ([0061] the storage step S640, the storage unit 540 stores the quantized neural network).
The motivation for combining is based on the same rationale presented for rejection of independent claim 1. Regarding claim 40, Kim, Hari et al., and Liu et al. do not appear to explicitly teach "prioritizing generating the one or more second parameters over generating the one or more third parameters based on obtaining the first command before obtaining the second command." However, one of ordinary skill in the art before the effective filing date would prioritize a first command received before a second command. Such a procedure is similar to a first-in-first-out procedure. Additionally, there could be a significant time delay between receiving the two commands and it would be inefficient to wait for the second command. Therefore, the combination of Kim, Hari et al., and Liu et al. discloses: The system of claim 39, wherein the controller is further configured to: prioritize generating the one or more second parameters over generating the one or more third parameters based on obtaining the first command before obtaining the second command. Regarding claim 42, Kim further discloses: The system of claim 39, wherein the controller is further configured to: …storing the one or more first parameters to the one or more memory devices ([0079] second DRAM 620 to the fourth DRAM 640 may provide the sparse data to the first DRAM 610 through a bonding wire, and the first accelerator 650 may perform the primary operation on the sparse data received from the first DRAM 610 to the fourth DRAM 640 (i.e., the parameters have been stored in the memory devices)). Kim does not appear to explicitly teach “receive the one or more first parameters from the one or more host systems.” However, Liu et al. further disclose: receive the one or more first parameters from the one or more host systems ([0031] the user (i.e., host) can input for example neural networks to be quantized, specific task processing information (e.g. 
object detection task), etc., via the input device 150, wherein the neural networks to be quantized include for example various weights (e.g. floating-point weights)); Regarding claim 43, Kim discloses: An apparatus, comprising: means for obtaining, by a memory apparatus (FIG. 3 CXL Device 220) and from one or more host systems (FIG. 3 Host 210), a command indicating that one or more first parameters associated with a full precision dataset are to be modified (FIG. 9 step 910 Determine to operate based on first mode in which only first accelerator is activated; [0044] CXL controller 250 may receive an instruction and a configuration from the host 210…; [0084] the CXL device 220 may determine to activate only the first accelerator 310 in order to lighten parameters of a neural network. For another example, the CXL device 220 may determine to activate only the first accelerator 310 in advance to convert sparse data into dense data…Specifically, a mode setting circuit 330 of the CXL device 220 may determine to operate in the first mode in which only the first accelerator 310 is activated, and generate a control signal for instructing the first mode. The mode setting circuit 330 may provide the control signal to each of the first accelerator 310) from a first format to a second format ([0057] first to Nth operation circuits 311 to 31N included in the first accelerator 310 may include operation circuits for performing various lightening methods of reducing the size of the deep learning model. 
For example, the Nth operation circuit 31N may correspond to an operation circuit for performing quantization that converts a weight (i.e., parameter) expressed as a 32 bit floating point (i.e., first format) into a 16 bit floating point or an 8 bit integer type (i.e., second format)),… means for obtaining, by the memory apparatus and based on obtaining the command, the one or more first parameters from the one or more source addresses, the one or more first parameters having the first format ([0050] first accelerator 310 may receive data from the plurality of DRAMs 231, 233, 235, and 237 and perform a primary operation based on the received data; [0057] first to Nth operation circuits 311 to 31N included in the first accelerator 310 may include operation circuits for performing various lightening methods of reducing the size of the deep learning model. For example, the Nth operation circuit 31N may correspond to an operation circuit for performing quantization that converts a weight (i.e., parameter) expressed as a 32 bit floating point (i.e., first format) into a 16 bit floating point or an 8 bit integer type (i.e., second format)); means for generating, by the memory apparatus and based on the one or more first parameters, one or more second parameters associated with the full precision dataset, the one or more second parameters having the second format (FIG. 9 step 920 Perform acceleration on requested data by using first accelerator inside DRAM package; [0057]); and Kim does not appear to explicitly teach “the command indicating one or more source addresses and one or more destination addresses, wherein the command includes an indication of the first format and an indication of the second format…means for storing, by the memory apparatus, the one or more second parameters to the one or more destination addresses.” However, Hari et al. 
disclose: the command indicating one or more source addresses and one or more destination addresses, wherein the command includes an indication of the first format and an indication of the second format ([0021] instruction 102 comprises an operation code (opcode) and (optionally, depending on the opcode) one or more operands. The opcode specifies the operation for the execution unit 108 of processor 104 to carry out. One or more of the operands may specify source locations of data to operate on, or control settings or characteristics (e.g., formatting) of the data to operate on. The source operands may be applied to a fetch unit 106 of the processor 104 for retrieval of values from registers or other locations in machine memory (cache, bulk main memory, etc.). One or more of the operands may specify a destination location for returning results on executing the opcode operation on the source operands; The operation specified by the opcode is an indication of the second format that the data is to be transformed to. In this case the data is being transformed to Kim’s 8 bit format. The operand for the formatting of the data to operate on corresponds to Kim’s 32 bit format (i.e., the first format).); The motivation for combining is based on the same rationale presented for rejection of independent claim 1. Kim and Hari et al. do not appear to explicitly teach “means for storing, by the memory apparatus, the one or more second parameters to the one or more destination addresses.” However, Liu et al. disclose: means for storing, by the memory apparatus, the one or more second parameters to the one or more destination addresses ([0061] the storage step S640, the storage unit 540 stores the quantized neural network). The motivation for combining is based on the same rationale presented for rejection of independent claim 1. 
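Kim's cited "lightening" operation in claims 1 and 43 above converts a weight expressed as a 32-bit floating point into a 16-bit floating point or an 8-bit integer. That format conversion can be sketched as follows; this is a minimal illustration, not code from any cited reference, and the symmetric per-tensor scale scheme, function names, and sample values are assumptions:

```python
import struct

def to_fp16(weights):
    """First format -> a 16-bit format: round fp32 values through IEEE 754
    half precision using struct's 'e' format code."""
    return [struct.unpack("e", struct.pack("e", w))[0] for w in weights]

def to_int8(weights):
    """First format -> an 8-bit integer format: symmetric per-tensor
    quantization (assumed scheme). Returns quantized values and the scale."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # avoid zero scale
    return [max(-128, min(127, round(w / scale))) for w in weights], scale

weights_fp32 = [0.5, -1.0, 0.25, 0.75]       # the "one or more first parameters"
weights_fp16 = to_fp16(weights_fp32)          # one reduced-precision copy
weights_int8, scale = to_int8(weights_fp32)   # a second reduced-precision copy
```

Both converted copies can then be stored alongside (or in place of) the full-precision parameters, mirroring the "store the one or more second parameters and the one or more third parameters" limitation.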
Regarding claim 44, Kim further discloses: The apparatus of claim 43, further comprising: …means for storing the one or more first parameters to the memory apparatus, wherein obtaining the command indicating that the one or more first parameters are to be modified is based on storing the one or more first parameters ([0079] second DRAM 620 to the fourth DRAM 640 may provide the sparse data to the first DRAM 610 through a bonding wire, and the first accelerator 650 may perform the primary operation on the sparse data received from the first DRAM 610 to the fourth DRAM 640 (i.e., the parameters have been stored in the memory devices)). Kim does not appear to explicitly teach “means for receiving the one or more first parameters from the one or more host systems.” However, Liu et al. further disclose: means for receiving the one or more first parameters from the one or more host systems ([0031] the user (i.e., host) can input for example neural networks to be quantized, specific task processing information (e.g. object detection task), etc., via the input device 150, wherein the neural networks to be quantized include for example various weights (e.g. floating-point weights)); Regarding claim 45, Kim further discloses: The apparatus of claim 43, further comprising: means for providing, based on storing the one or more second parameters, the one or more second parameters to the one or more host systems (FIG. 9 step S930 Bypass second accelerator inside CXL controller and provide result of performing acceleration operation to host). Regarding claim 47, Hari et al. further disclose: The apparatus of claim 43, wherein the one or more source addresses comprise one or more physical source addresses and the one or more destination addresses comprise one or more physical destination addresses ([0021] One or more of the operands may specify source locations of data to operate on, or control settings or characteristics (e.g., formatting) of the data to operate on. 
The source operands may be applied to a fetch unit 106 of the processor 104 for retrieval of values from registers or other locations in machine memory (cache, bulk main memory, etc.). One or more of the operands may specify a destination location for returning results on executing the opcode operation on the source operands). Claims 4-5, 38, and 46 are rejected under 35 U.S.C. 103 as being unpatentable over Kim, Hari et al., and Liu et al., as applied to claim 3 above, and further in view of Cairello et al. (US 2023/0046535). Regarding claim 4, Kim, Hari et al., and Liu et al. do not appear to explicitly teach while Cairello et al. disclose: The system of claim 3, wherein, to provide the one or more second parameters to the one or more host systems, the controller is configured to: set a value of a completion flag based on storing the one or more second parameters to the one or more destination addresses ([0045] The memory device 230 may maintain (e.g., store and manage the value of) a flag 225 (e.g., a completion flag), where the flag 225 may indicate whether any one of the planes 265 of memory device 230 is associated with a completed access operation. In some cases, the flag 225 may include a single bit associated with the memory device 230, which may be set to indicate a status of the memory device 230. 
For example, in response to the memory device 230 completing an access operation at any plane 265, the memory device 230 may set the flag 225 (e.g., to a first value such as a logical 1, a logical 0, an active high, or an active low) to indicate that at least one plane 265 is associated with a completed access operation)); obtain, from the one or more host systems and based on setting the value of the completion flag, one or more read commands for the one or more destination addresses ([0046] Based on the flag 225 indicating that at least one access operation has been completed, the memory system controller 215 may poll the register 245 to identify which of the planes 265 is associated with a completed access operation. Subsequently, the memory system controller 215 may transmit a command to any of the planes 265 associated with the completed access operations); and transmit, based on obtaining the one or more read commands, the one or more second parameters from the one or more destination addresses to the one or more host systems ([0046] In response to receiving the command from the memory system controller 215, the memory device 230 may transmit data corresponding to the access operation to the memory system controller 215 (e.g., via a direct memory access operation)). Kim, Hari et al., Liu et al., and Cairello et al. are analogous art because Kim teaches converting sparse data into dense data in a memory system; Hari et al. teach computer instructions; Liu et al. teach generating a quantized neural network; and Cairello et al. teach completion flags for memory operations. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kim, Hari et al., Liu et al., and Cairello et al. before him/her, to modify the combined teachings of Kim, Hari et al., and Liu et al. with the Cairello et al. 
teachings of a method for polling completion flags because doing so may reduce signaling overhead and increase possible parallelism of related signals (Cairello et al. [0012]). Regarding claim 5, Kim, Hari et al., and Liu et al. do not appear to explicitly teach while Cairello et al. disclose: The system of claim 3, wherein, to provide the one or more second parameters to the one or more host systems, the controller is configured to: transmit, to the one or more host systems and based on storing the one or more second parameters to the one or more destination addresses, an indication that the one or more second parameters are generated ([0046] The memory system controller 215 may poll the flag 225 (e.g., periodically, opportunistically, or in response to a command) to identify whether the flag indicates completion of at least one access operation at the memory device 230. Based on the flag 225 indicating that at least one access operation has been completed, the memory system controller 215 may poll the register 245 to identify which of the planes 265 is associated with a completed access operation); obtain, from the one or more host systems and based on transmitting the indication, one or more read commands for the one or more destination addresses ([0046] Based on the flag 225 indicating that at least one access operation has been completed, the memory system controller 215 may poll the register 245 to identify which of the planes 265 is associated with a completed access operation. 
Subsequently, the memory system controller 215 may transmit a command to any of the planes 265 associated with the completed access operations); and transmit, based on obtaining the one or more read commands, the one or more second parameters from the one or more destination addresses to the one or more host systems ([0046] In response to receiving the command from the memory system controller 215, the memory device 230 may transmit data corresponding to the access operation to the memory system controller 215 (e.g., via a direct memory access operation)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kim, Hari et al., Liu et al., and Cairello et al. before him/her, to modify the combined teachings of Kim, Hari et al., and Liu et al. with the Cairello et al. teachings of a method for polling completion flags because doing so may reduce signaling overhead and increase possible parallelism of related signals (Cairello et al. [0012]). Regarding claim 38, Kim, Hari et al., and Liu et al. do not appear to explicitly teach while Cairello et al. disclose: The method of claim 37, wherein providing the one or more second parameters to the one or more host systems comprises: setting a value of a completion flag based on storing the one or more second parameters to the one or more destination addresses ([0045] The memory device 230 may maintain (e.g., store and manage the value of) a flag 225 (e.g., a completion flag), where the flag 225 may indicate whether any one of the planes 265 of memory device 230 is associated with a completed access operation. In some cases, the flag 225 may include a single bit associated with the memory device 230, which may be set to indicate a status of the memory device 230. 
For example, in response to the memory device 230 completing an access operation at any plane 265, the memory device 230 may set the flag 225 (e.g., to a first value such as a logical 1, a logical 0, an active high, or an active low) to indicate that at least one plane 265 is associated with a completed access operation)); obtaining, from the one or more host systems and based on setting the value of the completion flag, one or more read commands for the one or more destination addresses ([0046] Based on the flag 225 indicating that at least one access operation has been completed, the memory system controller 215 may poll the register 245 to identify which of the planes 265 is associated with a completed access operation. Subsequently, the memory system controller 215 may transmit a command to any of the planes 265 associated with the completed access operations); and transmitting, based on obtaining the one or more read commands, the one or more second parameters from the one or more destination addresses to the one or more host systems ([0046] In response to receiving the command from the memory system controller 215, the memory device 230 may transmit data corresponding to the access operation to the memory system controller 215 (e.g., via a direct memory access operation)). The motivation for combining is based on the same rationale presented for rejection of claim 4. Regarding claim 46, Kim, Hari et al., and Liu et al. do not appear to explicitly teach while Cairello et al. 
disclose: The apparatus of claim 45, wherein the means for providing the one or more second parameters to the one or more host systems comprise: means for setting a value of a completion flag based on storing the one or more second parameters to the one or more destination addresses ([0045] The memory device 230 may maintain (e.g., store and manage the value of) a flag 225 (e.g., a completion flag), where the flag 225 may indicate whether any one of the planes 265 of memory device 230 is associated with a completed access operation. In some cases, the flag 225 may include a single bit associated with the memory device 230, which may be set to indicate a status of the memory device 230. For example, in response to the memory device 230 completing an access operation at any plane 265, the memory device 230 may set the flag 225 (e.g., to a first value such as a logical 1, a logical 0, an active high, or an active low) to indicate that at least one plane 265 is associated with a completed access operation)); means for obtaining, from the one or more host systems and based on setting the value of the completion flag, one or more read commands for the one or more destination addresses ([0046] Based on the flag 225 indicating that at least one access operation has been completed, the memory system controller 215 may poll the register 245 to identify which of the planes 265 is associated with a completed access operation. 
Subsequently, the memory system controller 215 may transmit a command to any of the planes 265 associated with the completed access operations); and means for transmitting, based on obtaining the one or more read commands, the one or more second parameters from the one or more destination addresses to the one or more host systems ([0046] In response to receiving the command from the memory system controller 215, the memory device 230 may transmit data corresponding to the access operation to the memory system controller 215 (e.g., via a direct memory access operation)). The motivation for combining is based on the same rationale presented for rejection of claim 4. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Kim, Hari et al., and Liu et al., as applied to claim 1 above, and further in view of Roberts et al. (US 2025/0036284). Regarding claim 7, Kim, Hari et al., and Liu et al. do not appear to explicitly teach while Roberts et al. disclose: The system of claim 1, wherein the one or more source addresses comprise one or more virtual source addresses and the one or more destination addresses comprise one or more virtual destination addresses ([0036] a mapping (e.g., an L2P table) to include a relation between one or more logical addresses of the data set and the destination address and remove a relation between one or more logical addresses of the data set and the source address; i.e., physical addresses of source and destination devices can be represented by logical addresses), and wherein the controller is further configured to: map the one or more virtual source addresses to one or more physical source addresses based on a mapping between one or more virtual addresses and one or more physical addresses ([0036]). Kim, Hari et al., Liu et al., and Roberts et al. are analogous art because Kim teaches converting sparse data into dense data in a memory system; Hari et al. teach computer instructions; Liu et al. 
teach generating a quantized neural network; and Roberts et al. teach data transfer commands. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kim, Hari et al., Liu et al., and Roberts et al. before him/her, to modify the combined teachings of Kim, Hari et al., and Liu et al. with the Roberts et al. teachings of including both source and destination addresses in commands because doing so informs the controller of where to find the data and where to store the data. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Kim, Hari et al., Liu et al., and Roberts et al. as applied to claim 7 above, and further in view of Tiwari et al. (US 2024/0281383). Regarding claim 8, Kim, Hari et al., and Liu et al. do not appear to explicitly teach while Tiwari et al. disclose: The system of claim 7, wherein the controller is further configured to: store the mapping to a buffer of the controller ([0012] A memory system may utilize a buffer (e.g., volatile memory) to store portions of a L2P address translation table; [0024] The memory system controller 115 may include hardware such as one or more integrated circuits or discrete components, a buffer memory). Kim, Hari et al., Liu et al., Roberts et al., and Tiwari et al. are analogous art because Kim teaches converting sparse data into dense data in a memory system; Hari et al. teach computer instructions; Liu et al. teach generating a quantized neural network; Roberts et al. teach data transfer commands; and Tiwari et al. teach storing L2P mappings. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kim, Hari et al., Liu et al., Roberts et al., and Tiwari et al. before him/her, to modify the combined teachings of Kim, Hari et al., Liu et al., and Roberts et al. with the Tiwari et al. 
teachings of storing the mapping to the buffer of the controller because doing so would decrease the latency of accessing the mapping. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Kim, Hari et al., and Liu et al., as applied to claim 3 above, and further in view of Shao et al. (US 2025/0117661). Regarding claim 13, Kim does not appear to explicitly teach “generate one or more third parameters associated with the full precision dataset based on executing the full precision dataset using the one or more second parameters.” However, Shao et al. disclose: The system of claim 1, wherein the controller is further configured to: generate one or more third parameters associated with the full precision dataset based on executing the full precision dataset using the one or more second parameters (FIG. 7 adaptive quantization fine-tuning process 700; [0124] This fine-tuning portion of the iterative training process 310 can iterate for several cycles, such as until the error is decreased to a targeted value and/or until a maximum training cycle is achieved); and Kim, Hari et al., and Shao et al. do not appear to explicitly teach "store, based on generating the one or more third parameters, the one or more third parameters to the one or more memory devices." However, Liu et al. further disclose: store, based on generating the one or more third parameters, the one or more third parameters to the one or more memory devices ([0061] the storage step S640, the storage unit 540 stores the quantized neural network). Kim, Hari et al., Liu et al., and Shao et al. are analogous art because Kim teaches converting sparse data into dense data in a memory system; Hari et al. teach computer instructions; Liu et al. teach generating a quantized neural network; and Shao et al. teach data quantization. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kim, Hari et al., Liu et al., and Shao et al. before him/her, to modify the combined teachings of Kim, Hari et al., and Liu et al. with the Shao et al. teachings of generating a third parameter using the second parameter because the adaptive quantization fine-tuning process may decrease the error to a targeted value and/or iterate until a maximum training cycle is reached. Claims 29 and 41 are rejected under 35 U.S.C. 103 as being unpatentable over Kim ('871), Hari et al., and Liu et al. as applied to claim 27 above, and further in view of Kim et al. (US 2023/0168921). Regarding claim 29, Kim (‘871), Hari et al., and Liu et al. do not appear to explicitly teach while Kim (‘921) disclose: The method of claim 27, further comprising: prioritizing generating the one or more second parameters over generating the one or more third parameters based on a first priority metric indicated by the first command and based on a second priority metric indicated by the second command ([0140] Among the weight data, one or more weight data having a relatively large size may be quantized with priority). Kim ('871), Hari et al., Liu et al., and Kim ('921) are analogous art because Kim ('871) teaches converting sparse data into dense data in a memory system; Hari et al. teach computer instructions; Liu et al. teach generating a quantized neural network; and Kim ('921) teach neural network processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kim ('871), Hari et al., Liu et al., and Kim ('921) before him/her, to modify the combined teachings of Kim ('871), Hari et al., and Liu et al. with Kim's ('921) teachings of including a priority parameter in the commands because doing so would inform the controller which command to execute first. 
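The examiner's prioritization rationale in claims 40 and 29 above, first-in-first-out ordering by default with an explicit priority metric able to override arrival order, can be sketched as a small scheduling model. This is an illustration only; the class name, the numeric "lower value runs first" convention, and the command strings are assumptions, not anything taught by the cited references:

```python
import heapq
import itertools

class CommandQueue:
    """Hypothetical model of command scheduling: an explicit priority metric
    (lower value = more urgent) wins; ties fall back to first-in-first-out."""

    def __init__(self):
        self._heap = []
        self._arrival = itertools.count()  # monotone arrival index breaks ties

    def submit(self, command, priority=0):
        # the heap orders by (priority metric, arrival order)
        heapq.heappush(self._heap, (priority, next(self._arrival), command))

    def next_command(self):
        return heapq.heappop(self._heap)[2]
```

With equal priority metrics the queue degenerates to the FIFO behavior relied on for claim 40; with distinct metrics it models the command-indicated prioritization of claims 29 and 41.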
Regarding claim 41, Kim (‘871), Hari et al., and Liu et al. do not appear to explicitly teach while Kim (‘921) disclose: The system of claim 39, wherein the controller is further configured to: prioritize generating the one or more second parameters over generating the one or more third parameters based on a first priority metric indicated by the first command and based on a second priority metric indicated by the second command ([0140] Among the weight data, one or more weight data having a relatively large size may be quantized with priority). The motivation for combining is based on the same rationale presented for rejection of claim 29. Allowable Subject Matter Claim 9 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims as discussed in the Non-Final Office Action mailed September 26, 2025. Response to Arguments Applicant’s arguments, filed December 15, 2025, with respect to the rejection(s) of claims under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Kim, Hari et al., and Liu et al. based on applicant’s amendment to the claims. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to TRACY A WARREN whose telephone number is (571)270-7288. The examiner can normally be reached M-Th 7:30am-5pm, Alternate F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arpan P. Savla can be reached at 571-272-1077. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /TRACY A WARREN/Primary Examiner, Art Unit 2137
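The completion-flag handshake cited from Cairello et al. for claims 4, 5, 38, and 46 in the action above, where the device stores the converted parameters, sets a flag, and the host polls the flag before issuing read commands against the destination addresses, can be modeled in a few lines. This is a hypothetical sketch of the mechanism as described, not code from the reference; class and function names and the example address are invented:

```python
class MemoryDevice:
    """Hypothetical sketch of a completion-flag handshake: the device sets a
    flag after storing converted parameters; the host polls the flag before
    issuing read commands for the destination addresses."""

    def __init__(self):
        self.completion_flag = 0   # single-bit status, like Cairello's flag 225
        self._memory = {}

    def store_parameters(self, dest_addr, params):
        self._memory[dest_addr] = params
        self.completion_flag = 1   # storing the result completes the operation

    def read(self, dest_addr):
        return self._memory[dest_addr]

def host_poll_and_read(device, dest_addr):
    # the host issues a read command only after the polled flag shows completion
    if device.completion_flag:
        return device.read(dest_addr)
    return None
```

The point of the combination, per the examiner, is that polling a single flag rather than every plane may reduce signaling overhead (Cairello et al. [0012]).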

Prosecution Timeline

Jul 25, 2024
Application Filed
Sep 26, 2025
Non-Final Rejection — §103
Nov 04, 2025
Interview Requested
Nov 24, 2025
Applicant Interview (Telephonic)
Dec 02, 2025
Examiner Interview Summary
Dec 15, 2025
Response Filed
Mar 02, 2026
Final Rejection — §103
Mar 31, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602174
SEMICONDUCTOR DEVICE, COMPUTING SYSTEM, AND DATA COMPUTING METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12578855
REMOTE POOLED MEMORY DEVICE
2y 5m to grant Granted Mar 17, 2026
Patent 12578887
BOOT PROCESS TO IMPROVE DATA RETENTION IN MEMORY DEVICES
2y 5m to grant Granted Mar 17, 2026
Patent 12572312
MEMORY DEVICE OPERATION BASED ON DEVICE CHARACTERISTICS
2y 5m to grant Granted Mar 10, 2026
Patent 12572306
VERIFYING CHUNKS OF DATA BASED ON READ-VERIFY COMMANDS
2y 5m to grant Granted Mar 10, 2026
Based on the examiner's 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
82%
Grant Probability
88%
With Interview (+6.0%)
2y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 422 resolved cases by this examiner. Grant probability derived from career allow rate.
