DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 02/03/2023 and 07/15/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: reference character 706 of fig. 7 is not mentioned in the specification. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or an amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
The drawings are objected to under 37 CFR 1.83(a) because they fail to show reference number 128 of fig. 1 as described in para. [0025] of the specification. Any structural detail that is essential for a proper understanding of the disclosed invention should be shown in the drawing. MPEP § 608.02(d). Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
The drawings are objected to under 37 CFR 1.83(a) because they fail to show reference number 24 of fig. 1 as described by paras. [0034] and [0035] of the specification. Any structural detail that is essential for a proper understanding of the disclosed invention should be shown in the drawing. MPEP § 608.02(d). Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The disclosure is objected to because of the following informalities:
Para. [0022]: identifies the command decoder as 110, when in fact fig. 1 identifies the command decoder as 106.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-11 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al., US 2020/0075099 A1 (“Choi”), in view of Son et al., US 2017/0076768 A1 (“Son”), and further in view of Si, Xin, et al., "A dual-split 6T SRAM-based computing-in-memory unit-macro with fully parallel product-sum operation for binarized DNN edge processors," IEEE Transactions on Circuits and Systems I: Regular Papers 66.11 (2019) (“Si”).
Regarding claim 1, Choi teaches a method comprising:
writing data indicative of a pattern to a register of the memory (Choi, para. 0031, see also fig. 1, “Also shown in FIG. 1 is circuit 130 comprising a detailed conceptional view of a CAM. Input search data 120 comprises an n-bit search word in this example, which is held in a search data register for presentation on search lines[writing data indicative of a pattern to a register of the memory].”);
and reading a result of the pattern matching operation from the memory (Choi, paras. 0040-0041, see also fig. 3, “As search data is presented on the search lines via search data registers/drivers 330, results can be monitored via rows of match lines (ML) fed into sense amplifiers 341-344. Encoder 350 can present a result based on which match line produces a hit. This result comprises match location 302 or match address. Control circuitry 360 is configured to write data into CAM 310, present input data as search words 301 to search CAM 310, and read out match locations 302[and reading a result of the pattern matching operation from the memory].”).
[issuing a command to the memory that causes the memory to perform a pattern matching operation based at least in part on the data from the machine learning data set written to the at least one memory cell] and the data indicative of the pattern written to the register of the memory (Choi, para. 0031, see also fig. 1, “Also shown in FIG. 1 is circuit 130 comprising a detailed conceptional view of a CAM. Input search data 120 comprises an n-bit search word in this example, which is held in a search data register for presentation on search lines[and the data indicative of the pattern written to the register of the memory].”).
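The CAM search flow that Choi describes (a search word held in a register is compared against all stored rows in parallel, and an encoder reports the match locations) can be sketched with the following illustrative model; the function and variable names here are hypothetical and not taken from Choi:

```python
# Illustrative model of the CAM search flow described in Choi (figs. 1 and 3):
# a search word held in the search data register is compared against every
# stored row in parallel, and an encoder reports the match locations.
# All names (cam_search, cam_contents) are hypothetical, not from Choi.

def cam_search(stored_words, search_word):
    """Return the addresses (match locations) whose stored word equals the search word."""
    return [addr for addr, word in enumerate(stored_words) if word == search_word]

# The pattern is written to the search data register, the search is issued,
# and the match locations are read out as the result.
cam_contents = ["1100", "1010", "0001", "1010"]
print(cam_search(cam_contents, "1010"))  # match locations 1 and 3
```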
Choi does not teach: issuing a command to the memory that causes the memory to perform a pattern matching operation.
However, Son teaches:
issuing a command to the memory that causes the memory to perform a pattern matching operation [based at least in part on the data from the machine learning data set written to the at least one memory cell and the data indicative of the pattern written to the register of the memory] (Son, paras. 0103-0109, see also figs. 6, 9 and 10, “A pattern write command and an address signal are received from the memory controller 200 (S120). In response to the pattern write command[issuing a command to the memory]... [w]hen a write request, address information ADD, and data DATA are received from the host, the data comparison unit 210 may compare the data DATA with the data pattern stored in the pattern buffer 220...the data comparison unit 210 may compare the data DATA with the plurality of data patterns Pattern 1 to Pattern m. When the data DATA matches a data pattern among the plurality of data patterns, the memory controller 200 may generate the pattern write command PW corresponding to the matching data pattern[that causes the memory to perform a pattern matching operation].”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Choi with the teachings of Son; the motivation to do so would be to design a memory system that is capable of doing in-memory processing without requiring an extraneous circuit to do so (Son, para. 0002, “[T]he inventive concept relate to a semiconductor memory device, and more particularly, to a memory device, a memory module, and a memory system capable of writing preset data to a memory cell array without using an input/output circuit when a write command is received.”).
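The controller-side flow that Son describes (write data received from the host is compared against a buffer of predefined patterns, and a pattern write command is issued on a match) can be modeled with the following minimal sketch; the names and pattern values are hypothetical, not Son's:

```python
# Sketch of the controller-side flow in Son (figs. 9-11): write data received
# from the host is compared against the patterns held in a pattern buffer, and
# on a match the controller issues a pattern write command (PW) identifying
# the matching pattern rather than an ordinary write. Values are hypothetical.

PATTERN_BUFFER = [0x00, 0xFF, 0xA5]  # Pattern 1 .. Pattern m

def handle_write(data, addr):
    """Return the command the memory controller would transmit to the memory device."""
    for index, pattern in enumerate(PATTERN_BUFFER):
        if data == pattern:
            return ("PW", index, addr)  # pattern write command for the matching pattern
    return ("WRITE", data, addr)        # ordinary write carrying the data itself

print(handle_write(0xFF, 0x40))  # ('PW', 1, 64)
print(handle_write(0x12, 0x44))  # ('WRITE', 18, 68)
```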
Choi in view of Son does not teach: based at least in part on the data from the machine learning data set written to the at least one memory cell; writing data from a machine learning data set to at least one memory cell of a memory.
However, Si teaches:
[issuing a command to the memory that causes the memory to perform a pattern matching operation] based at least in part on the data from the machine learning data set written to the at least one memory cell [and the data indicative of the pattern written to the register of the memory] (Si, pg. 4, see also fig. 3, Table I and Table II, “In XNORNN mode, multiple rows are activated at the same time, and each input data (IN)[based at least in part on the data from the machine learning data set] is pre-encoded to two wordlines (WLL and WLR)... [f]ig. 3(b) presents the detailed waveform of SRAM-CIM in XNOR operation. When an input (IN[i]) = ‘+1’, the corresponding WLL (WLL[i]) is asserted as ‘1’ and the corresponding WLR (WLR[i]) is asserted as ‘0’[written to the at least one memory cell].”);
writing data from a machine learning data set to at least one memory cell of a memory (Si, pg. 4, see also fig. 3, Table I and Table II, “In XNORNN mode, multiple rows are activated at the same time, and each input data (IN)[data from a machine learning data set] is pre-encoded to two wordlines (WLL and WLR)... [f]ig. 3(b) presents the detailed waveform of SRAM-CIM in XNOR operation. When an input (IN[i]) = ‘+1’, the corresponding WLL (WLL[i]) is asserted as ‘1’ and the corresponding WLR (WLR[i]) is asserted as ‘0’[writing to at least one memory cell of a memory].”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Choi in view of Son with the teachings of Si; the motivation to do so would be to implement a neural network within memory to avoid the memory bottleneck faced by traditional computing as it relates to transferring data, weights, and partial results to the processor, significantly improving inferencing (Si, pgs. 1-2, “[C]onventional all digital solutions have been unable to resolve the memory bottleneck. In conventional all digital solutions, process engine (PE) arrays typically exploit parallelized computation; however, they suffer from inefficient single-row SRAM access to weights, and larger SRAM arrays are required to store a huge amounts of intermediate data, as shown in Fig. 1(b). Furthermore, the energy required to access data from memory can far exceed the energy required for computing operations using that data... we implemented fully parallel product-sum operations within an SRAM cell array to improve performance in terms of area, energy efficiency, and yield against variations in data-pattern and transistor performance...[a] 65nm 4Kb algorithm-dependent SRAM-CIM unit-macro for XNOR neural network (XNORNN) and modified binary neural network (MBNN) was implemented.”).
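The XNOR product-sum operation that Si implements in the SRAM cell array can be illustrated with a minimal model: binarized inputs and stored weights take values +1/-1, each cell computes the XNOR of its input and weight (which, for sign values, equals their product), and the per-column results are summed. The function names below are hypothetical:

```python
# Minimal model of the fully parallel XNOR product-sum described in Si:
# inputs and weights are binarized to +1/-1; each bit-cell's XNOR of input
# and stored weight equals their product, and a column sums the cell outputs.
# Names (xnor_bit, xnor_product_sum) are hypothetical, not from Si.

def xnor_bit(x, w):
    # For values in {+1, -1}, XNOR of the sign bits equals the product x*w.
    return 1 if x == w else -1

def xnor_product_sum(inputs, weights):
    """Sum of per-cell XNOR results down one column (one output neuron)."""
    return sum(xnor_bit(x, w) for x, w in zip(inputs, weights))

inputs  = [+1, -1, +1, +1]  # binarized input data (IN)
weights = [+1, +1, -1, +1]  # trained weights stored in the cell column
print(xnor_product_sum(inputs, weights))  # 1 - 1 - 1 + 1 = 0
```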
Regarding claim 2, Choi in view of Son and Si teaches the method of claim 1, wherein the command indicates a type of result to be returned from the pattern matching operation (Son, paras. 0113-0114, see also fig. 11, “When a write request, data, and address information are received from the host (S220), the memory controller 200 may compare the received data with the predefined data pattern (S230). When the received data matches the predefined data pattern, the memory controller 200 may transmit a pattern write command and an address signal to the memory device 100 (S240)[wherein the command indicates a type of result to be returned from the pattern matching operation].”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Choi with the above teachings of Son for the same rationale stated at Claim 1.
Regarding claim 3, Choi in view of Son and Si teaches the method of claim 2, wherein the type of result is a number of times the pattern occurs in the data (Choi, paras. 0038-0039, see also fig. 2, “Associative memory 220 is updated to hold frequent results from processing input...[t]his updating can be performed based on various criteria, such as when results are similar to previous results, or using every result during an initialization period until the associative memory fills to capacity. A subsequent hit can indicate to allow resultant data to remain in the associative memory, and data with few hits associated therewith can be replaced with new results[is a number of times the pattern occurs in the data].”).
Regarding claim 4, Choi in view of Son and Si teaches the method of claim 2, wherein the type of result is a location where the pattern occurs in the data (Son, paras. 0113-0114, see also fig. 11, “When the received data matches the predefined data pattern, the memory controller 200 may transmit a pattern write command and an address signal to the memory device 100 (S240)[is a location where the pattern occurs in the data].”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Choi with the above teachings of Son for the same rationale stated at Claim 1.
Regarding claim 5, Choi in view of Son and Si teaches the method of claim 1, further comprising training a machine learning model based, at least in part, on the machine learning data set and adjusting a weight of the machine learning model (Si, pgs. 4-5, see also Table I and Table II, “In XNORNN mode[training a machine learning model], multiple rows are activated at the same time, and each input data (IN) is pre-encoded to two wordlines (WLL and WLR)[based, at least in part, on the machine learning data set]. The weights (W) of m-weight FCNL are stored in consecutive mDSC6T memory cells (MC) in the same column... a SRAM cell stores the weight “+1”, its storage node (Q) stores logic “1”. When a dual split control (DSC6T) cell stores the weight “−1”, the corresponding Q = 0 and QB = 1. Table II presents a truth table of the XNOR operation[and adjusting a weight of the machine learning model].”) based at least in part on the result of the pattern matching operation (Choi, paras. 0040-0041, see also fig. 3, “Encoder 350 can present a result based on which match line produces a hit. This result comprises match location 302 or match address. Control circuitry 360 is configured to write data into CAM 310, present input data as search words 301 to search CAM 310, and read out match locations 302[based at least in part on the result of the pattern matching operation].”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Choi in view of Son with the above teachings of Si for the same rationale stated at Claim 1.
Regarding claim 6, Choi in view of Son and Si teaches the method of claim 1, further comprising pre-processing a data set removing a portion of the data set based at least in part on the result of the pattern matching operation (Son, para. 0139, “When data is stored in all of the storage areas of the first pattern buffer PBUF1 and new data, e.g., the (N+1)-th data (DataN+1), is received, the memory controller 200 may update a data pattern of the first pattern buffer PBUF1 by deleting data that was matched the earliest, e.g., data having a matching order MN of N, e.g., Data1, and storing the (N+1)-th data (DataN+1)[pre-processing a data set removing a portion of the data set based at least in part on the result of the pattern matching operation].”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Choi with the above teachings of Son for the same rationale stated at Claim 1.
Regarding claim 7, Choi in view of Son and Si teaches the method of claim 6, further comprising training a machine learning model (Si, pgs. 4-5, see also Table I and Table II, “In XNORNN mode[training a machine learning model], multiple rows are activated at the same time, and each input data (IN) is pre-encoded to two wordlines (WLL and WLR)...”) based, at least in part, on the data set with the portion removed (Son, para. 0139, “When data is stored in all of the storage areas of the first pattern buffer PBUF1 and new data, e.g., the (N+1)-th data (DataN+1), is received, the memory controller 200 may update a data pattern of the first pattern buffer PBUF1 by deleting data that was matched the earliest, e.g., data having a matching order MN of N, e.g., Data1, and storing the (N+1)-th data (DataN+1)[based, at least in part, on the data set with the portion removed].”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Choi with the above teachings of Son and Si for the same rationale stated at Claim 1.
Regarding claim 8, Choi in view of Son and Si teaches the method of claim 1, further comprising generating an inference from the data based at least in part on the result of the pattern matching operation, wherein the inference is provided to a user(Choi, para. 0083, “In some examples, TCAM R/W service 824 can implement output encoder/decoders or multiplexer logic to assemble discrete search result values into bit vectors or to handle multiple search match outputs produced by a TCAM. TCAM R/W service 824 can transmit resultant search match/mismatch indications[generating an inference from the data based at least in part on the result of the pattern matching operation] to one or more further systems over communication interface 807, or present the resultant search match/mismatch indication to one or more users over user interface system 808[wherein the inference is provided to a user].”).
Regarding claim 9, Choi in view of Son and Si teaches the method of claim 1, wherein the pattern is provided from the machine learning data set (Choi, paras. 0036-0037, see also fig. 2, “FIG. 2 is presented which illustrates an example system 200 for application of a CAM into a neural network implementation, which can provide enhanced operation of a GPU-implemented neural network or machine learning task processor[machine learning]...[i]n FIG. 2, inputs 201[machine learning dataset] are presented to a pipeline of FPU stages 211-215 of FPU 210 within a corresponding GPGPU. A floating-point result (QFPU) results from processing the inputs through the FPU pipeline. Inputs 201 are also concurrently presented to TCAM 221 as input search data. When a search is successful in TCAM 221 (e.g. a search hit), then hit indicator signal 223 is presented to pipeline control circuit 216[wherein the pattern is provided from the machine learning dataset].”).
Regarding claim 10, Choi in view of Son and Si teaches the method of claim 1, further comprising performing a machine learning operation and generating a first result, wherein the pattern is provided from the first result (Choi, paras. 0036-0037, see also fig. 2, “FIG. 2 is presented which illustrates an example system 200 for application of a CAM into a neural network implementation, which can provide enhanced operation of a GPU-implemented neural network or machine learning task processor...[i]n FIG. 2, inputs 201 are presented to a pipeline of FPU stages 211-215 of FPU 210 within a corresponding GPGPU. A floating-point result (QFPU) results from processing the inputs through the FPU pipeline[performing a machine learning operation and generating a first result]...[o]nce the hit result produces a data output from the memory associated with TCAM 221, then a corresponding result (QCAM) is provided as an output of the pipeline[the pattern is provided from the first result] instead of the (QFPU) result.”).
Regarding claim 11, Choi in view of Son and Si teaches the method of claim 10, wherein the machine learning operation is performed by processing circuitry (Choi, paras. 0036-0037, see also fig. 2, “FIG. 2 is presented which illustrates an example system 200 for application of a CAM into a neural network implementation, which can provide enhanced operation of a GPU-implemented neural network or machine learning task processor[is performed by processing circuitry].”).
Regarding claim 14, Choi in view of Son and Si teaches the method of claim 1, wherein the pattern matching command is provided by a memory controller (Son, paras. 0113-0114, see also fig. 11, “When a write request, data, and address information are received from the host (S220), the memory controller 200 may compare the received data with the predefined data pattern (S230). When the received data matches the predefined data pattern, the memory controller 200 may transmit a pattern write command[is provided by a memory controller] and an address signal to the memory device 100 (S240).”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Choi with the above teachings of Son for the same rationale stated at Claim 1.
Regarding claim 15, Choi teaches a method comprising:
storing a pattern in a pattern register of the memory (Choi, para. 0031, see also fig. 1, “Also shown in FIG. 1 is circuit 130 comprising a detailed conceptional view of a CAM. Input search data 120 comprises an n-bit search word in this example, which is held in a search data register for presentation on search lines[storing a pattern in a pattern register of the memory].”).
Choi does not teach: receiving a pattern matching command; responsive to the pattern matching command, performing, with pattern matching circuitry of the memory, a pattern matching operation on the data and the pattern and generating a result; and providing the result to a memory controller.
However, Son teaches:
receiving a pattern matching command (Son, paras. 0113-0114, see also fig. 11, “Referring to FIG. 11, the memory controller 200 defines a data pattern (S210). The data pattern may be data that is determined in advance between the memory controller 200 and the memory device 100... [w]hen a write request, data, and address information are received from the host (S220), the memory controller 200 may compare the received data with the predefined data pattern (S230)[receiving a pattern matching command]. When the received data matches the predefined data pattern, the memory controller 200 may transmit a pattern write command and an address signal to the memory device 100 (S240).”);
responsive to the pattern matching command, performing, with pattern matching circuitry of the memory, a pattern matching operation on the data and the pattern and generating a result (Son, paras. 0113-0114, see also fig. 11, “Referring to FIG. 11, the memory controller 200 defines a data pattern (S210). The data pattern may be data that is determined in advance between the memory controller 200 and the memory device 100... [w]hen a write request, data, and address information are received from the host (S220)[responsive to the pattern matching command], the memory controller 200 may compare the received data with the predefined data pattern (S230)[performing, with pattern matching circuitry of the memory, a pattern matching operation on the data and the pattern]. When the received data matches the predefined data pattern, the memory controller 200 may transmit a pattern write command and an address signal to the memory device 100 (S240)[and generating a result].”);
and providing the result to a memory controller (Son, paras. 0196-0201, see also fig. 26, “When the data matches the predefined data pattern, the memory device 100 may transmit a matching signal RDM and pattern information BPIF to the memory controller 200 (S861).”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Choi with the teachings of Son; the motivation to do so would be to design a memory system that is capable of doing in-memory processing without requiring an extraneous circuit to do so (Son, para. 0002, “[T]he inventive concept relate to a semiconductor memory device, and more particularly, to a memory device, a memory module, and a memory system capable of writing preset data to a memory cell array without using an input/output circuit when a write command is received.”).
Choi in view of Son does not teach: storing data from a machine learning data set in a memory cell array of a memory.
However, Si teaches:
storing data from a machine learning data set in a memory cell array of a memory (Si, pgs. 3-4, see also fig. 3, Table I and Table II, “SRAM mode is activated to store the trained weight (write operation)[storing data from a machine learning data set]... [i]n SRAM mode, only one row is activated for read and write operations... [t]he SRAM cell array can be accessed with both WLL and WLR on, which is like the read and write operations of a conventional SRAM[in a memory cell array of a memory]. In such situations, the trained weights are stored in the dual-split-control 6T (DSC6T) cell array via a write operation in SRAM mode under nominal-VDD.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Choi in view of Son with the teachings of Si; the motivation to do so would be to implement a neural network within memory to avoid the memory bottleneck faced by traditional computing as it relates to transferring data, weights, and partial results to the processor, significantly improving inferencing (Si, pgs. 1-2, “[C]onventional all digital solutions have been unable to resolve the memory bottleneck. In conventional all digital solutions, process engine (PE) arrays typically exploit parallelized computation; however, they suffer from inefficient single-row SRAM access to weights, and larger SRAM arrays are required to store a huge amounts of intermediate data, as shown in Fig. 1(b). Furthermore, the energy required to access data from memory can far exceed the energy required for computing operations using that data... we implemented fully parallel product-sum operations within an SRAM cell array to improve performance in terms of area, energy efficiency, and yield against variations in data-pattern and transistor performance...[a] 65nm 4Kb algorithm-dependent SRAM-CIM unit-macro for XNOR neural network (XNORNN) and modified binary neural network (MBNN) was implemented.”).
Regarding claim 16, Choi in view of Son and Si teaches the method of claim 15, further comprising storing the result in a result register (Choi, para. 0040, see also fig. 3, “As search data is presented on the search lines via search data registers/drivers 330, results can be monitored via rows of match lines (ML) fed into sense amplifiers 341-344[storing the result in a result register]. Encoder 350 can present a result based on which match line produces a hit. This result comprises match location 302 or match address.”).
Regarding claim 17, Choi in view of Son and Si teaches the method of claim 15, wherein the result is based, at least in part, on the pattern matching command (Son, paras. 0113-0114, see also fig. 11, “When a write request, data, and address information are received from the host (S220), the memory controller 200 may compare the received data with the predefined data pattern (S230). When the received data matches the predefined data pattern, the memory controller 200 may transmit a pattern write command[is based, at least in part, on the pattern matching command] and an address signal to the memory device 100 (S240).”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Choi with the above teachings of Son for the same rationale stated at Claim 15.
Regarding claim 18, Choi in view of Son and Si teaches the method of claim 17, wherein the result includes a number of times the pattern is present in the data (Choi, paras. 0038-0039, see also fig. 2, “Associative memory 220 is updated to hold frequent results from processing input...[t]his updating can be performed based on various criteria, such as when results are similar to previous results, or using every result during an initialization period until the associative memory fills to capacity. A subsequent hit can indicate to allow resultant data to remain in the associative memory, and data with few hits associated therewith can be replaced with new results[a number of times the pattern is present in the data].”).
Regarding claim 19, Choi in view of Son and Si teaches the method of claim 17, wherein the result includes a location in the data where the pattern is present (Son, paras. 0113-0114, see also fig. 11, “When the received data matches the predefined data pattern, the memory controller 200 may transmit a pattern write command and an address signal to the memory device 100 (S240)[a location in the data where the pattern is present].”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Choi with the above teachings of Son for the same rationale stated at Claim 15.
Regarding claim 20, Choi in view of Son and Si teaches the method of claim 15, further comprising enabling the pattern matching circuitry with a signal from a mode register (Son, para. 0059, see also fig. 2, “The control logic 120 may include a command decoder 121 and a mode register 122, and may control overall operations of the memory device 100... [t]he mode register 122 may set an internal register, in response to the address signal ADDR and a mode register signal, to determine an operation mode of the memory device 100[enabling the pattern matching circuitry with a signal from a mode register].”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Choi with the above teachings of Son for the same rationale as stated in the rejection of claim 15.
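For illustration only (this sketch is not from Son; the bit position and all identifiers are assumed), gating pattern matching circuitry on a mode-register field can be modeled as a register whose written value determines an operation mode:

```python
# Hypothetical mode-register sketch: a single register bit enables the
# pattern matching operation mode. The bit assignment is assumed, not Son's.

PATTERN_MATCH_EN = 1 << 3  # assumed bit position, for illustration only

class ModeRegister:
    def __init__(self):
        self.value = 0

    def write(self, value: int) -> None:
        # Set the internal register in response to a mode register signal.
        self.value = value

    def pattern_matching_enabled(self) -> bool:
        # The register value determines the operation mode of the device.
        return bool(self.value & PATTERN_MATCH_EN)
```

Writing a value with the assumed enable bit set turns the pattern matching mode on; clearing it turns the mode off.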
Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al. US 2020/0075099 A1 (“Choi”) in view of Son et al. US 2017/0076768 A1 (“Son”), in view of Si, Xin, et al., "A dual-split 6T SRAM-based computing-in-memory unit-macro with fully parallel product-sum operation for binarized DNN edge processors," IEEE Transactions on Circuits and Systems I: Regular Papers 66.11 (2019) (“Si”), and further in view of Wang, Ying, et al., "ProPRAM: Exploiting the transparent logic resources in non-volatile memory for near data computing," Proceedings of the 52nd Annual Design Automation Conference, 2015 (“Wang”).
Regarding claim 12, Choi in view of Son and Si teaches the method of claim 1, but does not teach: further comprising translating code of a machine learning application into the pattern matching command.
However, Wang teaches:
translating code of a machine learning application into the pattern matching command (Wang, pgs. 5-6, “Translating the instructions into the bus signals according to the memory protocol needs another layer to turn the memory into a proactive device... [t]here are only two types of commands. One is to pass the decoded instructions and direct the config file from designated memory space into the FSM controller for DCW configuration. The other is to initiate the data trunk in the designated row-buffer...[t]he symbol stream continuously flows into the buffer entries for comparison as specified by the memory commands below[translating code of a machine learning application into the pattern matching command]:
XLOAD addr-1; //Load the first page (in addr-1) of reference stream into buffer-0
XLOAD addr-2; //Load the key (in addr-2) from memory to buffer-1
XWRT addr-c, addr-d, length, sym_size, match-sum; // Load the config data from addr-c to FSM, and trunk size and the symbol size into AGU
DCW Execution: For (j=0, j<length, j++){
Shift and Comp block-j of size sym_size to buffer-0
Increment the match bit in buffer-3 }
Store final result of buffer-3 to space of addr-d
”).
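As a functional sketch of the quoted shift-and-compare flow (the sketch is not Wang's, assumes byte-granularity symbols, and uses hypothetical identifiers), sliding a key across a reference stream block by block and summing the matches behaves like the following:

```python
# Hypothetical functional model of the quoted XLOAD/XWRT/DCW flow.
# Illustrative only; these identifiers do not appear in Wang.

def dcw_match_sum(reference: bytes, key: bytes, sym_size: int) -> int:
    """Shift and compare each block-j of size sym_size against the key
    and sum the matches, mirroring 'Increment the match bit'."""
    match_sum = 0
    for j in range(len(reference) - sym_size + 1):
        block = reference[j:j + sym_size]   # block-j from the reference stream
        if block == key[:sym_size]:         # compare against the loaded key
            match_sum += 1                  # accumulate the match count
    return match_sum
```

For example, with a 3-byte symbol size, `dcw_match_sum(b"abcabc", b"abc", 3)` yields 2, one match at each aligned occurrence of the key.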
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Choi in view of Son and Si with the teachings of Wang; the motivation to do so would be to design memory hardware to perform in-memory processing for applications that do not fit the traditional memory transfer hierarchy (Wang, pgs. 2-3, “It is the emerging highly-parallel and big data applications that recently stimulated the research on PIM proposed decades ago. Compared to computation capability, the memory bandwidth and data movement are thought as the huge bottleneck in these new data-hungry workloads in the field of database, media and network data analysis... ProPRAM is intended for such current massive-scale data processing workloads that are insensitive to computation capability...[f]or example, Key-Value operations dominate the datacenter applications such as text processing, web searches, network routing and etc, and they can be covered by simple compare, search and sort primitives.”).
Regarding claim 13, Choi in view of Son, Si, and Wang teaches the method of claim 12, wherein the translating is performed by a memory application programming interface (Wang, pgs. 5-6, “Therefore, we adopt the control flow that’s commonly adopted in reconfigurable computing. There are only two types of commands. One is to pass the decoded instructions and direct the config file from designated memory space into the FSM controller for DCW configuration[is performed by a memory application programming interface].”).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20150235703 A1 (details a content addressable memory (CAM) circuit using nonvolatile memory to perform in-memory searching and pattern matching)
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADAM C STANDKE whose telephone number is (571) 270-1806. The examiner can normally be reached generally M-F, 9AM-9PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J Huntley can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Adam C Standke/
Primary Examiner
Art Unit 2129
1 Examiner Notes: The claim limitations that are not in bold and contained within square brackets (i.e., [ ]) are claim limitations that are not taught by the prior art of Choi.
2 Examiner Notes: The claim limitations that are not in bold and contained within square brackets (i.e., [ ]) are claim limitations that are not taught by the prior art of Son.
3 Examiner Notes: The claim limitations that are not in bold and contained within square brackets (i.e., [ ]) are claim limitations that are not taught by the prior art of Si.