DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Other References: Chow (US 8553482) – sense amplifier and sense amplifier latch having common control.
Terminal Disclaimer Approved 1/29/2026.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6, 7, 8, 13, 14, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Walker (US 20110093662 A1) in view of Bloomfield (US 20010036322 A1) and further in view of Chall (US 9220176).
Claim 1. Walker662 discloses A memory device (e.g., Fig. 2, 0027 – memory device 34), comprising:
an array of memory cells and a compute component in communication with the array of memory cells (e.g., Fig. 2, 0035 - memory cells, such as banks 54a-54d of the memory array 36; compute engine 38 may be embedded on the memory device 34),
the array of memory cells configured to store operands (e.g., 0006 - Data (e.g., the operands on which the instructions will be executed) may be stored in a memory device (e.g., a memory array)), and
wherein the compute component is configured to execute instructions by causing a logical operation to be performed on the operands (e.g., 0030 - The compute engine 38 may be one example of an internal processor, and may include one or more arithmetic logic units (ALUs)).
Walker662 does not disclose, but Bloomfield discloses
wherein performing the logical operation comprises performing one or more operational phases without transferring the operands to a host (e.g., 0051 - PIM 44 also holds intermediate results as they are developed for use in subsequent processing and final results of processing).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the memory device with internal processor, as disclosed by Walker662, with Bloomfield, providing the benefit that individual specialized processing elements are explicitly embedded at the correct location in the data flow and no system resources are required to distribute the data; thus data pipelines, once set up, are virtually maintenance free, continuing to process image data without any further contact with the host processor (see Bloomfield, 0011).
Walker662 in view of Bloomfield does not disclose, but Chall discloses
direct (e.g., Fig. 5, 0060 - LPM 550 is coupled to the CCP 540; Fig. 5, 0068 - CCP 540 of a device can read and write from the memory of the device, including erasing EEPROM of the device. In another embodiment, the CCP 540 can directly address the memory of neighboring devices… LPM 550 or bulk memory can only be accessed by the local CCP 540).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the memory device with internal processor, as disclosed by Walker662 in view of Bloomfield, with Chall, providing the benefit of addressing the performance limitation known in the art as the von Neumann bottleneck, caused by separating the processor and the memory on a motherboard (see Chall, 0007): delays in the processing of data caused by the respective data buses on the various motherboards are compounded, as service requests between processors are subject to delays on each motherboard on which a service request is processed (see Chall, 0008), and Chall therefore determines whether or not a service request can be processed locally (see Chall, 0024).
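For illustration of the claimed arrangement only — a compute component performing a logical operation on operands held in the memory array, without transferring the operands to a host — the following is a minimal hypothetical sketch; the class and method names are illustrative and are not drawn from any cited reference:

```python
# Minimal sketch of a processing-in-memory (PIM) device: the compute
# component executes a logical operation on operands resident in the
# memory array, so the operands never cross a bus to a host processor.

class PIMDevice:
    def __init__(self, cells):
        self.array = list(cells)          # array of memory cells (operands)

    def execute(self, op, addr_a, addr_b, dest):
        """Perform a logical operation entirely inside the device."""
        a = self.array[addr_a]            # operands read from the array
        b = self.array[addr_b]
        ops = {"and": a & b, "or": a | b, "xor": a ^ b, "add": a + b}
        self.array[dest] = ops[op]        # result written back to the array
        return self.array[dest]

dev = PIMDevice([0b1100, 0b1010, 0, 0])
dev.execute("and", 0, 1, 2)   # 0b1000 stored at cell 2
dev.execute("add", 0, 1, 3)   # 22 stored at cell 3
```

The point of the sketch is that both the read of the operands and the write of the result stay within the device boundary, which is the substance of the "without transferring the operands to a host" limitation.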
Claim 6. Walker662 discloses wherein the logical operation comprises an addition function or a multiply function (e.g., 0005 - ALU circuitry may add, subtract, multiply, or divide one operand from another).
Claim 7. Walker662 discloses wherein the memory device is a processor-in-memory (PIM) device (e.g., memory may be a processor-in-memory (PIM)).
Claim 8. Walker662 discloses A method for operating a memory device (e.g., Fig. 2, 0027 – memory device 34), comprising:
executing, by a compute component in communication with an array of memory cells, instructions by causing a logical operation to be performed on, wherein performing the logical operation comprises: performing one or more operational phases (e.g., Fig. 2, 0035 - memory cells, such as banks 54a-54d of the memory array 36; compute engine 38 may be embedded on the memory device 34),
operands stored in the array of memory cells (e.g., 0006 - Data (e.g., the operands on which the instructions will be executed) may be stored in a memory device (e.g., a memory array)), and
wherein the compute component is configured to execute instructions by causing a logical operation to be performed on the operands (e.g., 0030 - The compute engine 38 may be one example of an internal processor, and may include one or more arithmetic logic units (ALUs)).
Walker662 does not disclose, but Bloomfield discloses
without transferring the operands to a host (e.g., 0051 - PIM 44 also holds intermediate results as they are developed for use in subsequent processing and final results of processing).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the memory device with internal processor, as disclosed by Walker662, with Bloomfield, providing the benefit that individual specialized processing elements are explicitly embedded at the correct location in the data flow and no system resources are required to distribute the data; thus data pipelines, once set up, are virtually maintenance free, continuing to process image data without any further contact with the host processor (see Bloomfield, 0011).
Walker662 in view of Bloomfield does not disclose, but Chall discloses
direct (e.g., Fig. 5, 0060 - LPM 550 is coupled to the CCP 540; Fig. 5, 0068 - CCP 540 of a device can read and write from the memory of the device, including erasing EEPROM of the device. In another embodiment, the CCP 540 can directly address the memory of neighboring devices… LPM 550 or bulk memory can only be accessed by the local CCP 540).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the memory device with internal processor, as disclosed by Walker662 in view of Bloomfield, with Chall, providing the benefit of addressing the performance limitation known in the art as the von Neumann bottleneck, caused by separating the processor and the memory on a motherboard (see Chall, 0007): delays in the processing of data caused by the respective data buses on the various motherboards are compounded, as service requests between processors are subject to delays on each motherboard on which a service request is processed (see Chall, 0008), and Chall therefore determines whether or not a service request can be processed locally (see Chall, 0024).
Claim 13 is rejected for reasons similar to Claim 6 above.
Claim 14 is rejected for reasons similar to Claim 7 above.
Claim 15. Walker662 discloses A memory device (e.g., Fig. 2, 0027 – memory device 34), comprising:
an array of memory cells; and a compute component in communication with the array of memory cells, wherein the compute component is configured to:
(e.g., Fig. 2, 0035 - memory cells, such as banks 54a-54d of the memory array 36; compute engine 38 may be embedded on the memory device 34),
the array of memory cells configured to store operands (e.g., 0006 - Data (e.g., the operands on which the instructions will be executed) may be stored in a memory device (e.g., a memory array)), and
execute instructions by causing a logical operation to be performed on operands stored in the array of memory cells (e.g., 0030 - The compute engine 38 may be one example of an internal processor, and may include one or more arithmetic logic units (ALUs)).
Walker662 does not disclose, but Bloomfield discloses
wherein, to perform the logical operation,
the compute component is configured to: perform one or more operational phases without transferring the operands to a host
(e.g., 0051 - PIM 44 also holds intermediate results as they are developed for use in subsequent processing and final results of processing).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the memory device with internal processor, as disclosed by Walker662, with Bloomfield, providing the benefit that individual specialized processing elements are explicitly embedded at the correct location in the data flow and no system resources are required to distribute the data; thus data pipelines, once set up, are virtually maintenance free, continuing to process image data without any further contact with the host processor (see Bloomfield, 0011).
Walker662 in view of Bloomfield does not disclose, but Chall discloses
direct (e.g., Fig. 5, 0060 - LPM 550 is coupled to the CCP 540; Fig. 5, 0068 - CCP 540 of a device can read and write from the memory of the device, including erasing EEPROM of the device. In another embodiment, the CCP 540 can directly address the memory of neighboring devices… LPM 550 or bulk memory can only be accessed by the local CCP 540).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the memory device with internal processor, as disclosed by Walker662 in view of Bloomfield, with Chall, providing the benefit of addressing the performance limitation known in the art as the von Neumann bottleneck, caused by separating the processor and the memory on a motherboard (see Chall, 0007): delays in the processing of data caused by the respective data buses on the various motherboards are compounded, as service requests between processors are subject to delays on each motherboard on which a service request is processed (see Chall, 0008), and Chall therefore determines whether or not a service request can be processed locally (see Chall, 0024).
Claim 20 is rejected for reasons similar to Claim 6 above.
Claims 2-5, 9-12, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Walker (US 20110093662 A1) in view of Bloomfield (US 20010036322 A1) and Chall (cited above), and further in view of Walker (US 20100312999).
Claim 2. Walker662 in view of Bloomfield and Chall does not disclose, but Walker999 discloses
wherein, to perform a first operational phase of the one or more operational phases, the compute component is configured to: obtain the operands from the array of memory cells; and store the operands in a latch of the compute component (e.g., 0038 - The compute buffer 126 may include one or more CBbytes 130, which may refer to a storage unit for each byte of information in the compute buffer 126. For example, the CBbyte 130 may be referred to as a CBbyte block, which may include a row or a chain of flops or latches).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the memory device with internal processor, as disclosed by Walker662, Bloomfield, and Chall, with Walker999, providing the benefit that the steps of writing, reading, buffering, executing instructions, and storing results may occur substantially simultaneously on different instructions, or different parts of an instruction; this parallel processing, referred to as "pipelining," may improve processing performance in the electronic system (see Walker999, 0006).
Claim 3. Walker662 discloses
wherein the compute component is configured to: perform the logical operation on the operands based at least in part on performing the first operational phase (e.g., 0032 - intermediate results may be stored in memory components such as the buffer 42 or memory registers coupled to the compute engine 38. In one or more embodiments, a compute engine 38 may access the buffer 42 for the intermediate results to perform subsequent operations).
Claim 4. Walker662 discloses
wherein, to perform a second operational phase of the one or more operational phases, the compute component is configured to: store a result of the logical operation in the (e.g., 0033 - compute buffer, which may store data (e.g., operands) and instructions, and an instruction buffer, which may store instructions. The buffer 42 may also include additional buffers, such as a data buffer or a simple buffer, which may provide denser storage, and may store intermediate or final results of executed instructions)
Walker662 in view of Bloomfield and Chall does not disclose, but Walker999 discloses
latch of the compute component (e.g., 0038 - The compute buffer 126 may include one or more CBbytes 130, which may refer to a storage unit for each byte of information in the compute buffer 126. For example, the CBbyte 130 may be referred to as a CBbyte block, which may include a row or a chain of flops or latches).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the memory device with internal processor, as disclosed by Walker662, Bloomfield, and Chall, with Walker999, providing the benefit that the steps of writing, reading, buffering, executing instructions, and storing results may occur substantially simultaneously on different instructions, or different parts of an instruction; this parallel processing, referred to as "pipelining," may improve processing performance in the electronic system (see Walker999, 0006).
Claim 5. Walker662 discloses
wherein the compute component is configured to: transfer the result from the latch of the compute component to the array of memory cells (e.g., 0035 - For example, in one embodiment as depicted in a portion of a memory device 52 in FIG. 3, multiple transfers between the memory array 36 and the buffer 42 may occur substantially simultaneously by coupling one or more buffers 42a-42d to one or more groups of memory cells, such as banks 54a-54d of the memory array 36).
Walker662 in view of Bloomfield and Chall does not disclose, but Walker999 discloses
latch of the compute component (e.g., 0038 - The compute buffer 126 may include one or more CBbytes 130, which may refer to a storage unit for each byte of information in the compute buffer 126. For example, the CBbyte 130 may be referred to as a CBbyte block, which may include a row or a chain of flops or latches).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the memory device with internal processor, as disclosed by Walker662, Bloomfield, and Chall, with Walker999, providing the benefit that the steps of writing, reading, buffering, executing instructions, and storing results may occur substantially simultaneously on different instructions, or different parts of an instruction; this parallel processing, referred to as "pipelining," may improve processing performance in the electronic system (see Walker999, 0006).
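For illustration of the operational phases recited in claims 2-5 only — obtaining operands from the array into a latch of the compute component, operating on the latched values, and transferring the result back to the array — the following is a minimal hypothetical sketch; the names are illustrative and are not drawn from any cited reference:

```python
# Minimal sketch of the two claimed operational phases: phase one latches
# operands read from the memory array inside the compute component; the
# logical operation runs on the latched values; phase two holds the result
# in the latch and transfers it back to the array.

class ComputeComponent:
    def __init__(self, array):
        self.array = array                # shared array of memory cells
        self.latch = []                   # latch internal to the compute component

    def phase_one(self, addrs):
        """First operational phase: obtain operands and store them in the latch."""
        self.latch = [self.array[a] for a in addrs]

    def operate(self, op):
        """Logical operation performed on the latched operands."""
        a, b = self.latch
        result = a ^ b if op == "xor" else a & b
        self.latch = [result]             # second phase: result stored in the latch
        return result

    def phase_two(self, dest):
        """Transfer the result from the latch back to the array of memory cells."""
        self.array[dest] = self.latch[0]

cells = [5, 3, 0]
cc = ComputeComponent(cells)
cc.phase_one([0, 1])
cc.operate("xor")        # 5 ^ 3 == 6, held in the latch
cc.phase_two(2)          # cells[2] now holds 6
```

The sketch separates the latch from the array to mirror the claim structure: the operands and the result pass through the latch of the compute component rather than any host-side buffer.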
Claim 9 is rejected for reasons similar to Claim 2 above.
Claim 10 is rejected for reasons similar to Claim 3 above.
Claim 11 is rejected for reasons similar to Claim 4 above.
Claim 12 is rejected for reasons similar to Claim 5 above.
Claim 16 is rejected for reasons similar to Claim 2 above.
Claim 17 is rejected for reasons similar to Claim 3 above.
Claim 18 is rejected for reasons similar to Claim 4 above.
Claim 19 is rejected for reasons similar to Claim 5 above.
Response to Arguments
Applicant's arguments filed 1/29/2026 have been fully considered but they are not persuasive.
For claims 1, 8, and 15, Applicant argues that the cited references do not disclose the amended limitations. The Office disagrees.
In the present Office action, the updated combination of references renders the amended limitations obvious.
Specifically, Walker662 in view of Bloomfield does not disclose, but Chall discloses
direct (e.g., Fig. 5, 0060 - LPM 550 is coupled to the CCP 540; Fig. 5, 0068 - CCP 540 of a device can read and write from the memory of the device, including erasing EEPROM of the device. In another embodiment, the CCP 540 can directly address the memory of neighboring devices… LPM 550 or bulk memory can only be accessed by the local CCP 540).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the memory device with internal processor, as disclosed by Walker662 in view of Bloomfield, with Chall, providing the benefit of addressing the performance limitation known in the art as the von Neumann bottleneck, caused by separating the processor and the memory on a motherboard (see Chall, 0007): delays in the processing of data caused by the respective data buses on the various motherboards are compounded, as service requests between processors are subject to delays on each motherboard on which a service request is processed (see Chall, 0008), and Chall therefore determines whether or not a service request can be processed locally (see Chall, 0024).
Applicant’s arguments for the dependent claims are based on their respective base independent claims 1, 8, and 15, which are addressed above.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GAUTAM SAIN whose telephone number is (571)270-3555. The examiner can normally be reached M-F 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jared Rutz can be reached at 571-272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GAUTAM SAIN/Primary Examiner, Art Unit 2135