Prosecution Insights
Last updated: April 19, 2026
Application No. 18/204,069

SYSTEMS, APPARATUSES, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR MACHINE LEARNING WITH A LONG SHORT-TERM MEMORY ACCELERATOR

Non-Final OA: §102, §103, §112
Filed: May 31, 2023
Examiner: COULSON, JESSE CHEN
Art Unit: 2122
Tech Center: 2100 — Computer Architecture & Software
Assignee: STMicroelectronics
OA Round: 1 (Non-Final)
Grant Probability: 25% (At Risk)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 25% (1 granted / 4 resolved; -30.0% vs TC avg)
Interview Lift: strong, +100.0% among resolved cases with interview
Avg Prosecution (typical timeline): 3y 3m
Total Applications: 37 across all art units (33 currently pending)

Statute-Specific Performance

§101: 30.6% (-9.4% vs TC avg)
§103: 29.8% (-10.2% vs TC avg)
§102: 22.6% (-17.4% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 4 resolved cases

Office Action

Rejections under §102, §103, and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the application filed on 5/31/2023. Claims 1-20 are pending and have been examined.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/28/2023 is in compliance with the provisions of 37 CFR 1.97, 1.98, and MPEP § 609. It has been placed in the application file, and the information referred to therein has been considered as to the merits.

Claim Objections

Claims 9 and 19 are objected to because of the following informalities: three lines from the bottom, “at least one phase associated with each of the at sensor signals” should read “at least one phase associated with each of the sensor signals”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 7 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding Claim 7: Claim 7 recites the limitation "The method of claim 1". There is insufficient antecedent basis for this limitation in the claim.
There is no method in Claim 1. For the purposes of examination on the merits of Claim 7, Claim 7 is being treated as reciting “The system of claim 1”.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-8 and 11-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Li (US Patent Application No. US 20220414443 A1), hereinafter “Li”.

Regarding Claim 1, Li teaches: A system comprising: a long short-term memory (LSTM) accelerator (paragraph 19, “FIG. 9 depicts a mixed signal procession unit 904 configured to support LSTM processing”; Fig. 9, Accelerator 902) comprising: a finite state machine (FSM) configured with a plurality of states comprising a machine learning algorithm (hardware sequencer is FSM; finite states are blocks controlled by hardware sequencer; Figure 6, HARDWARE SEQ 606 controls CIM FSM, CIM ARRAY, DPP, and NON-LINEAR OP blocks; paragraph 108, “The various flows and processing of aspects of MSPU 604 may be directed in whole or part by hardware sequencer 606 based on instructions stored in sequencer memory 608”); a weight memory configured to at least store a plurality of weights and a plurality of biases (weight memory is the CIM array for weights and the activation buffer for biases; paragraph 100, “task input data, such as machine learning model task data, which may include model data (e.g., weights, biases, and other parameters)… The task input data may be initially stored in activation buffer 628 (e.g., an L2 buffer)”; paragraph 103, “weight data may be written to, for example, the columns of CIM array 616”); one or more activation registers (paragraph 79, “Generally, nonlinear operation block 420 may be configured for operation by coefficients stored in hardware registers”); a hidden state memory (paragraph 136, “The hidden state vector h.sub.t may be provided to the activation buffer 928”); and a plurality of processing elements (paragraph 68, “accelerator 402 comprises a plurality of signal processing units (SPUs), including a mixed signal processing unit (MSPU) 404A”); at least one processor and at least one memory coupled to the processor, wherein the processor (paragraph 205, “Processing system 1700 includes a central processing unit (CPU) 1702, which in some examples may be a multi-core CPU. Instructions executed at the CPU 1702 may be loaded, for example, from a program memory associated with the CPU 1702”) is configured to: apply the machine learning algorithm of the FSM (paragraph 70, “hardware sequencer 406 that is configured to control the sequence of operations of the computational components of MSPU 404A based on instructions stored in sequencer memory”), wherein the machine learning algorithm is configured to: perform a plurality of operations with the plurality of processing elements including one or more matrix-vector multiplication operations (paragraph 104, “CIM array 616 then processes the layer input data and generates analog domain output, which is provided to digital post processing (DPP) block 618”; paragraph 131, “CIM array 916, which comprises a plurality of sub-arrays for different weight matrices (W.sub.0, W.sub.i, W.sub.f, W.sub.c)”; the vector of input data is multiplied by four weight matrices, showing four matrix-vector multiplication operations), vector-vector multiplication operations, vector-vector addition operations (Figure 9 shows three vectors, f.sub.t, i.sub.t, and o.sub.t, as input to Element-Wise MAC 932, which then does vector-vector multiplication and addition; paragraph 134, “The outputs of nonlinear operation blocks 920A and 920B may be provided to element-wise multiply and accumulate (MAC) block 932 for element-wise vector-vector multiplication and addition”), and non-linear activation operations (paragraph 132, “nonlinear operation block 920A may be configured to perform a Sigmoid function and to output the forget gate activation vector, f.sub.t, input/update gate activation vector, i.sub.t, and an output gate activation vector, o.sub.t”); and wherein at least one non-linear activation operation comprises receiving at least one input and negating at least one negative input (ReLU receives an input and negates negative input; paragraph 30, “nonlinear operations may include… rectified linear unit (ReLU)”; Figure 9, 920A and 920B perform nonlinear operations on an input).

Regarding Claim 2, Li teaches the system as referenced above in Claim 1. Li further teaches: wherein the weight memory comprises a look up table (the CIM array and activation buffer act as a look up table, where the CIM array stores pre-loaded weight matrices and the activation buffer contains biases, which are used with the weights during computation; paragraph 131, “MSPU 904 includes a CIM array 916, which comprises a plurality of sub-arrays for different weight matrices (W.sub.0, W.sub.i, W.sub.f, W.sub.c)”; paragraph 100, “biases… may be initially stored in activation buffer”; paragraph 176, “add bias to the output”).

Regarding Claim 3, Li teaches the system as referenced above in Claim 2.
Li further teaches: wherein the look up table of the weight memory is partitioned into a plurality of portions, including at least a first portion associated with a forget gate of the FSM, a second portion associated with an input gate of the FSM, a third portion associated with a cell gate of the FSM, and a fourth portion associated with an output gate of the FSM (the hardware sequencer (FSM) controls the MSPU, including the CIM array; paragraph 131, “MSPU 904 includes a CIM array 916, which comprises a plurality of sub-arrays for different weight matrices (W.sub.0, W.sub.i, W.sub.f, W.sub.c)”; paragraph 127, “A common LSTM unit is composed of a cell, an input gate, an output gate and a forget gate”).

Regarding Claim 4, Li teaches the system as referenced above in Claim 3. Li further teaches: wherein the first portion associated with a forget gate of the FSM stores a plurality of weights and a plurality of biases associated with the forget gate; wherein the second portion associated with an input gate of the FSM stores a plurality of weights and a plurality of biases associated with the input gate; wherein the third portion associated with a cell gate of the FSM stores a plurality of weights and a plurality of biases associated with the cell gate; and wherein the fourth portion associated with the output gate of the FSM stores a plurality of weights and a plurality of biases associated with the output gate (weights and biases correspond to specific portions of the CIM array and activation buffer; paragraph 131, “MSPU 904 includes a CIM array 916, which comprises a plurality of sub-arrays for different weight matrices (W.sub.0, W.sub.i, W.sub.f, W.sub.c)”; paragraph 100, “biases… may be initially stored in activation buffer”; paragraph 176, “add bias to the output”).

Regarding Claim 5, Li teaches the system as referenced above in Claim 3. Li further teaches: wherein the first portion associated with a forget gate of the FSM is pre-allocated, the second portion associated with an input gate of the FSM is pre-allocated, the third portion associated with a cell gate of the FSM is pre-allocated, and the fourth portion associated with the output gate of the FSM is pre-allocated (weight data in the CIM array is allocated before further operations by the DPP block and non-linear OP block and is therefore pre-allocated; Figure 6; paragraph 104, “CIM array 616 then processes the layer input data and generates analog domain output, which is provided to digital post processing (DPP) block 618”).

Regarding Claim 6, Li teaches the system as referenced above in Claim 1. Li further teaches: wherein at least one non-linear activation operation includes a tanh operation (paragraph 133, “nonlinear operation block 920B may be configured to perform a hyperbolic tangent function”).

Regarding Claim 7, Li teaches the system as referenced above in Claim 1. Li further teaches: wherein at least one non-linear activation operation includes a sigmoid operation (paragraph 132, “nonlinear operation block 920A may be configured to perform a Sigmoid function”).

Regarding Claim 8, Li teaches the system as referenced above in Claim 1.
Li further teaches: wherein the one or more matrix-vector multiplication operations, vector-vector multiplication operations, vector-vector addition operations, and non-linear activation operations include: at least four matrix-vector multiplication operations (paragraph 104, “CIM array 616 then processes the layer input data and generates analog domain output, which is provided to digital post processing (DPP) block 618”; paragraph 131, “CIM array 916, which comprises a plurality of sub-arrays for different weight matrices (W.sub.0, W.sub.i, W.sub.f, W.sub.c)”; the vector of input data is multiplied by four weight matrices, showing four matrix-vector multiplication operations); at least three vector-vector multiplication operations; and at least one vector-vector addition operation (Figure 9 shows three vectors, f.sub.t, i.sub.t, and o.sub.t, as input to Element-Wise MAC 932, which then does vector-vector multiplication and addition; paragraph 134, “The outputs of nonlinear operation blocks 920A and 920B may be provided to element-wise multiply and accumulate (MAC) block 932 for element-wise vector-vector multiplication and addition”); and at least one non-linear activation (paragraph 132, “nonlinear operation block 920A may be configured to perform a Sigmoid function and to output the forget gate activation vector, f.sub.t, input/update gate activation vector, i.sub.t, and an output gate activation vector, o.sub.t”).

Regarding Claim 11, Li teaches: A method comprising: providing a long short-term memory (LSTM) accelerator (paragraph 19, “FIG. 9 depicts a mixed signal procession unit 904 configured to support LSTM processing”; Fig. 9, Accelerator 902) comprising: a finite state machine (FSM) configured with a plurality of states comprising a machine learning algorithm (hardware sequencer is FSM; finite states are blocks controlled by hardware sequencer; Figure 6, HARDWARE SEQ 606 controls CIM FSM, CIM ARRAY, DPP, and NON-LINEAR OP blocks; paragraph 108, “The various flows and processing of aspects of MSPU 604 may be directed in whole or part by hardware sequencer 606 based on instructions stored in sequencer memory 608”); a weight memory configured to at least store a plurality of weights and a plurality of biases (weight memory is the CIM array for weights and the activation buffer for biases; paragraph 100, “task input data, such as machine learning model task data, which may include model data (e.g., weights, biases, and other parameters)… The task input data may be initially stored in activation buffer 628 (e.g., an L2 buffer)”; paragraph 103, “weight data may be written to, for example, the columns of CIM array 616”); one or more activation registers (paragraph 79, “Generally, nonlinear operation block 420 may be configured for operation by coefficients stored in hardware registers”); a hidden state memory (paragraph 136, “The hidden state vector h.sub.t may be provided to the activation buffer 928”); and a plurality of processing elements (paragraph 68, “accelerator 402 comprises a plurality of signal processing units (SPUs), including a mixed signal processing unit (MSPU) 404A”); at least one processor and at least one memory coupled to the processor, wherein the processor (paragraph 205, “Processing system 1700 includes a central processing unit (CPU) 1702, which in some examples may be a multi-core CPU. Instructions executed at the CPU 1702 may be loaded, for example, from a program memory associated with the CPU 1702”) is configured to: apply the machine learning algorithm of the FSM (paragraph 70, “hardware sequencer 406 that is configured to control the sequence of operations of the computational components of MSPU 404A based on instructions stored in sequencer memory”), wherein the machine learning algorithm is configured to: perform a plurality of operations with the plurality of processing elements including one or more matrix-vector multiplication operations (paragraph 104, “CIM array 616 then processes the layer input data and generates analog domain output, which is provided to digital post processing (DPP) block 618”; paragraph 131, “CIM array 916, which comprises a plurality of sub-arrays for different weight matrices (W.sub.0, W.sub.i, W.sub.f, W.sub.c)”; the vector of input data is multiplied by four weight matrices, showing four matrix-vector multiplication operations), vector-vector multiplication operations, vector-vector addition operations (Figure 9 shows three vectors, f.sub.t, i.sub.t, and o.sub.t, as input to Element-Wise MAC 932, which then does vector-vector multiplication and addition; paragraph 134, “The outputs of nonlinear operation blocks 920A and 920B may be provided to element-wise multiply and accumulate (MAC) block 932 for element-wise vector-vector multiplication and addition”), and non-linear activation operations (paragraph 132, “nonlinear operation block 920A may be configured to perform a Sigmoid function and to output the forget gate activation vector, f.sub.t, input/update gate activation vector, i.sub.t, and an output gate activation vector, o.sub.t”); and wherein at least one non-linear activation operation comprises receiving at least one input and negating at least one negative input (ReLU receives an input and negates negative input; paragraph 30, “nonlinear operations may include… rectified linear unit (ReLU)”; Figure 9, 920A and 920B perform nonlinear operations on an input).

Regarding Claim 12, the rejection of Claim 11 is incorporated and, further, the claim is rejected for the same reasons as set forth in Claim 2.

Regarding Claim 13, the rejection of Claim 12 is incorporated and, further, the claim is rejected for the same reasons as set forth in Claim 3.

Regarding Claim 14, the rejection of Claim 13 is incorporated and, further, the claim is rejected for the same reasons as set forth in Claim 4.

Regarding Claim 15, the rejection of Claim 13 is incorporated and, further, the claim is rejected for the same reasons as set forth in Claim 5.

Regarding Claim 16, the rejection of Claim 11 is incorporated and, further, the claim is rejected for the same reasons as set forth in Claim 6.

Regarding Claim 17, the rejection of Claim 11 is incorporated and, further, the claim is rejected for the same reasons as set forth in Claim 7.

Regarding Claim 18, the rejection of Claim 11 is incorporated and, further, the claim is rejected for the same reasons as set forth in Claim 8.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Ofodile et al., “Action Recognition Using Single-Pixel Time-of-Flight Detection”, hereinafter “Ofodile”.

Regarding Claim 9, Li teaches the system as referenced above in Claim 1. Li further teaches: … with the machine learning algorithm of the FSM of the LSTM accelerator… (Li, paragraph 108, “The various flows and processing of aspects of MSPU 604 may be directed in whole or part by hardware sequencer 606 based on instructions stored in sequencer memory 608”; paragraph 19, “FIG. 9 depicts a mixed signal procession unit 904 configured to support LSTM processing”; Fig. 9, Accelerator 902).

Li does not expressly teach: a laser; and at least one photodetector; and wherein the processor is further configured to: transmit a sensor pulses with the laser; generate sensor signals and timestamps based on one or more reflections received by the at least one photodetector, wherein the reflections are associated with the one or more sensor pulses; generate… at least one phase associated with each of the at sensor signals and timestamps; and determine a distance to an object based on the at least one phase.

However, Ofodile teaches: a laser (Ofodile, p. 5, paragraph 2, “The scene was illuminated by Fianium supercontinuum laser source”); and at least one photodetector (Ofodile, p. 5, paragraph 2, “The reflected light from the scene was collected by a Hamamatsu R10467U-06 hybrid photodetector (HPD)”); and wherein the processor is further configured to: transmit a sensor pulses with the laser (1 MHz is the pulse rate; Ofodile, p. 5, paragraph 2, “Fianium supercontinuum laser source (SC400-2-PP) working at 1 MHz rate”); generate sensor signals and timestamps based on one or more reflections received by the at least one photodetector, wherein the reflections are associated with the one or more sensor pulses (Ofodile, p. 5, paragraph 2, “The scene was illuminated by Fianium supercontinuum laser source (SC400-2-PP) working at 1 MHz rate. The scatterer ensured that the whole scene was illuminated at once, without any scanning or other moving parts required. The reflected light from the scene was collected by a Hamamatsu R10467U-06 hybrid photodetector (HPD)”); generate… at least one phase associated with each of the at sensor signals and timestamps (generated representations of temporal data for each class from applying machine learning algorithms are phases; Ofodile, p. 4, paragraph 6, “The recorded 1D time series containing temporal evolution of back scattered light (timestamped detected photon amplitudes) is enough to recognise human actions when interpreted using machine learning algorithms”; p. 10, paragraph 1, “Recurrent nets are able to model multivariate time-series—in our case, time-of-flight measurements—and output a class prediction by considering the whole temporal sequence”); and determine a distance to an object based on the at least one phase (each action class representation is generated from temporal data that includes distance data; therefore, the ML processing of the representations determines a distance; Ofodile, p. 2, paragraph 4, “Information about both the distance to the object and its shape are embedded in the traces”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Li’s LSTM accelerator as the hardware implementation of an LSTM in Ofodile.
The motivation to do so would be to implement an efficient RNN machine learning model for the machine learning task (Li, paragraph 29, “Aspects of the present disclosure provide compute in memory-based architectures for supporting advanced machine learning architectures… provide dynamically configurable and flexible machine learning/artificial intelligence accelerators based on compute-in-memory (CIM) processing capabilities”; paragraph 31, “implement various machine learning architectures and their related processing functions, including convolutional neural network, recurrent neural networks, recursive neural networks, long short-term memory (LSTM) and gated recurrent unit (GRU)-based neural networks… The CIM-based machine learning model accelerators described herein can perform bitwise operations extremely efficiently”).

Regarding Claim 19, the rejection of Claim 11 is incorporated and, further, the claim is rejected for the same reasons as set forth in Claim 9.

Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Ofodile, further in view of Kuttner et al., “Highly Sensitive Indirect Time-of-Flight Distance Sensor With Integrated Single-Photon Avalanche Diode in 0.35 μm CMOS”, hereinafter “Kuttner”.

Regarding Claim 10, Li in view of Ofodile teaches the system as referenced above in Claim 9. Li in view of Ofodile does not teach; however, Kuttner teaches: wherein the at least one photodetector includes at least one single-photon avalanche diode (Kuttner, p. 1, Abstract, “An integrated single-photon avalanche diode (SPAD) with an active diameter of 38 μm is used as detector”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a single-photon avalanche diode as a photodetector, as Kuttner does, in the invention of Ofodile. The motivation to do so would be to reduce the required amount of optical power (Kuttner, p. 1, col. 1, paragraph 1, “Using single-photon avalanche diodes (SPADs) allows to considerably reduce the required amount of optical power due to their high sensitivity”).

Regarding Claim 20, the rejection of Claim 19 is incorporated and, further, the claim is rejected for the same reasons as set forth in Claim 9.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSE CHEN COULSON, whose telephone number is (571) 272-4716. The examiner can normally be reached Monday-Friday, 8:30-5:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki, can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JESSE C COULSON/
Examiner, Art Unit 2122

/KAKALI CHAKI/
Supervisory Patent Examiner, Art Unit 2122
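The claim mappings above repeatedly invoke the standard LSTM cell update: four matrix-vector multiplications (one per gate weight matrix, W_f, W_i, W_c, W_o), element-wise vector-vector multiplications and an addition, and sigmoid/tanh activations, matching the operation counts recited in Claims 1 and 8. As a reference, here is a minimal NumPy sketch of one LSTM time step; it is illustrative only (variable names, shapes, and the concatenated-input convention are assumptions, not taken from the application or from Li):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step.

    W and b are dicts keyed by gate: 'f' (forget), 'i' (input/update),
    'c' (cell candidate), 'o' (output). Each W[g] multiplies the
    concatenated [input, previous hidden state] vector, so the step
    uses exactly four matrix-vector multiplications.
    """
    z = np.concatenate([x, h_prev])
    f = sigmoid(W["f"] @ z + b["f"])   # forget gate activation
    i = sigmoid(W["i"] @ z + b["i"])   # input/update gate activation
    o = sigmoid(W["o"] @ z + b["o"])   # output gate activation
    g = np.tanh(W["c"] @ z + b["c"])   # candidate cell state
    # Three element-wise vector-vector multiplications, one vector addition
    c = f * c_prev + i * g             # new cell state
    h = o * np.tanh(c)                 # new hidden state
    return h, c
```

In a hardware accelerator of the kind described, the four W[g] products would map to the weight-memory portions associated with each gate, and the element-wise lines to an element-wise MAC block.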

Prosecution Timeline

May 31, 2023
Application Filed
Jan 23, 2026
Non-Final Rejection — §102, §103, §112 (current)

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 25%
With Interview: 99% (+100.0%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
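The note above states that the grant probability is simply the examiner's career allow rate. A sketch of that arithmetic, using the counts shown on this page (the Tech Center average is inferred from the stated -30.0% delta; how the 99% with-interview figure is derived is not disclosed here):

```python
# Hypothetical reconstruction of the dashboard's headline figures.
# granted/resolved come from the page; tc_avg is inferred, not stated.
granted, resolved = 1, 4
allow_rate = granted / resolved        # 0.25 -> "25% Grant Probability"
tc_avg = 0.55                          # implied by the -30.0% delta
delta_vs_tc = allow_rate - tc_avg      # -0.30 -> "-30.0% vs TC avg"
```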
