Prosecution Insights
Last updated: April 19, 2026
Application No. 17/871,539

NEURAL NETWORK COMPUTING DEVICE AND COMPUTING METHOD THEREOF

Non-Final OA: §102, §103
Filed
Jul 22, 2022
Examiner
SANDIFER, MATTHEW D
Art Unit
2151
Tech Center
2100 — Computer Architecture & Software
Assignee
UPBEAT TECHNOLOGY Co., Ltd.
OA Round
1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (512 granted / 639 resolved; +25.1% vs TC avg), above average
Interview Lift: +25.2% for resolved cases with an interview vs. without
Typical Timeline: 3y 0m average prosecution; 10 applications currently pending
Career History: 649 total applications across all art units

Statute-Specific Performance

§101: 24.8% (-15.2% vs TC avg)
§103: 28.5% (-11.5% vs TC avg)
§102: 15.7% (-24.3% vs TC avg)
§112: 23.0% (-17.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 639 resolved cases.

Office Action

§102 §103
DETAILED ACTION

The instant application, Application No. 17/871,539, filed on 7/22/2022, is presented for examination by the examiner. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 7, 9-12 and 18-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tran et al. (US 2021/0209458).
As per Claim 1, Tran discloses a computing device, comprising: a flash memory array, for performing a matrix multiplying-and-accumulating computation (Abstract and Figures 9, 12 and Paragraphs 0019-0024, 0074-0075 and 0124-0125, a flash memory array is implemented as a vector-by-matrix multiplication (VMM) array in an artificial neural network); the flash memory array comprising: a plurality of word lines, a plurality of bit lines and a plurality of flash memory cells, being arranged in an array and respectively connected to the word lines and the bit lines, for receiving a plurality of input voltages via the word lines and outputting a plurality of output currents via the bit lines, the output currents of the flash memory cells connected to the same bit line of the bit lines are accumulated to obtain a total output current, wherein, each of the flash memory cells stores a weight value respectively, and each of the flash memory cells is operated with one of the input voltages and the weight value to obtain one of the output currents (Figure 12 and Paragraphs 0088-0089, 0097-0098, 0113-0118 and 0124-0125, weights are stored in the flash memory cells of the VMM array, the VMM array comprises an array of memory cells coupled to word lines W0-W3 and bit lines BL0-BLN, wherein input voltages on word lines W0-W3 are multiplied by the weights stored in the memory array to produce output currents on bit lines BL0-BLN, wherein the current on each bit line is the summed current from all memory cells connected to that particular bit line); each of the flash memory cells is an analog element, and each of the input voltages, each of the output currents and each of the weight values is an analog value (Paragraphs 0088-0089 and 0101, cell storage is analog and is continuous/analog programmed, and input voltage and output current are analog levels). 
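For orientation, the vector-by-matrix operation the examiner maps in the Claim 1 analysis above can be sketched numerically: each cell contributes a current equal to a word-line input voltage times a stored weight, and currents sharing a bit line sum. The voltages and weights below are illustrative assumptions, not values from Tran or the application:

```python
import numpy as np

# Toy model of the vector-by-matrix multiplication (VMM) described above.
# Rows correspond to word lines carrying input voltages; columns to bit lines.
input_voltages = np.array([0.2, 0.5, 0.1, 0.4])   # one per word line (illustrative)
weights = np.array([[0.9, 0.1, 0.3],               # 4 word lines x 3 bit lines
                    [0.2, 0.8, 0.5],
                    [0.7, 0.4, 0.6],
                    [0.1, 0.3, 0.2]])

cell_currents = input_voltages[:, None] * weights  # per-cell multiply (I = V * w)
total_bit_line_currents = cell_currents.sum(axis=0)  # accumulate per bit line

# The summed bit-line currents equal a plain matrix-vector product:
assert np.allclose(total_bit_line_currents, input_voltages @ weights)
print(total_bit_line_currents)
```

This is the sense in which the per-cell analog multiply plus bit-line current summation realizes the claimed multiplying-and-accumulating computation.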
As per Claim 2, Tran discloses the computing device of claim 1, wherein the flash memory cells operate in a triode region (Paragraphs 0116-0118, memory cells can operate in the linear region, i.e. the triode region).

As per Claim 3, Tran discloses the computing device of claim 1, wherein each of the flash memory cells comprises a transistor, a gate of the transistor is connected to a corresponding one of the word lines to apply a gate voltage, and the gate voltage corresponds to the input voltage received by the word line, and a drain of the transistor is connected to a corresponding one of the bit lines to output a drain current, and the drain current corresponds to the output current outputted by the bit line (Figures 2, 12 and Paragraphs 0074-0075 and 0124-0125, each flash memory cell comprises a source, drain, and floating gate, wherein the gate(s) of the transistors in Figure 12 are coupled to the word lines for receiving input voltage(s), and the drains (i.e. not the source lines) of the transistors are coupled to the bit lines for outputting the output current; see also Figure 2 of the attached reference by Yeh (US 5,029,130), which is incorporated by reference and discloses the type of flash memory shown in Figures 2 and 12 of Tran, wherein word lines are coupled to the gate(s) and bit lines are coupled to the drain(s) of the transistors).

As per Claim 7, Tran discloses the computing device of claim 1, further comprising a plurality of digital-to-analog converters, respectively connected to the word lines and performing digital-to-analog conversions on a plurality of digital input signals to obtain the input voltages received by the word lines (Paragraphs 0101-0102 and 0165, input voltages to the word lines require digital-to-analog conversion circuits at the input(s)).
As per Claim 9, Tran discloses the computing device of claim 1, further comprising a plurality of analog-to-digital converters, respectively connected to the bit lines, and performing analog-to-digital conversion on the total output currents accumulated by the bit lines to obtain a plurality of digital output signals (Figures 49A-B and Paragraphs 0161, 0166, 0216-0218 and 0220, analog-to-digital conversion circuits at the output of the bit lines, for example).

As per Claim 10, Tran discloses a computing method, for performing a matrix multiplying-and-accumulating computation by a flash memory array (Abstract and Figures 9, 12 and Paragraphs 0019-0024, 0074-0075 and 0124-0125, a flash memory array implements a vector-by-matrix multiplication (VMM) in an artificial neural network); the flash memory array comprises a plurality of word lines, a plurality of bit lines and a plurality of flash memory cells, the flash memory cells are respectively connected to the word lines and the bit lines, and the computing method comprises: respectively storing a weight value in each of the flash memory cells; receiving a plurality of input voltages via the word lines; performing a computation on one of the input voltages and the weight value by each of the flash memory cells to obtain an output current; outputting the output currents of the flash memory cells via the bit lines; and accumulating the output currents of the flash memory cells connected to the same bit line of the bit lines to obtain a total output current (Figure 12 and Paragraphs 0088-0089, 0097-0098, 0113-0118 and 0124-0125, weights are stored in the flash memory cells of the VMM array, the VMM array comprises an array of memory cells coupled to word lines W0-W3 and bit lines BL0-BLN, wherein input voltages on word lines W0-W3 are multiplied by the weights stored in the memory array to produce output currents on bit lines BL0-BLN, wherein the current on each bit line is the summed current from all memory cells connected to that particular bit line); wherein, each of the flash memory cells is an analog device, and each of the input voltages, each of the output currents and each of the weight values are analog values (Paragraphs 0088-0089 and 0101, cell storage is analog and is continuous/analog programmed, and input voltage and output current are analog levels).

As per Claim 11, Tran discloses the computing method of claim 10 further comprises: forming an input vector with the input voltages received by the word lines; forming an output vector with the total output currents obtained by accumulations on the bit lines; and forming a weight matrix with the weight values stored in the flash memory cells, wherein, the output vector is a matrix product of the input vector and the weight matrix (Abstract and Figures 9-10 and Paragraphs 0097-0098 and 0101, the VMM array performs vector-by-matrix multiplication wherein e.g. word line decoder 34 forms input vector I_ANAIN[0:N] of input voltages, the weight matrix is stored in the flash memory array, and the multiplication of the input vector and weight matrix generates output vector I_ARYO[0:K] and/or I_ARYOD[0:K] of output currents).
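Claims 7 and 9 discussed above place converters at the digital-analog boundary of this array. A minimal sketch of such a conversion chain, assuming uniform quantization with bit widths and full-scale ranges chosen purely for illustration (none of these parameters come from the references):

```python
import numpy as np

# Illustrative DAC/ADC pair around an analog MAC array.
# Resolutions and full-scale values are assumptions for the sketch.
def dac(codes, bits=4, v_full_scale=1.0):
    """Map digital input codes to word-line voltages (uniform, unipolar)."""
    return codes / (2**bits - 1) * v_full_scale

def adc(currents, bits=8, i_full_scale=1.0):
    """Quantize summed bit-line currents back to digital output codes."""
    codes = np.round(currents / i_full_scale * (2**bits - 1))
    return np.clip(codes, 0, 2**bits - 1).astype(int)

digital_in = np.array([3, 7, 15])            # 4-bit input codes
v_in = dac(digital_in)                       # analog word-line voltages
digital_out = adc(np.array([0.39, 0.58, 0.45]))  # digitized bit-line currents
print(v_in, digital_out)
```

The DAC resolution bounds how finely inputs can be driven, and the ADC resolution bounds how finely accumulated currents can be read back; both are design choices independent of the array itself.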
As per Claim 12, Tran discloses the computing method of claim 10, wherein each of the flash memory cells comprises a transistor, a gate of the transistor is connected to a corresponding one of the word lines and a drain of the transistor is connected to a corresponding one of the bit lines, the computing method further comprises: applying a gate voltage to the gate of the transistor via the corresponding one of the word lines, and the gate voltage corresponds to the input voltage received by the word line; and outputting a drain current from the drain of the transistor via the corresponding one of the bit lines, and the drain current corresponds to the output current outputted by the bit line (Figures 2, 12 and Paragraphs 0074-0075 and 0124-0125, each flash memory cell comprises a source, drain, and floating gate, wherein the gate(s) of the transistors in Figure 12 are coupled to the word lines for receiving input voltage(s), and the drains (i.e. not the source lines) of the transistors are coupled to the bit lines for outputting the output current; see also Figure 2 of the attached reference by Yeh (US 5,029,130), which is incorporated by reference and discloses the type of flash memory shown in Figures 2 and 12 of Tran, wherein word lines are coupled to the gate(s) and bit lines are coupled to the drain(s) of the transistors).

As per Claim 18, Tran discloses the computing method of claim 11, wherein before the step of receiving the input voltages via the word lines, the computing method further comprises: receiving a plurality of digital input signals; and performing digital-to-analog conversions on the digital input signals to obtain the input voltages corresponding to the word lines (Paragraphs 0101-0102 and 0165, input voltages to the word lines require digital-to-analog conversion circuits at the input(s)).
As per Claim 19, Tran discloses the computing method of claim 11, wherein after the step of accumulating the output currents to obtain the total output current, the computing method further comprises: performing analog-to-digital conversions on the total output currents to obtain a plurality of digital output signals; and outputting the digital output signals (Figures 49A-B and Paragraphs 0161, 0166, 0216-0218 and 0220, analog-to-digital conversion circuits at the output of the bit lines, for example).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4-6 and 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over Tran in view of Kashmiri et al. (US 2022/0027130).

As per Claim 4, Tran does not explicitly disclose the computing device of claim 3, wherein the transistor has an equivalent conductance value, and the equivalent conductance value corresponds to the weight value stored in the flash memory cell.
However, Kashmiri discloses implementing a vector-by-matrix multiplication with a flash memory array operating in the triode/linear region, wherein the transistor has an equivalent conductance value, and the equivalent conductance value corresponds to the weight value stored in the flash memory cell (Figures 46a, 46c and Paragraphs 0216-0217, weights are stored in the threshold voltage of the memory cell transistor(s), wherein the transistor's channel conductance Gij is determined by Gij = β(Vgs − VTH,ij), where Vgs is the transistor gate-source voltage, β is the transistor parameter proportional to the aspect ratio of its dimensions (width/length), charge carrier mobility, etc., and VTH,ij is the programmed threshold voltage through the floating or magnetic gate, which ultimately controls the weight conductance Gij).

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine the VMM flash memory array taught by Kashmiri with the VMM flash memory architectures of Tran because the time-domain interface circuits for such analog multiply-add networks have a much smaller area footprint and consume less power than conventional amplitude-domain interfaces (Kashmiri, Paragraph 0057).

As per Claim 5, Tran does not disclose the computing device of claim 4, wherein the transistor has a threshold voltage, and the equivalent conductance value is related to the threshold voltage. However, Kashmiri teaches the transistor has a threshold voltage, and the equivalent conductance value is related to the threshold voltage (Paragraphs 0216-0217, weights are stored in the threshold voltage, wherein the transistor's channel conductance Gij is determined by Gij = β(Vgs − VTH,ij), where VTH,ij is the programmed threshold voltage through the floating or magnetic gate, which ultimately controls the weight conductance Gij).
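The conductance relation Kashmiri is cited for can be illustrated with a short numerical sketch. The values of β, Vgs, and the programmed thresholds below are arbitrary assumptions, not taken from the reference; the point is only that programming each VTH,ij sets the effective weight conductance:

```python
import numpy as np

# Triode-region channel conductance per the cited relation:
#   G_ij = beta * (V_gs - V_TH_ij)
# beta and the voltages below are illustrative assumptions only.
beta = 2e-4          # device parameter (aspect ratio, mobility, ...), assumed
v_gs = 2.5           # common gate-source drive voltage (V), assumed

# Programmed threshold voltages encode the weights, one per cell:
v_th = np.array([[1.0, 1.5],
                 [2.0, 0.5]])

G = beta * (v_gs - v_th)   # lower programmed V_TH -> higher weight conductance
print(G)
```

In the triode region the drain current is then approximately I = G × Vds per cell, which is what lets a programmed threshold act as a stored multiplicative weight.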
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine the VMM flash memory array taught by Kashmiri with the VMM flash memory architectures of Tran because the time-domain interface circuits for such analog multiply-add networks have a much smaller area footprint and consume less power than conventional amplitude-domain interfaces (Kashmiri, Paragraph 0057).

As per Claim 6, Tran does not explicitly disclose the computing device of claim 5, wherein the transistor is a floating gate transistor and the threshold voltage is adjustable, and the weight value stored in the flash memory cell changes according to the threshold voltage. However, Kashmiri teaches the transistor is a floating gate transistor and the threshold voltage is adjustable, and the weight value stored in the flash memory cell changes according to the threshold voltage (Paragraphs 0216-0217, weights are stored in the threshold voltage, wherein the transistor's channel conductance Gij is determined by Gij = β(Vgs − VTH,ij), where VTH,ij is the programmed threshold voltage through the floating or magnetic gate, which ultimately controls the weight conductance Gij).

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine the VMM flash memory array taught by Kashmiri with the VMM flash memory architectures of Tran because the time-domain interface circuits for such analog multiply-add networks have a much smaller area footprint and consume less power than conventional amplitude-domain interfaces (Kashmiri, Paragraph 0057).

As per Claim 13, Tran does not explicitly disclose the computing method of claim 12, wherein the transistor has an equivalent conductance value, and the equivalent conductance value corresponds to the weight value stored in the flash memory cell.
However, Kashmiri discloses implementing a vector-by-matrix multiplication with a flash memory array operating in the triode/linear region, wherein the transistor has an equivalent conductance value, and the equivalent conductance value corresponds to the weight value stored in the flash memory cell (Figures 46a, 46c and Paragraphs 0216-0217, weights are stored in the threshold voltage of the memory cell transistor(s), wherein the transistor's channel conductance Gij is determined by Gij = β(Vgs − VTH,ij), where Vgs is the transistor gate-source voltage, β is the transistor parameter proportional to the aspect ratio of its dimensions (width/length), charge carrier mobility, etc., and VTH,ij is the programmed threshold voltage through the floating or magnetic gate, which ultimately controls the weight conductance Gij).

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine the VMM flash memory array taught by Kashmiri with the VMM flash memory architectures of Tran because the time-domain interface circuits for such analog multiply-add networks have a much smaller area footprint and consume less power than conventional amplitude-domain interfaces (Kashmiri, Paragraph 0057).

As per Claim 14, Tran discloses the computing method of claim 13, wherein each of the weight values is a multi-level weight value, and the multi-level weight value has at least 4 levels (Paragraph 0089, each memory cell can store one of many discrete values, e.g. 16 or 64 different values). Moreover, Kashmiri additionally discloses each of the weight values is a multi-level weight value, and the multi-level weight value has at least 4 levels (Paragraphs 0216 and 0220, floating gate flash memory provides multi-level weight storage, e.g. 4 conductance levels).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine the VMM flash memory array taught by Kashmiri with the VMM flash memory architectures of Tran because the time-domain interface circuits for such analog multiply-add networks have a much smaller area footprint and consume less power than conventional amplitude-domain interfaces (Kashmiri, Paragraph 0057).

As per Claim 15, Tran does not disclose the computing method of claim 14, wherein the transistor has a threshold voltage, and the equivalent conductance value is related to the threshold voltage. However, Kashmiri teaches the transistor has a threshold voltage, and the equivalent conductance value is related to the threshold voltage (Paragraphs 0216-0217, weights are stored in the threshold voltage, wherein the transistor's channel conductance Gij is determined by Gij = β(Vgs − VTH,ij), where VTH,ij is the programmed threshold voltage through the floating or magnetic gate, which ultimately controls the weight conductance Gij).

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine the VMM flash memory array taught by Kashmiri with the VMM flash memory architectures of Tran because the time-domain interface circuits for such analog multiply-add networks have a much smaller area footprint and consume less power than conventional amplitude-domain interfaces (Kashmiri, Paragraph 0057).

As per Claim 16, Tran does not disclose the computing method of claim 15, wherein the transistor is a floating gate transistor and the threshold voltage is adjustable, and the computing method further comprises: adjusting the threshold voltage to change the weight value stored in the flash memory cell.
However, Kashmiri teaches the transistor is a floating gate transistor and the threshold voltage is adjustable, and the weight value stored in the flash memory cell changes according to the threshold voltage (Paragraphs 0216-0217, weights are stored in the threshold voltage, wherein the transistor's channel conductance Gij is determined by Gij = β(Vgs − VTH,ij), where VTH,ij is the programmed threshold voltage through the floating or magnetic gate, which ultimately controls the weight conductance Gij).

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine the VMM flash memory array taught by Kashmiri with the VMM flash memory architectures of Tran because the time-domain interface circuits for such analog multiply-add networks have a much smaller area footprint and consume less power than conventional amplitude-domain interfaces (Kashmiri, Paragraph 0057).

Allowable Subject Matter

Claims 8, 17, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Kang et al. (US 11,309,026) discloses performing a convolution with a NOR flash storage array by applying input voltages to rows (i.e. word lines) of memory cells of the flash array, which stores a convolution kernel matrix, and collecting output currents from the columns (i.e. bit lines) to obtain the convolution operation result.

Hung et al. (US 11,132,176) similarly discloses an in-memory multiply-and-accumulate circuit based on a NOR flash array, in which input voltages are applied to word lines of the flash array, and output currents from each bit line are summed to complete a sum-of-products function.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW SANDIFER whose telephone number is (571) 270-5175. The examiner can normally be reached Mon-Fri 9:30am-6pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, James Trujillo, can be reached at (571) 272-3677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MATTHEW D SANDIFER/
Primary Examiner, Art Unit 2151

Prosecution Timeline

Jul 22, 2022
Application Filed
Feb 07, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596530: SYSTOLIC PARALLEL GALOIS HASH COMPUTING DEVICE (2y 5m to grant; granted Apr 07, 2026)
Patent 12585940: LEARNING STATIC BOUND MANAGEMENT PARAMETERS FOR ANALOG RESISTIVE PROCESSING UNIT SYSTEM (2y 5m to grant; granted Mar 24, 2026)
Patent 12585727: BIT MATRIX MULTIPLICATION (2y 5m to grant; granted Mar 24, 2026)
Patent 12578925: DYNAMIC ALGORITHM SELECTION (2y 5m to grant; granted Mar 17, 2026)
Patent 12561395: LOW LATENCY MATRIX MULTIPLY UNIT (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 99% (+25.2%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 639 resolved cases by this examiner. Grant probability derived from career allow rate.
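For the curious, the headline figures are consistent with applying the interview lift multiplicatively to the career allow rate and capping near certainty. Both the multiplicative form and the 99% cap below are this sketch's assumptions, not the tool's documented methodology:

```python
# Sketch of how the dashboard's figures could relate (assumed model, not the
# vendor's documented formula): a relative interview lift applied to the base
# grant probability, capped at 99%.
def grant_probability_with_interview(base: float, lift: float, cap: float = 0.99) -> float:
    return min(cap, base * (1.0 + lift))

base = 0.80   # career allow rate (512 / 639 resolved)
lift = 0.252  # reported interview lift

print(grant_probability_with_interview(base, lift))  # 0.99 under these assumptions
```

0.80 × 1.252 ≈ 1.00, so the cap binds and the displayed figure lands at 99%.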
