Prosecution Insights
Last updated: April 19, 2026
Application No. 18/882,332

FHE CHIP AND COMPUTING DEVICE

Status: Final Rejection (§103)
Filed: Sep 11, 2024
Examiner: TIV, BACKHEAN
Art Unit: 2459
Tech Center: 2400 — Computer Networks
Assignee: Alipay (Hangzhou) Information Technology Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 7m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 75% (670 granted / 891 resolved), +17.2% vs TC avg — above average
Interview Lift: +20.5% for resolved cases with interview — strong
Typical Timeline: 3y 7m avg prosecution; 18 applications currently pending
Career History: 909 total applications across all art units

Statute-Specific Performance

§101: 13.9% (-26.1% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 891 resolved cases.

Office Action

§103
Detailed Action

Claims 1-18 are pending in this application. This is a response to the Amendments/Remarks filed on 2/13/26. This is a Final Rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, 6, 8-12, 14, 15, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0269067 issued to Son et al. (Son), in view of FPGA Acceleration of Number Theoretic Transform by Ye et al., Dept. of Computer Science, USC, Oct. 4-6, 2021 (cited in IDS on 7/22/25), further in view of US 6,175,566 issued to Hahn et al. (Hahn).

As per claims 1 and 10, Son teaches a fully homomorphic encryption (FHE) chip ([0093]: The main processor 1100 may include one or more CPU cores 1110. In addition, the main processor 1100 may further include a controller 1120 for controlling the memories 1200a and 1200b or the storage devices 1300a and 1300b. In some example embodiments, the main processor 1100 may further include an accelerator 1130, a dedicated circuit for high-speed data operation such as artificial intelligence (AI) data operation or the like. The accelerator 1130 may include a graphics processing unit (GPU), a neural processing unit (NPU), or a data processing unit (DPU). The accelerator 1130 may be implemented as the homomorphic encryption operation accelerator described with reference to FIGS. 1 to 11. The accelerator 1130 may be implemented as a chip physically independent from the other components of the main processor 1100), wherein the FHE chip comprises a multistage interconnection network (MIN) and n processor elements (PEs), and n is an integer greater than 1 ([0073]: FIG. 8 is a diagram illustrating a homomorphic encryption operation accelerator according to some example embodiments. Referring to FIG. 8, the homomorphic encryption operation accelerator 90 may include a plurality of processing units PEs, a broadcast unit BrU, a plurality of horizontal crossbars xbar.sub.h, and a plurality of vertical crossbars xbar.sub.v.); the n PEs are configured to execute n operation tasks that belong to a ciphertext operation in parallel in a process of performing the ciphertext operation on target ciphertext by the FHE chip, wherein the target ciphertext is obtained by processing raw data based on an FHE algorithm (Abstract; [0005]: According to an example embodiment, there is provided a method of operating a homomorphic encryption operation accelerator, the method including performing a number theoretic transform (NTT) operation on each of first homomorphic ciphertext and second homomorphic ciphertext, and performing a base conversion operation by adding a partial sum using a first value of the NTT operation; [0030]: The homomorphic encryption device 11 may be implemented to convert plaintext into ciphertext or ciphertext into plaintext using a homomorphic encryption algorithm. In some example embodiments, the homomorphic encryption device 11 may be a user device. For example, the user device may be various electronic devices…; [0032]: The homomorphic encryption operation accelerator 12-1 may be implemented to efficiently parallelize a number theoretic transform (NTT) operation and a base conversion (BaseConv) operation, which occupy most of the time related to a homomorphic encryption operation. Here, the NTT operation may refer to transformation of data to simplify the complexity of polynomial multiplication of a homomorphic ciphertext. [0046]: In general, single instruction multiple data (SIMD) may be one of the schemes for parallel processing of operations. The SIMD may perform simultaneous operation on multiple data with one instruction. In the same manner as a multiplication process generating values that are accumulated in a process of multiplication and accumulation, the operations may be performed independently of each other. The SIMD may be a parallel processing scheme that is frequently used when the same operation is performed regardless of the data being computed. For the SIMD, operation accelerators may need to be arranged in parallel. In this case, each of the operation accelerators may be referred to as a SIMD lane. [0047]-[0050]).

However, Son does not explicitly teach that the MIN is configured to support a first PE in transmitting switching data to a second PE point-to-point, wherein the switching data belongs to an operation result generated by the first PE by executing an operation task, and the first PE and the second PE belong to the n PEs; this is nonetheless fairly suggested by Son, Fig. 8. Ye teaches that the MIN is configured to support a first PE in transmitting switching data to a second PE, wherein the switching data belongs to an operation result generated by the first PE by executing an operation task, and the first PE and the second PE belong to the n PEs (Figures 3-4 and Sections 4.2-4.3: Parallel input data is required to be permuted before being processed by the subsequent NTT cores, since each computation stage has a different stride (S). (...) Due to the fully unrolled design, each stage has a fixed permutation pattern and does not require dynamic reconfiguration. Our architecture employs two types of permutation modules, as shown in Fig. 3. Figure 4 shows the microarchitecture of the spatial and temporal sub-networks. As shown in Figure 4(a), the spatial permutation network is implemented using a Benes network [7]. A Benes network is a multi-stage routing network, with the first and last stage each having p/2 2×2 switches. In the middle, there are two p/2 × p/2 sub-networks, and each can be decomposed into the three-stage Benes network recursively. Compared to a naive crossbar interconnect, which requires O(p²) connections, each spatial permutation network has (p/2)·log p 2×2 switches. Thus, the streaming permutation network in our design asymptotically has lower complexity. Moreover, wiring length in the network does not change with permutation stride [9]. Each 2×2 switch has one control bit to route inputs to the upper or lower sub-networks respectively.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Son's teaching of a homomorphic encryption operation accelerator to include the teaching of Ye of sending an operation result generated by a first PE to a second PE in the MIN, in order to provide the predictable result of a homomorphic encryption operation accelerator that sends results from a first PE to a second PE in the MIN. One of ordinary skill in the art would have been motivated to combine the teachings in order to reduce homomorphic encryption operation time and to achieve low latency and high throughput (Ye, Introduction, pp. 1-2).

Son in view of Ye does not explicitly teach point-to-point data transmission; however, Ye does teach in Section 4.3 that the permutation network uses a Benes network. Hahn explicitly teaches the well-known point-to-point data transfer (col. 1, lines 52-53).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Son in view of Ye's teaching of a homomorphic encryption operation accelerator that sends results from a first PE to a second PE in the MIN to apply the point-to-point data transfer taught by Hahn, in order to provide the predictable result of a homomorphic encryption operation accelerator that sends results point-to-point from a first PE to a second PE in the MIN. One of ordinary skill in the art would have been motivated to combine the teachings in order to send data directly from one location to another.

As per claims 2 and 11, Son in view of Ye in view of Hahn teaches the FHE chip/computing device according to claims 1 and 10, wherein the MIN comprises m*n switching units classified into m stages, the n PEs are connected to n switching units belonging to a first stage in the MIN and are connected in a one-to-one correspondence to n switching units belonging to a second stage in the MIN, the first stage and the second stage are mutually a highest stage and a lowest stage in the m stages, and m is an integer greater than 1 (Son, Fig. 8; Ye, Figs. 3-4, Section 4.3: Parallel input data is required to be permuted before being processed by the subsequent NTT cores, since each computation stage has a different stride (S). (...) Due to the fully unrolled design, each stage has a fixed permutation pattern and does not require dynamic reconfiguration. Our architecture employs two types of permutation modules, as shown in Fig. 3. Figure 4 shows the microarchitecture of the spatial and temporal sub-networks. As shown in Figure 4(a), the spatial permutation network is implemented using a Benes network [7]. A Benes network is a multi-stage routing network, with the first and last stage each having p/2 2×2 switches. In the middle, there are two p/2 × p/2 sub-networks, and each can be decomposed into the three-stage Benes network recursively. Compared to a naive crossbar interconnect, which requires O(p²) connections, each spatial permutation network has (p/2)·log p 2×2 switches. Thus, the streaming permutation network in our design asymptotically has lower complexity. Moreover, wiring length in the network does not change with permutation stride [9]. Each 2×2 switch has one control bit to route inputs to the upper or lower sub-networks respectively.) Motivation to combine set forth in claims 1 and 10.

As per claims 3 and 12, Son in view of Ye in view of Hahn teaches the FHE chip/computing device according to claims 1 and 10, wherein the MIN comprises a Benes network (Ye, Section 4.3: As shown in Figure 4(a), the spatial permutation network is implemented using a Benes network [7]. A Benes network is a multi-stage routing network, with the first and last stage each having p/2 2×2 switches. In the middle, there are two p/2 × p/2 sub-networks, and each can be decomposed into the three-stage Benes network recursively.) Motivation to combine set forth in claims 1 and 10.

As per claims 5 and 14, Son in view of Ye in view of Hahn teaches the FHE chip/computing device according to claims 1 and 10, wherein each of the n PEs comprises at least two output ports and at least two input ports; and the MIN is configured to: receive a first data packet from a first output port in the first PE, wherein the first data packet comprises the switching data and a first input address of a first input port that is in the second PE and that is to be used to receive the first data packet; and transmit the first data packet to the first input port based on the first input address (Ye: Figure 4 shows the microarchitecture of the spatial and temporal sub-networks. As shown in Figure 4(a), the spatial permutation network is implemented using a Benes network [7]. A Benes network is a multi-stage routing network, with the first and last stage each having p/2 2×2 switches. In the middle, there are two p/2 × p/2 sub-networks, and each can be decomposed into the three-stage Benes network recursively. Compared to a naive crossbar interconnect, which requires O(p²) connections, each spatial permutation network has (p/2)·log p 2×2 switches. Thus, the streaming permutation network in our design asymptotically has lower complexity. Moreover, wiring length in the network does not change with permutation stride [9]. Each 2×2 switch has one control bit to route inputs to the upper or lower sub-networks respectively. Figure 4(b) illustrates the design of the temporal permutation network. It has p dual-port memory blocks and p address generation units (AGU). The AGU produces the control signals and addresses to the memory block it connects to. Each AGU issues memory read and write addresses independently, thereby achieving temporal permutation across data received in different cycles. As N data points stream through the interconnect with p per cycle, they are first permuted spatially by the first spatial permutation network, then the data are written into the p memory blocks. Finally, p data points with stride Si are read out per cycle and permuted again by the second spatial permutation network. Since the architecture parameters, N and p, are fixed at run-time, the configurations for all the 2×2 switches and the AGUs can be determined offline and remain valid as long as N and p don't change. We store this information in the FPGA's on-chip memory. More details about the routing algorithm can be found in [10].) Motivation to combine set forth in claims 1 and 10.
As per claims 6 and 15, Son in view of Ye in view of Hahn teaches the FHE chip/computing device according to claims 5 and 14, wherein the first data packet further comprises a first output address of the first output port; and the MIN is specifically configured to transmit the first data packet to the first input port based on the first output address and the first input address (Ye: Figure 4 shows the microarchitecture of the spatial and temporal sub-networks. As shown in Figure 4(a), the spatial permutation network is implemented using a Benes network [7]. A Benes network is a multi-stage routing network, with the first and last stage each having p/2 2×2 switches. In the middle, there are two p/2 × p/2 sub-networks, and each can be decomposed into the three-stage Benes network recursively. Compared to a naive crossbar interconnect, which requires O(p²) connections, each spatial permutation network has (p/2)·log p 2×2 switches. Thus, the streaming permutation network in our design asymptotically has lower complexity. Moreover, wiring length in the network does not change with permutation stride [9]. Each 2×2 switch has one control bit to route inputs to the upper or lower sub-networks respectively. Figure 4(b) illustrates the design of the temporal permutation network. It has p dual-port memory blocks and p address generation units (AGU). The AGU produces the control signals and addresses to the memory block it connects to. Each AGU issues memory read and write addresses independently, thereby achieving temporal permutation across data received in different cycles. As N data points stream through the interconnect with p per cycle, they are first permuted spatially by the first spatial permutation network, then the data are written into the p memory blocks. Finally, p data points with stride Si are read out per cycle and permuted again by the second spatial permutation network. Since the architecture parameters, N and p, are fixed at run-time, the configurations for all the 2×2 switches and the AGUs can be determined offline and remain valid as long as N and p don't change. We store this information in the FPGA's on-chip memory. More details about the routing algorithm can be found in [10].) Motivation to combine set forth in claims 1 and 10.

As per claims 8 and 17, Son in view of Ye in view of Hahn teaches the FHE chip/computing device according to claims 1 and 10, wherein the PE comprises an automorphism address generation unit (Auto AGU) configured to execute an automorphism operation task (Ye: Figure 4(b) illustrates the design of the temporal permutation network. It has p dual-port memory blocks and p address generation units (AGU). The AGU produces the control signals and addresses to the memory block it connects to. Each AGU issues memory read and write addresses independently, thereby achieving temporal permutation across data received in different cycles.) Motivation to combine set forth in claims 1 and 10.

As per claims 9 and 18, Son in view of Ye in view of Hahn teaches the FHE chip/computing device according to claims 1 and 10, wherein the PE comprises an arithmetic and logic unit (ALU) configured to execute an arithmetic operation task and/or a logic operation task (Son, [0108]: Any of the elements and/or functional blocks disclosed above may include or be implemented in processing circuitry such as hardware including logic circuits; a hardware/software combination such as a processor executing software; or a combination thereof. For example, the controller 1120, accelerator 1130, CTRL 1310a, 1310b, control logic 150, DRAM controller 5500, and homomorphic operation accelerator 5200 may be implemented as the processing circuitry.
The processing circuitry may more specifically include, but is not limited to, a central processing unit (CPU), an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a System-on-Chip (SoC), a programmable logic unit, a microprocessor, an application-specific integrated circuit (ASIC), etc. The processing circuitry may include electrical components such as at least one of transistors, resistors, capacitors, etc. The processing circuitry may include electrical components such as logic gates including at least one of AND gates, OR gates, NAND gates, NOT gates, etc.)

Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0269067 issued to Son et al. (Son), in view of FPGA Acceleration of Number Theoretic Transform by Ye et al., Dept. of Computer Science, USC, Oct. 4-6, 2021 (cited in IDS on 7/22/25), in view of US 6,175,566 issued to Hahn et al. (Hahn), further in view of US 2019/0156201 issued to Bichler et al. (Bichler).

As per claims 4 and 13, Son in view of Ye in view of Hahn teaches the FHE chip/computing device according to claims 1 and 10, but does not explicitly teach wherein the MIN comprises at least one of the following networks: an Omega network, a baseline network, and a butterfly network. Bichler explicitly teaches the well-known MIN comprising at least one of the following networks: an Omega network, a baseline network, and a butterfly network ([0009]: switching networks are structures for distribution of data and for parallel communication; multistage interconnection networks (MINs), such as butterfly networks, omega networks, baseline networks…).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Son in view of Ye in view of Hahn's homomorphic encryption operation accelerator using the MIN to include the teaching of Bichler of well-known MINs such as butterfly, omega, or baseline networks, in order to provide the predictable result of the MIN being a butterfly network, omega network, or baseline network. One of ordinary skill in the art would have been motivated to combine the teachings in order to provide compact and effective processing (Bichler, para. 13).

Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0269067 issued to Son et al. (Son), in view of FPGA Acceleration of Number Theoretic Transform by Ye et al., Dept. of Computer Science, USC, Oct. 4-6, 2021 (cited in IDS on 7/22/25), in view of US 6,175,566 issued to Hahn et al. (Hahn), further in view of US 2023/0171084 issued to Kwon et al. (Kwon).

As per claims 7 and 16, Son in view of Ye in view of Hahn teaches the FHE chip/computing device according to claims 1 and 10, wherein the PE executes a number-theoretic transform (NTT) operation task (Son, Abstract, paras. 32, 39), but does not explicitly teach a butterfly unit (BFU), which is taught by Kwon (Abstract, para. 77). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Son in view of Ye in view of Hahn's PE executing the NTT operation task to apply the teaching of Kwon of the well-known butterfly unit, in order to provide the predictable result of the PE comprising butterfly units to execute the NTT operation task. One of ordinary skill in the art would have been motivated to combine the teachings in order to reduce time complexity and to perform faster computation.

Response to Arguments

Applicant's arguments with respect to the rejections have been fully considered and are persuasive.
Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.

US 2024/0362343 issued to Na et al. teaches a homomorphic operation system that, according to an embodiment, includes a homomorphic encryption device configured to output first ciphertext data generated based on a first base; a homomorphic encryption server including a storage device storing a base conversion table configured to convert ciphertext data based on the first base into second ciphertext data based on a second base, and the first ciphertext data received from the homomorphic encryption device; and a homomorphic encryption operation device configured to perform a predetermined operation using the base conversion table on the first ciphertext data to convert the first ciphertext data into the second ciphertext data based on the second base.

US 2024/0396705 issued to Dimou et al. teaches a method for reducing calculation time for processing fully homomorphic encrypted (FHE) data comprising pre-calculating a second half of a key-switching key, storing the second half of the key-switching key in memory, and receiving FHE data. After the FHE data is received, the method determines a first half of the key-switching key by randomly generating it.

US 2023/0327849 issued to Mert et al. teaches generating, from a ciphertext corresponding to a polynomial having a first degree for performing a homomorphic encryption operation, split polynomials having a second degree by factorizing the polynomial, wherein the second degree is less than the first degree; generating partial operation results by performing an element-wise operation using the split polynomials; and generating a homomorphic encryption operation result corresponding to the ciphertext by joining the partial operation results.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BACKHEAN TIV, whose telephone number is (571) 272-5654. The examiner can normally be reached Mon.-Thurs., 5:30-3:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TONIA DOLLINGER, can be reached at (571) 272-4170. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BACKHEAN TIV/
Primary Examiner, Art Unit 2459
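The complexity comparison quoted from Ye above (a naive p×p crossbar needs O(p²) crosspoints, while each Benes-based spatial permutation network needs roughly (p/2)·log p 2×2 switches) can be sanity-checked numerically. The sketch below is illustrative only, not code from any cited reference; it uses the exact Benes switch count, (p/2)·(2·log2(p) - 1), which matches the simplified figure in the quote up to a constant factor.

```python
import math

def crossbar_crosspoints(p: int) -> int:
    # A full p x p crossbar dedicates one crosspoint to every
    # input/output pair: p * p connections, i.e. O(p^2).
    return p * p

def benes_switches(p: int) -> int:
    # A p-input Benes network (p a power of two) has
    # 2*log2(p) - 1 stages, each with p/2 2x2 switches.
    assert p >= 2 and p & (p - 1) == 0, "p must be a power of two"
    stages = 2 * int(math.log2(p)) - 1
    return stages * (p // 2)

for p in (8, 64, 256):
    print(f"p={p}: crossbar={crossbar_crosspoints(p)}, benes={benes_switches(p)}")
```

For p = 256, the crossbar needs 65,536 crosspoints versus 1,920 switches for the Benes network, which is the asymptotic advantage the excerpt describes.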

Prosecution Timeline

Sep 11, 2024: Application Filed
Nov 12, 2025: Non-Final Rejection (§103)
Feb 13, 2026: Response Filed
Mar 04, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603784: AUTHORIZATION OF STATES IN A STATEFUL SIGNATURE SCHEME (granted Apr 14, 2026; 2y 5m to grant)
Patent 12603873: DYNAMIC ONE-TIME USE KNOWLEDGE-BASED AUTHENTICATION VIA MULTI-SOURCED PRIVATE DATA USING ARTIFICIAL INTELLIGENCE TECHNIQUES (granted Apr 14, 2026; 2y 5m to grant)
Patent 12585793: SYSTEM AND METHOD CONFIGURED TO COMMISSION AND DECOMMISSION ENDPOINT DEVICES USING STEGANOGRAPHY (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585734: 3-D PROSTHETIC OR OBJECT MODEL FILE SECURE ENCAPSULATION IN A NON-DISTRIBUTABLE IMAGE RENDERING FILE FORMAT (granted Mar 24, 2026; 2y 5m to grant)
Patent 12587566: Detecting Suspicious Entities (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 75%
With Interview: 96% (+20.5%)
Median Time to Grant: 3y 7m
PTA Risk: Moderate
Based on 891 resolved cases by this examiner. Grant probability derived from career allow rate.
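The projection figures above follow directly from the career data; the sketch below reproduces the arithmetic. The additive interview lift and the rounding to whole percentages are assumptions about how the dashboard derives its numbers, not a documented model.

```python
granted, resolved = 670, 891          # examiner career totals shown above
interview_lift = 0.205                # observed lift for cases with interview

base = granted / resolved             # career allow rate, used as grant probability
with_interview = min(base + interview_lift, 1.0)  # assumed additive model, capped at 100%

print(f"base grant probability: {base:.0%}")            # 75%
print(f"with interview:         {with_interview:.0%}")  # 96%
```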
