Prosecution Insights
Last updated: April 18, 2026
Application No. 18/014,703

NEURAL NETWORK-BASED COMMUNICATION METHOD AND DEVICE

Final Rejection — §102, §103, §112
Filed: Jan 05, 2023
Examiner: KWON, JUN
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: LG Electronics Inc.
OA Round: 2 (Final)
Grant Probability: 38% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 3m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 38% (26 granted / 68 resolved; -16.8% vs TC avg)
Interview Lift: +46.2% (allow rate across resolved cases with vs. without an interview)
Avg Prosecution: 4y 3m (34 applications currently pending)
Total Applications: 102 (career history, across all art units)

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§103: 41.4% (+1.4% vs TC avg)
§102: 7.6% (-32.4% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 68 resolved cases.
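The figures above and in the Examiner Intelligence card are simple derived quantities. A minimal Python sketch of the presumed arithmetic; the Tech Center average is back-solved from the stated -16.8% gap, the interview lift is read as percentage points, and all variable names are ours:

    granted, resolved = 26, 68              # "26 granted / 68 resolved"
    allow_rate = granted / resolved         # 0.382 -> displayed as 38%

    # Only the gap vs. the Tech Center average is reported, so the average
    # itself is back-solved here: 38.2% - (-16.8 pts) = 55.0% (estimate).
    delta_vs_tc = -0.168
    tc_avg = allow_rate - delta_vs_tc

    # "+46.2% Interview Lift" reads as percentage points, since
    # 38.2% + 46.2 pts = 84.4%, displayed as the 84% with-interview figure.
    with_interview = allow_rate + 0.462

    print(f"allow rate {allow_rate:.1%}, implied TC avg {tc_avg:.1%}, "
          f"with interview {with_interview:.1%}")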

Office Action

§102 §103 §112
Detailed Action

This Office Action is in response to the remarks entered on 03/18/2026. Claims 1-5 and 7-15 are currently pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-5 and 7-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites "wherein, based on a difference between weights applied to an input data sequence input to the first neural network encoder being less than or equal to a specific value, the input data sequence is passed through the accumulator to make the difference between the weights greater than the specific value." It is unclear what constitutes "a difference between weights applied to an input data sequence"; the claim fails to point out how the "difference between weights" is calculated. Does it mean a difference between a weight before the interleaver and a weight after the interleaver (for example, in Fig. 37, difference = v1 - v2), or a difference between a first input weight and a following input weight? For purposes of examination, the examiner interprets the limitation to mean that the difference between weights is the difference between the previous signal and the current signal.

Claims 3-5 and 7-14 depend from independent claim 1 and therefore inherit the same deficiency. Claim 15 is an apparatus claim that implements the same features as method claim 1 and is rejected for at least the same reasons.
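To make the examiner's working interpretation concrete, the disputed limitation can be modeled in a short Python sketch. This follows the examiner's stated reading (the difference between the previous signal and the current signal), not the applicant's disputed definition; the accumulator is modeled as a running mod-2 sum (1/(1+D)), a common reading of "accumulator" in channel coding, and every name below is ours:

    from itertools import accumulate
    from operator import xor

    def consecutive_diffs(seq):
        # Examiner's reading: difference between the previous and current signal.
        return [abs(a - b) for a, b in zip(seq, seq[1:])]

    def route_through_accumulator(bits, specific_value):
        # Claim language under that reading: if the difference is less than or
        # equal to the specific value, pass the sequence through the
        # accumulator (a cumulative XOR), which per the claim is intended to
        # make the difference exceed the specific value.
        if min(consecutive_diffs(bits)) <= specific_value:
            return list(accumulate(bits, xor))
        return bits

    print(route_through_accumulator([1, 1, 0, 0, 1], 0))  # -> [1, 0, 0, 0, 1]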
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-5, 7, and 9-15 are rejected under 35 U.S.C. 103 as being unpatentable over Zeng et al. (US 20210273706 A1, hereinafter "Zeng") in view of Sattiraju et al. ("Performance Analysis of Deep Learning based on Recurrent Neural Networks for Channel Coding", 2018, hereinafter "Sattiraju"), in view of Aragao et al. ("A Mechanism to Control the Congestion in Machine-to-Machine Communication in LTE-A Networks", 2017, hereinafter "Aragao"), and further in view of Yuan et al. ("Combined Turbo Codes and Interleaver Design", 1999, hereinafter "Yuan").

Regarding Claim 1, Zeng teaches:

A method comprising: ([Zeng, 0092; Fig. 8] discloses that the UE comprises CSI Encoder 804, CSI Decoder 802, Encoder/Decoder parameters 806, and a parameter generation 805. [Zeng, 0059 and 0088] collectively disclose that the CSI encoder and the CSI decoder are implemented using neural networks.)

transmitting, by a user equipment (UE) including a first neural network encoder and a first neural network decoder ([Zeng, 0003-0005; 0044-0045; 0050] collectively disclose the UE transmitting CSI to the base station (uplink) and the base station communicating with the UE on the downlink.)

receiving, by the UE, a random access response from the base station; ([Zeng, 0064] discloses that the CSI encoder 311 is implemented by base stations 105 and UEs 115. [Zeng, 0072] discloses that the payload portion may be encoded by the CSI encoder 311 and includes random access procedure messages. [Zeng, 0067-0068] discloses the first network node (base station 105) providing new CSIRS-based CSI encoder parameters (first parameter) to the second neural network node (UE 115) for use in channel compression, and the first network node sending the decoder parameter (second parameter) to the second network node.)

performing, by the UE, an uplink transmission to the base station ([Zeng, 0003-0005; 0044-0045; 0050] collectively disclose the UE transmitting CSI to the base station (uplink) and the base station communicating with the UE on the downlink.)

receiving, by the UE, information from a base station, wherein the information includes a first parameter related to the first neural network encoder and a second parameter related to the first neural network decoder; and ([Zeng, 0003-0005; 0044-0045; 0050] collectively disclose the UE transmitting CSI to the base station (uplink) and the base station communicating with the UE on the downlink. [Zeng, 0067-0068] discloses the first network node (base station 105) providing new CSIRS-based CSI encoder parameters (first parameter) to the second neural network node (UE 115) for use in channel compression, and the first network node sending the decoder parameter (second parameter) to the second network node.)

communicating, by the UE, with the base station based on the information ([Zeng, 0067-0068] discloses the first network node (base station 105) providing new CSIRS-based CSI encoder parameters (neural network parameter) to the second neural network node (UE 115) for use in channel compression, and the first network node sending the decoder parameter to the second network node. [Zeng, 0062] discloses the UE encoding the CSI with a CSI encoder using neural-network-based channel compression and sending the CSI to the base station. The UE also may send additional information regarding which reference signal the CSI encoded payload is based upon.)

wherein the UE transmits uplink data to the base station based on the first parameter ([Zeng, 0062] discloses the UE encoding the CSI with a CSI encoder using neural-network-based channel compression and sending the CSI to the base station. The UE also may send additional information regarding which reference signal the CSI encoded payload is based upon.)

wherein the UE receives downlink data from the base station based on the second parameter ([Zeng, 0065-0066] discloses that the UE contains a CSI encoder and a CSI decoder, where the CSI decoder receives CSI feedback from a communication link. [Zeng, 0003-0005; 0044-0045; 0050] collectively disclose the UE transmitting CSI to the base station (uplink) and the base station communicating with the UE on the downlink.)

However, Zeng does not specifically disclose: transmitting, by a user equipment (UE) including a first neural network encoder and a first neural network decoder, a preamble to a base station through a physical random access channel (PRACH); performing, by the UE, an uplink transmission to the base station based on an uplink grant scheduled in the random access response; receiving, by the UE, a contention resolution message from the base station; wherein the first neural network encoder includes an interleaver, a recursive systematic convolutional (RSC) code, and an accumulator; and wherein, based on a difference between weights applied to an input data sequence input to the first neural network encoder being less than or equal to a specific value, the input data sequence is passed through the accumulator to make the difference between the weights greater than the specific value.

Sattiraju teaches: wherein the first neural network encoder includes an interleaver, a recursive systematic convolutional (RSC) code, and an accumulator ([Sattiraju, page 1, right col., line 31 - page 2, left col., line 2; Fig. 1] and [Sattiraju, page 2, left col., lines 19-26] collectively disclose a parallel deep learning encoder network separated by an interleaver. LTE uses two identical 8-state recursive systematic convolutional (RSC) encoders concatenated in parallel. The interleaver is used to scramble the bits and to provide different input data to each neural network. [Sattiraju, page 2, right col., lines 10-17] The shift register that stores the tail bits after encoding and pads them to the stream bits is the accumulator.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of both Zeng and Sattiraju, to use Sattiraju's method of generating separate input data for each encoder using an interleaver to implement the machine-learning-based communication method of Zeng. The suggestion and/or motivation for doing so is to improve the accuracy of the machine-learning-based communication system by spreading burst errors into single-bit errors, which can easily be corrected through error correction codes.
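The motivation cited for the combination, spreading burst errors into single-bit errors, is a standard interleaver property, and a small demonstration may help. A minimal Python sketch with an illustrative random permutation, not Sattiraju's actual interleaver:

    import random

    def interleave(bits, perm):
        return [bits[p] for p in perm]

    def deinterleave(bits, perm):
        out = [0] * len(bits)
        for i, p in enumerate(perm):
            out[p] = bits[i]
        return out

    rng = random.Random(0)
    n = 16
    perm = rng.sample(range(n), n)      # illustrative random permutation

    tx = interleave([0] * n, perm)      # all-zero block, interleaved
    rx = tx[:]
    for i in range(4, 8):               # channel burst: 4 adjacent bit errors
        rx[i] ^= 1
    print(deinterleave(rx, perm))       # the burst lands on scattered positions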
However, Zeng in view of Sattiraju does not specifically disclose: transmitting, by a user equipment (UE) including a first neural network encoder and a first neural network decoder, a preamble to a base station through a physical random access channel (PRACH); performing, by the UE, an uplink transmission to the base station based on an uplink grant scheduled in the random access response; receiving, by the UE, a contention resolution message from the base station; and wherein, based on a difference between weights applied to an input data sequence input to the first neural network encoder being less than or equal to a specific value, the input data sequence is passed through the accumulator to make the difference between the weights greater than the specific value.

Aragao teaches:

transmitting a preamble to a base station through a PRACH ([Aragao, left col., A. Random-Access Channel Procedure, lines 1-16] (Msg1) discloses transmitting a preamble code to the eNodeB (the base station) on the PRACH.)

performing, by the UE, an uplink transmission to the base station based on an uplink grant scheduled in the random access response; ([Aragao, left col., A. Random-Access Channel Procedure, lines 1-16] The Random Access Response (RAR) (Msg2) and (Msg3) disclose the eNodeB receiving the access request, assigning an identifier to devices, and granting resources on the uplink channel for subsequent message exchanges (i.e., an uplink grant scheduled in the random access response), and the device (UE) sending the identifier assigned in the previous message.)

receiving, by the UE, a contention resolution message from the base station; ([Aragao, left col., A. Random-Access Channel Procedure, lines 1-16] (Msg4) discloses waiting for and receiving the contention resolution message from the eNodeB (the base station). If the device identifier is present in the contention message, an ACK message is sent to the eNodeB.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Zeng, Sattiraju, and Aragao, to use Aragao's method of transmitting a preamble to the base station, performing an uplink transmission to the base station, and receiving a contention resolution message from the base station to implement the machine-learning-based communication method of Zeng. The suggestion and/or motivation for doing so is to improve the accuracy and security of the communication system by confirming that the correct UE has been identified after transmitting a signal.

However, Zeng in view of Sattiraju and further in view of Aragao does not specifically disclose: wherein, based on a difference between weights applied to an input data sequence input to the first neural network encoder being less than or equal to a specific value, the input data sequence is passed through the accumulator to make the difference between the weights greater than the specific value.

Yuan teaches: wherein, based on a difference between weights applied to an input data sequence input to the first neural network encoder being less than or equal to a specific value, the input data sequence is passed through the accumulator to make the difference between the weights greater than the specific value. ([Yuan, page 486, left col., lines 7-45] discloses calculating differences between selected integers and previously selected integers; when the difference is bigger than the minimum interleaver distance S, the current integer is saved, and the process is repeated until N integers are selected. The differences between the selected integers are greater than the specific value S because only integers whose differences exceed S are kept.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Zeng, Sattiraju, Aragao, and Yuan, to use Yuan's method of accumulating selected values based on the specific value to implement the machine-learning-based communication method of Zeng. The suggestion and/or motivation for doing so is to improve the accuracy of the communication system by reducing the number of low-weight error patterns [Yuan, page 486, lines 7-13].
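The selection procedure the rejection paraphrases from Yuan is essentially the S-random interleaver construction. A minimal Python sketch under that assumption; parameter names are ours, and Yuan's exact procedure may differ in detail:

    import random

    def s_random_interleaver(n, s, seed=0, max_restarts=100):
        # Keep a candidate integer only if it differs by more than S from each
        # of the S most recently selected integers; repeat until N are chosen.
        rng = random.Random(seed)
        for _ in range(max_restarts):
            pool = list(range(n))
            rng.shuffle(pool)
            chosen = []
            while pool:
                ok = next((c for c in pool
                           if all(abs(c - p) > s for p in chosen[-s:])), None)
                if ok is None:
                    break               # dead end: restart with a new shuffle
                chosen.append(ok)
                pool.remove(ok)
            if not pool:
                return chosen
        raise RuntimeError("no permutation found; try a smaller S")

    print(s_random_interleaver(16, 2))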
Regarding Claim 3, Zeng teaches: The method of claim 1, wherein the information informs of at least one of a type of a neural network, a number of layers of the neural network, an activation function for each of the layers, an optimization method for the neural network, or a weight for each of the layers. ([Zeng, 0003-0005; 0044-0045; 0050] collectively disclose the UE transmitting CSI to the base station (uplink) and the base station communicating with the UE on the downlink. [Zeng, 0067-0068] discloses the first network node (base station 105) providing new CSIRS-based CSI encoder parameters (first parameter) to the second neural network node (UE 115) for use in channel compression, and the first network node sending the decoder parameter (second parameter) to the second network node.)

Regarding Claim 4, Zeng teaches: The method of claim 3, wherein the weight is defined in advance. ([Zeng, 0003-0005; 0044-0045; 0050] collectively disclose the UE transmitting CSI to the base station (uplink) and the base station communicating with the UE on the downlink. [Zeng, 0067-0068] discloses the first network node (base station 105) providing new (defined in advance in the first network node) CSIRS-based CSI encoder parameters (first parameter) to the second neural network node (UE 115) for use in channel compression, and the first network node sending the decoder parameter (second parameter) to the second network node.)

Regarding Claim 5, Zeng teaches: The method of claim 1, wherein the base station includes a second neural network encoder and a second neural network decoder composed of a neural network. ([Zeng, 0091-0092; Fig. 7; Fig. 8] discloses the base station 105 comprising CSI Encoder 704 (second neural network encoder), CSI Decoder 702 (second neural network decoder), Encoder/Decoder parameter generation 705, and Encoder/Decoder parameters 706, and the UE 115 including CSI Encoder 804, CSI Decoder 802, Encoder/Decoder parameters 806, and a parameter generation 805. [Zeng, 0059 and 0088] collectively disclose that the CSI encoder and the CSI decoder are implemented using neural networks.)

Regarding Claim 7, Zeng in view of Sattiraju teaches: The method of claim 1, wherein the first neural network encoder comprises a plurality of neural networks arranged in parallel, and wherein some of the plurality of the neural networks have different input data. ([Sattiraju, page 1, right col., line 31 - page 2, left col., line 2; Fig. 1] and [Sattiraju, page 2, left col., lines 19-26] collectively disclose a parallel deep learning encoder network separated by an interleaver. LTE uses two identical 8-state recursive systematic convolutional encoders concatenated in parallel. The interleaver is used to scramble the bits and to provide different input data to each neural network.)

Regarding Claim 9, Zeng in view of Sattiraju teaches: The method of claim 7, wherein the different input data are generated based on an interleaver and an accumulator. ([Sattiraju, page 1, right col., line 31 - page 2, left col., line 2; Fig. 1] and [Sattiraju, page 2, left col., lines 19-26] collectively disclose a parallel deep learning encoder network separated by an interleaver. LTE uses two identical 8-state recursive systematic convolutional encoders concatenated in parallel. The interleaver is used to scramble the bits and to provide different input data to each neural network. [Sattiraju, page 2, right col., lines 10-17] The shift register that stores the tail bits after encoding and pads them to the stream bits is the accumulator.)

Regarding Claim 10, Zeng in view of Sattiraju teaches: The method of claim 7, wherein the different input data are generated based on an interleaver and a recursive systematic convolutional (RSC) code. ([Sattiraju, page 1, right col., line 31 - page 2, left col., line 2; Fig. 1] and [Sattiraju, page 2, left col., lines 19-26] collectively disclose a parallel deep learning encoder network separated by an interleaver. LTE uses two identical 8-state recursive systematic convolutional encoders concatenated in parallel. The interleaver is used to scramble the bits and to provide different input data to each neural network.)

Regarding Claim 11, Zeng in view of Sattiraju teaches: The method of claim 7, wherein the different input data comprises systematic input data. ([Sattiraju, page 2, left col., II. TURBO ENCODER ARCHITECTURE AND INTERFACES, lines 6-21] discloses that the input bits are the systematic bits.)
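For context on the identical 8-state RSC constituent encoders the Sattiraju citations describe, here is a minimal Python sketch of one such encoder. It assumes the standard LTE constituent generators (g0 = 13, g1 = 15 in octal) and omits trellis termination and tail bits, so it is illustrative rather than LTE-conformant:

    def rsc_encode(bits):
        # 8-state recursive systematic convolutional encoder, G = [1, g1/g0],
        # with g0 = 1 + D^2 + D^3 (feedback) and g1 = 1 + D + D^3 (feedforward).
        s = [0, 0, 0]                   # three delay elements -> 8 states
        out = []
        for u in bits:
            fb = u ^ s[1] ^ s[2]        # feedback taps from g0
            parity = fb ^ s[0] ^ s[2]   # feedforward taps from g1
            s = [fb, s[0], s[1]]
            out.append((u, parity))     # (systematic bit, parity bit)
        return out

    print(rsc_encode([1, 0, 1, 1, 0]))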
Regarding Claim 12, Zeng teaches: The method of claim 1, wherein the first parameter and the second parameter are generated based on training performed by the base station. ([Zeng, 0067-0068] discloses the first network node (base station 105) having RS observations, which are used to train instances of the CSI encoder and CSI decoder. The trained CSIRS-based encoder parameters and the CSI decoder parameters are sent to the second network node.)

Regarding Claim 13, Zeng teaches: The method of claim 1, wherein the first parameter and the second parameter are generated by a training device, and wherein the UE receives the first parameter and the second parameter transmitted to the base station by the training device from the base station. ([Zeng, 0067-0068] discloses the first network node (base station 105) having RS observations, which are used to train instances of the CSI encoder and CSI decoder. The trained CSIRS-based encoder parameters and the CSI decoder parameters are sent to the second network node.)

Regarding Claim 14, Zeng teaches: The method of claim 1, wherein the information comprises at least one of a transmission-related weight and a reception-related weight. ([Zeng, 0067-0068] discloses the first network node (base station 105) providing new CSIRS-based CSI encoder parameters (transmission-related weight) to the second neural network node (UE 115) for use in channel compression, and the first network node sending the decoder parameter (reception-related weight) to the second network node. [Zeng, 0059] The CSI encoder provides channel compression (transmission) and the CSI decoder provides channel decompression (reception).)

Regarding Claim 15, Zeng teaches: A user equipment (UE) comprising: at least one memory; at least one transceiver; and at least one processor operably connectable to the at least one memory and the at least one transceiver, wherein the at least one memory stores instructions that, based on being executed by the at least one processor, cause the at least one processor to perform operations comprising: ([Zeng, 0064] The UE and the base station comprise a network node transmit processor, storage memories, and/or other circuitry, such as a controller/processor, that provides one or more functions of CSI feedback. [Zeng, 0092; Fig. 8] discloses that the UE comprises CSI Encoder 804, CSI Decoder 802, Encoder/Decoder parameters 806, and a parameter generation 805. [Zeng, 0059 and 0088] collectively disclose that the CSI encoder and the CSI decoder are implemented using neural networks.) Claim 15 is an apparatus claim that implements the same features as method claim 1 and is rejected for at least the same reasons.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Zeng in view of Sattiraju, in view of Aragao, in view of Yuan, and further in view of Chen et al. (US 20230084164 A1, hereinafter "Chen").

Regarding Claim 2, Zeng in view of Sattiraju, Aragao, and Yuan teaches the method of claim 1. However, the combination does not specifically disclose: wherein the information is transmitted based on radio resource control (RRC) signaling, medium access control (MAC) signaling or layer 1 (L1) signaling. Chen teaches: wherein the information is transmitted based on radio resource control (RRC) signaling, medium access control (MAC) signaling or layer 1 (L1) signaling. ([Chen, 0086] discloses that the UE utilizes a media access control-control element (MAC-CE) or radio resource control (RRC) signaling.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Zeng, Sattiraju, and Chen, to use Chen's method of transmitting information based on RRC or MAC signaling to implement the machine-learning-based communication method of Zeng. The suggestion and/or motivation for doing so is to improve the security of the machine-learning-based communication system by verifying the sender's frame check sequences.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Zeng in view of Sattiraju, in view of Aragao, in view of Yuan, and further in view of Chronopoulos et al. ("Turbo Coded OFDM with Large Number of Subcarriers", 2012, hereinafter "Chronopoulos").
Regarding Claim 8, Zeng in view of Sattiraju, Aragao, and Yuan teaches: The method of claim 7, wherein the different input data are generated based on a ... ([Sattiraju, page 1, right col., line 31 - page 2, left col., line 2; Fig. 1] and [Sattiraju, page 2, left col., lines 19-26] collectively disclose a parallel deep learning encoder network separated by an interleaver. LTE uses two identical 8-state recursive systematic convolutional encoders concatenated in parallel. The interleaver is used to scramble the bits and to provide different input data to each neural network.) However, the combination does not specifically disclose: The method of claim 7, wherein the different input data are generated based on a plurality of interleavers. Chronopoulos teaches: The method of claim 7, wherein the different input data are generated based on a plurality of interleavers. ([Chronopoulos, page 162, right col., lines 1-26; Figure 2] discloses using two random interleavers in parallel to generate different input data for each convolutional encoder.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Zeng, Sattiraju, Aragao, Yuan, and Chronopoulos, to use Chronopoulos's method of generating separate input data for each encoder using a plurality of interleavers to implement the machine-learning-based communication method of Zeng. The suggestion and/or motivation for doing so is to improve the accuracy of the machine-learning-based communication system by spreading burst errors into single-bit errors, which can easily be corrected through error correction codes.

Response to Arguments

Response to Arguments under 35 U.S.C. 102 and 103

Arguments: Applicant asserts that Sattiraju does not disclose or suggest: wherein the first neural network encoder includes an interleaver, a recursive systematic convolutional (RSC) code, and an accumulator, and wherein, based on a difference between weights applied to an input data sequence input to the first neural network encoder being less than or equal to a specific value, the input data sequence is passed through the accumulator to make the difference between the weights greater than the specific value.

Examiner's Response: First, the examiner notes that the broadest reasonable interpretation of "accumulator" is a device/circuit that gathers, collects, or piles up input values. The shift register in Sattiraju can be interpreted as the accumulator because the shift register gathers the input data and piles the input values in memory. Regarding "and wherein, based on a difference between weights applied to an input data sequence input to the first neural network encoder being less than or equal to a specific value, the input data sequence is passed through the accumulator to make the difference between the weights greater than the specific value," applicant's arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUN KWON, whose telephone number is (571) 272-2072. The examiner can normally be reached Monday through Friday, 7:30 AM to 4:30 PM ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abdullah Kawsar, can be reached at (571) 270-3169. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JUN KWON/
Examiner, Art Unit 2127

/JEREMY L STANLEY/
Examiner, Art Unit 2127

Prosecution Timeline

Jan 05, 2023
Application Filed
Dec 15, 2025
Non-Final Rejection — §102, §103, §112
Mar 18, 2026
Response Filed
Apr 02, 2026
Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602569
EXTRACTING ENTITY RELATIONSHIPS FROM DIGITAL DOCUMENTS UTILIZING MULTI-VIEW NEURAL NETWORKS
2y 5m to grant • Granted Apr 14, 2026
Patent 12602609
UPDATING MACHINE LEARNING TRAINING DATA USING GRAPHICAL INPUTS
2y 5m to grant • Granted Apr 14, 2026
Patent 12579436
Tensorized LSTM with Adaptive Shared Memory for Learning Trends in Multivariate Time Series
2y 5m to grant • Granted Mar 17, 2026
Patent 12572777
Policy-Based Control of Multimodal Machine Learning Model via Activation Analysis
2y 5m to grant • Granted Mar 10, 2026
Patent 12493772
LAYERED MULTI-PROMPT ENGINEERING FOR PRE-TRAINED LARGE LANGUAGE MODELS
2y 5m to grant • Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 38%
With Interview: 84% (+46.2%)
Median Time to Grant: 4y 3m
PTA Risk: Moderate
Based on 68 resolved cases by this examiner. Grant probability derived from career allow rate.
