Prosecution Insights
Last updated: April 19, 2026
Application No. 18/739,647

USER EQUIPMENT AND WIRELESS COMMUNICATION METHOD FOR NEURAL NETWORK COMPUTATION

Final Rejection (§103)
Filed: Jun 11, 2024
Examiner: FOLLANSBEE, KEITH TRAN-DANH
Art Unit: 2411
Tech Center: 2400 — Computer Networks
Assignee: Acer Incorporated
OA Round: 2 (Final)
Grant Probability: 64% (Moderate)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 82%

Examiner Intelligence

Grants 64% of resolved cases.

Career Allow Rate: 64% (54 granted / 85 resolved; +5.5% vs TC avg)
Interview Lift: +18.6% for resolved cases with interview
Avg Prosecution: 3y 2m
Total Applications: 130 across all art units (45 currently pending)

Statute-Specific Performance

§101: 1.2% (-38.8% vs TC avg)
§103: 65.9% (+25.9% vs TC avg)
§102: 16.4% (-23.6% vs TC avg)
§112: 12.3% (-27.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 85 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 12-20 have been amended.

Allowable Subject Matter

Claims 6, 7, 9, 16, 17, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The reasons for allowance of claims 6, 9, 16, and 19 are that the prior art of record neither anticipates nor renders obvious the recited combination as a whole, including the following limitations. Claims 6 and 16 recite “wherein the scheduling request includes a binary indication to indicate that the scheduling request is for the neural network computation, a request type, a request descriptor, a model identifier, and a size of the neural network computation results”. Claims 9 and 19 recite “wherein the message descriptor includes a neural network type, a size of the neural network computation results, an average rate of a transmission of the neural network computation results, and a peak rate of the transmission of the neural network computation results”. Claims 7 and 17 are objected to as being dependent on claims 6 and 16, respectively.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 11, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki et al. (US 20220182802) in view of Capalija et al. (US 20200401402).

Regarding claims 1 and 11, Pezeshki teaches a user equipment for neural network computation, comprising: a processor ([0008] “a UE for wireless communication includes a memory and one or more processors coupled to the memory”), performing a neural network computation to generate a plurality of neural network computation results ([0067] “The UEs 120 may locally train the machine learning component using training data collected by the UEs, respectively.
A UE 120 may train a machine learning component such as a neural network by optimizing a set of model parameters, w(n), associated with the machine learning component, where n is the federated learning round index”), wherein the neural network computation results are intermediate data of the neural network computation ([0073] “a UE 120 may transmit a compressed set of gradients … where q represents a compression scheme applied to the set of gradients gk(n)”), wherein the intermediate data are the neural network computation results corresponding to computation nodes in partial layers of the neural network computation (Fig. 3 “Local Update 1, 2, k 360”, “Aggregate Update 370”, “Update Global Machine Learning Component 380”; [0073]-[0074] “As shown by reference number 360, the UEs 120 may transmit their respective local updates (shown as “local update 1, local update 2, . . . , local update k” … the base station 110 (e.g., using the second communication manager 330) may aggregate the local updates received from the UEs 120. For example, the second communication manager 330 may average the received gradients to determine an aggregated update … As shown by reference number 380, the second communication manager 330 may update the global machine learning component based on the aggregated updates. In some aspects, for example, the second communication manager 330 may update the global machine learning component by normalizing the local datasets by treating each dataset size, |Dk|, as being equal”; [0079] “the federated learning configuration may be transmitted by an application layer of the base station 410 and/or may originate from an application layer of the base station 410”); and a transmitter, transmitting a data packet to a base station ([0083] “the UE 405 may determine an update associated with the machine learning component based at least in part on the training. As shown by reference number 425, the UE 405 may transmit, and the base station 410 may receive, an indication of completion and/or an indication of a number of epochs completed by the UE 405”) to perform computation corresponding to computation nodes in remaining layers of the neural network computation (Fig. 3; [0073]-[0074] and [0079], as quoted above), wherein the descriptor comprises parameters and settings corresponding to the neural network computation results ([0026] “For example, a local update may include the locally updated machine learning component (e.g., updated as a result of the local training operation), data indicating one or more aspects (e.g., parameter values, output values, weights) of the locally updated machine learning component, a set of gradients associated with a loss function corresponding to the locally updated machine learning component, a set of parameters (e.g., neural network weights) corresponding to the locally updated machine learning component, and/or the like”), and wherein the parameters and settings corresponding to the neural network computation results comprise at least two of: a neural network type ([0072] “In some aspects, the local update may include an updated set of model parameters w(n), a difference between the updated set of model parameters w(n) and a prior set of model parameters w(n-1), one or more gradients of the set of gradients gk(n), an updated machine learning component (e.g., an updated neural network model), and/or the like”), number of layers in the neural network, a size of the neural network computation results ([0071] “The training output yj may be used to facilitate determining the model parameters w(n) that maximize a variational lower bound function… where |Dk| is the size of the local dataset associated with the UE k. A stochastic gradient descent (SGD) algorithm may be used to optimize the model parameters w(n). The first communication manager 320 may perform one or more SGD procedures to determine the optimized parameters w(n) and may determine the gradients, gk(n)=∇Fk(w(n)), of the loss function F(w). The first communication manager 320 may further refine the machine learning component based at least in part on the loss function value, the gradients, and/or the like”), level of the neural network computation results, a sequence number, and a time stamp.

Pezeshki does not explicitly teach wherein the data packet comprises the neural network computation results, a packet header, and a descriptor, wherein the packet header is different from the descriptor and comprises an indicator to indicate that the data packet. Capalija teaches wherein the data packet comprises the neural network computation results, a packet header ([0052] “The packet can be a packet such as packet 210 of FIG. 2 and can include a header 214 and a payload 212.”), and a descriptor ([0052] “For example, the payload can include the instructions to be executed by processing cores or the data for variables in the application code. In the specific example of FIG. 7, the payload can include the operational code and the operand identifiers”), wherein the packet header is different from the descriptor and comprises an indicator to indicate that the data packet ([0052] “the payload can include the operational code and the operand identifiers defined by the compiler in steps S706 and S708, which can in combination define a set of instructions for the packet”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pezeshki to incorporate the teachings of Capalija. One of ordinary skill in the art would have been motivated to make this modification in order to effectively transport data.

Regarding claims 2 and 12, Pezeshki does not explicitly teach wherein the data packet further comprises: a data payload, comprising the neural network computation results. Capalija teaches wherein the data packet further comprises: a data payload, comprising the neural network computation results ([0026] “In accordance with specific embodiments disclosed herein, those tensors can be packetized by being divided into a large number of packets, such as packets 210, 210a, 210b, 210c, 210d, 210e, each having a payload 212, containing computation data … As described herein, these packets 210 can then be used to execute the complex computation, in the illustrated case the complex computation includes the execution of a directed graph representing an ANN using a network of processing cores 250”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pezeshki to incorporate the teachings of Capalija.
One of ordinary skill in the art would have been motivated to make this modification in order to effectively transport data.

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki in view of Capalija as applied to claims 1, 2, 11, and 12 above, and further in view of Matthews et al. (US 10,931,602).

Regarding claims 3 and 13, Pezeshki teaches the data packet being used to carry the neural network computation results ([0083] “the UE 405 may determine an update associated with the machine learning component based at least in part on the training. As shown by reference number 425, the UE 405 may transmit, and the base station 410 may receive, an indication of completion and/or an indication of a number of epochs completed by the UE 405”). Pezeshki and Capalija do not teach wherein a Protocol Data Unit (PDU) type is set in the data packet to indicate that the data packet. Matthews teaches wherein a Protocol Data Unit (PDU) type is set in the data packet to indicate that the data packet (col. 22, lines 10-25: “A node 410 may operate on network data at several different layers, and therefore view the same data as belonging to several different types of data units. At a higher level, a node 410 may view data as belonging to protocol data units (“PDUs”) of a certain type, such as packets or data units at any other suitable network level”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Pezeshki in view of Capalija to incorporate the teachings of Matthews. One of ordinary skill in the art would have been motivated to make this modification in order to efficiently transport data.

Claims 4, 5, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki in view of Capalija as applied to claims 1, 2, 12, and 13 above, and further in view of Ly et al. (US 20220104033).

Regarding claims 4 and 14, Pezeshki and Capalija do not explicitly teach wherein a Quality of Service (QoS) type is set in the data packet to indicate that the data packet is being used to carry the neural network computation results with corresponding QoS characteristics. Ly teaches wherein a Quality of Service (QoS) type is set in the data packet to indicate that the data packet is being used to carry the neural network computation results with corresponding QoS characteristics ([0061] “A QoS flow is associated with a QoS identifier, which identifies a QoS parameter associated with the QoS flow, and a QoS flow identifier (QFI), which identifies the QoS flow. Policy and charging parameters are enforced at the QoS flow granularity. A QoS flow can include one or more service data flows (SDFs), so long as each SDF of a QoS flow is associated with the same policy and charging parameters”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Pezeshki in view of Capalija to incorporate the teachings of Ly. One of ordinary skill in the art would have been motivated to make this modification in order to efficiently transport data.

Regarding claims 5 and 15, Pezeshki and Capalija do not explicitly teach wherein the data packet comprises a QoS Flow Identifier (QFI) or a 5G QoS Identifier (5QI). Ly teaches wherein the data packet comprises a QoS Flow Identifier (QFI) or a 5G QoS Identifier (5QI) ([0061], as quoted above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Pezeshki in view of Capalija to incorporate the teachings of Ly. One of ordinary skill in the art would have been motivated to make this modification in order to efficiently transport data.

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki in view of Capalija as applied to claims 1, 2, 11, and 12 above, and further in view of Min et al. (EP 4169333A1).

Regarding claims 8 and 18, Pezeshki teaches wherein the message descriptor includes a neural network type ([0072], as quoted above for claims 1 and 11) and a size of the neural network computation results ([0071], as quoted above for claims 1 and 11).
Pezeshki and Capalija do not teach wherein the transmitter sends an uplink buffer status report (BSR) message to the base station for the neural network computation, wherein the BSR message includes a message descriptor, wherein the message descriptor includes a neural network type and a size of the neural network computation results. Min teaches wherein the transmitter sends an uplink buffer status report (BSR) message to the base station for the neural network computation ([00159] “the method performed at a terminal device may comprise: S101, predicting a buffer size associated with data to be transmitted; and S102, transmitting a buffer state report, BSR, including the predicted buffer size to a network node”), wherein the BSR message includes a message descriptor, wherein the message descriptor includes a neural network type ([00182] “the BSR includes the probability, such that the network node may decide whether to give grant to the terminal device, based on the probability”) and a size of the neural network computation results ([0172] “terminal device may make the predicting as soon as obtaining the data itself or at least some information of the data, such as the size, a type, a content, or a required transmission rate… may predict a buffer size, and transmit a BSR including the predicted buffer size, so as to reduce the latency”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Pezeshki in view of Capalija to incorporate the teachings of Min. One of ordinary skill in the art would have been motivated to make this modification in order to reduce latency.

Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki in view of Capalija as applied to claims 1, 2, 11, and 12 above, and further in view of Park et al. (US 2020/0137675).
Regarding claims 10 and 20, Pezeshki teaches the neural network computation ([0073] “a UE 120 may transmit a compressed set of gradients … where q represents a compression scheme applied to the set of gradients gk(n)”) and a message descriptor ([0083] “the UE 405 may determine an update associated with the machine learning component based at least in part on the training. As shown by reference number 425, the UE 405 may transmit, and the base station 410 may receive, an indication of completion and/or an indication of a number of epochs completed by the UE 405”). Pezeshki and Capalija do not teach wherein the transmitter sends a Radio Resource Control (RRC) connection setup message to the base station, wherein the RRC connection setup message includes a binary indication to indicate that the RRC connection setup message is for a Protocol Data Unit (PDU) session type field. Park teaches wherein the transmitter sends a Radio Resource Control (RRC) connection setup message to the base station (Fig. 18, “PDU Session Establishment Request”; [0132] “The RRC layer performs the role of controlling radio resources between the UE and the network. To this purpose, the UE and the network exchange RRC messages through the RRC layer”), wherein the RRC connection setup message includes a binary indication to indicate that the RRC connection setup message is for a Protocol Data Unit (PDU) session type field ([0015] “Further, the indicator may be included in a request type field, user equipment (UE) network capability field, or session management (SM) payload type field in the registration request message”; [0223] “PDU session: an association between a UE and a data network providing a PDU connection service. Association types include an IP type, an Ethernet type, and a non-IP type; an association between a UE providing a PDU connectivity service and a data network. The association type may be an Internet protocol (IP), Ethernet, or unstructured”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Pezeshki in view of Capalija to incorporate the teachings of Park. One of ordinary skill in the art would have been motivated to make this modification in order to reduce latency.

Response to Arguments

Applicant's arguments filed 08/30/2025 have been fully considered but they are not persuasive.

Applicant's Argument: Applicant remarks that Pezeshki does not disclose that |Dk| is associated with the neural network computation results. Therefore, Applicant respectfully submits that Pezeshki does not disclose “a size of the neural network computation results”, so Pezeshki does not disclose “wherein the parameters and settings corresponding to the neural network computation results comprise at least two of: a neural network type, number of layers in the neural network, a size of the neural network computation results, level of the neural network computation results, a sequence number, and a time stamp” in claim 11.

Examiner's Response: Examiner respectfully disagrees. Pezeshki shows a neural network type ([0072] “In some aspects, the local update may include an updated set of model parameters w(n), a difference between the updated set of model parameters w(n) and a prior set of model parameters w(n-1), one or more gradients of the set of gradients gk(n), an updated machine learning component (e.g., an updated neural network model), and/or the like”; Examiner's Note: the neural network type can, for example, be broadly interpreted as the updated neural network model) and a size of the neural network computation results ([0071] “The training output yj may be used to facilitate determining the model parameters w(n) that maximize a variational lower bound function… where |Dk| is the size of the local dataset associated with the UE k. A stochastic gradient descent (SGD) algorithm may be used to optimize the model parameters w(n). The first communication manager 320 may perform one or more SGD procedures to determine the optimized parameters w(n) and may determine the gradients, gk(n)=∇Fk(w(n)), of the loss function F(w). The first communication manager 320 may further refine the machine learning component based at least in part on the loss function value, the gradients, and/or the like”). First, |Dk| is the size of the local dataset, which is how the input is batched. Furthermore, w(n), the weight, is run through the model and trained in order to optimize the model. Furthermore, the n in w(n) could broadly be interpreted as the size, because n relates to the dimension of the weight. The optimized parameters could broadly be interpreted as the computation result. The input size of the local dataset is used in the SGD algorithm to determine the optimized parameters w(n).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEITH TRAN-DANH FOLLANSBEE, whose telephone number is (571) 272-3071. The examiner can normally be reached 10 am-6 pm, M-Th. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Derrick Ferris, can be reached at 571-272-3123. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.T.F./
Examiner, Art Unit 2411

/DERRICK W FERRIS/
Supervisory Patent Examiner, Art Unit 2411

Prosecution Timeline

Jun 11, 2024
Application Filed
May 29, 2025
Non-Final Rejection — §103
Aug 30, 2025
Response Filed
Feb 01, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner involving similar technology

Patent 12603684
METHOD AND DEVICE FOR COMMUNICATION
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12513029
CARRIER FREQUENCY TRACKING METHOD, SIGNAL TRANSMISSION METHOD, AND RELATED APPARATUS
Granted Dec 30, 2025 (2y 5m to grant)
Patent 12507284
ENHANCED UPLINK POWER CONTROL FOR PHYSICAL RANDOM ACCESS CHANNEL AFTER INITIAL ACCESS
Granted Dec 23, 2025 (2y 5m to grant)
Patent 12476895
DEVICE FOR CONSTRUCTING NEURAL BLOCK RAPID-PROPAGATION PROTOCOL-BASED BLOCKCHAIN AND OPERATION METHOD THEREOF
Granted Nov 18, 2025 (2y 5m to grant)
Patent 12463907
VALIDATING NETWORK FLOWS IN A MULTI-TENANTED NETWORK APPLIANCE ROUTING SERVICE
Granted Nov 04, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 64%
With Interview: 82% (+18.6%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 85 resolved cases by this examiner. Grant probability derived from career allow rate.
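The headline figures follow arithmetically from the examiner's career counts shown above (54 granted of 85 resolved, +18.6-point interview lift); a minimal sketch of that derivation, assuming standard rounding (the exact rounding convention used by the page is an assumption, and the variable names are illustrative):

```python
# Derive the headline projections from the examiner's resolved-case counts.
granted, resolved = 54, 85

# Career allow rate serves directly as the baseline grant probability.
allow_rate_pct = granted / resolved * 100        # ~63.5
grant_probability = round(allow_rate_pct)        # 64

# Interview lift is reported as an additive bump in percentage points.
interview_lift_pts = 18.6
with_interview = round(allow_rate_pct + interview_lift_pts)  # 82

print(grant_probability, with_interview)
```

Note that applying the lift to the unrounded allow rate (63.5 + 18.6 = 82.1) reproduces the 82% figure; adding it to the rounded 64% would give 83%.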
