Prosecution Insights
Last updated: April 19, 2026
Application No. 18/229,292

METHOD AND APPARATUS FOR TRANSMITTING AND RECEIVING FEEDBACK INFORMATION BASED ON ARTIFICIAL NEURAL NETWORK

Status: Non-Final OA (§103)
Filed: Aug 02, 2023
Examiner: LALCHINTHANG, VANNEILIAN
Art Unit: 2414
Tech Center: 2400 (Computer Networks)
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 11m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 79% (323 granted / 410 resolved), +20.8% vs TC avg (above average)
Interview Lift: +14.3% (moderate) among resolved cases with an interview
Typical Timeline: 2y 11m average prosecution; 33 applications currently pending
Career History: 443 total applications across all art units

Statute-Specific Performance

§101: 3.6% (-36.4% vs TC avg)
§103: 74.5% (+34.5% vs TC avg)
§102: 2.6% (-37.4% vs TC avg)
§112: 7.9% (-32.1% vs TC avg)

Tech Center averages are estimates; based on career data from 410 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 08/02/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 7, 10, 12-16 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hao et al. [hereinafter Hao], U.S. 2025/0088232 A1, in view of Oh et al. [hereinafter Oh], KR20230149914A, further in view of Han [hereinafter Han], U.S. 2021/0034971 A1.
Regarding claim 1, Hao discloses an operation method of a first communication node (Fig.6&7A-B [0083]-[0084], an operation method of a UE/first communication node), comprising: inputting first input data including first feedback information to a first encoder of a first artificial neural network corresponding to the first communication node (Fig.6&7A-B [0083]-[0084], inputting a channel estimation (H) including a complex matrix (Nt*Nr) on each RB (NRB) and CSI feedback(s) information to a PMI/RI encoder 702/first encoder of a first artificial intelligence neural network corresponding to the UE/first communication node); and transmitting the first feedback signal to a second communication node (Fig.7A-B [0084], the UE generates the CSI feedback(s), i.e., first feedback signal, to a base station/second communication node).

Although Hao discloses that the UE generates the quantized X bits (b) based on an encoding operation in the PMI/RI encoder 702/first encoder, Hao does not explicitly disclose the claim language of "latent data". In the same field of endeavor, Oh teaches generating first latent data based on an encoding operation in the first encoder (Fig.3-4 page 5/12 lines 44-54, generating a first sensitive latent expression and a first non-sensitive latent expression, i.e., latent data, based on an auto encoder of the artificial intelligence model learning device; Fig.2 page 4/12 lines 25-27 and Fig.1&5-6 page 3/12 lines 11-14, the latent representation generator generates a first sensitive latent expression and a first non-sensitive latent expression, i.e., latent data, based on an encoding operation using the first encoder); generating a first feedback signal including the first latent data (Fig.1-2&4 page 6/12 lines 33-38, generating a feedback, i.e., first feedback signal, including the first sensitive latent expression and the first non-sensitive latent expression, i.e., the first latent data, of the latent expression generation
unit 100, providing to at least one of the units 200, and Fig.1&5-6 page 7/12 lines 40-54); and wherein the first latent data included in the first feedback signal is decoded into first restored data corresponding to the first input data in a second decoder of a second artificial neural network corresponding to the second communication node (Fig.2-4 page 5/12 lines 11-43, the first sensitive latent representation and the first non-sensitive latent representation, i.e., first latent data, included in the feedback (FB) signal is decoded in the DC_1/first decoder into the restored label data, e.g., restored label data using at least one of a sensitive latent representation and a non-sensitive latent representation corresponding to the first input data (e.g., FT_D2, STV_D2, LBL_D2), in a decoder_2/second decoder of a contrast-model second artificial intelligence neural network corresponding to the data restoration unit 200/second communication node; Fig.5 page 7/12 lines 14-30 and Fig.6 page 8/12 lines 34-39 and Fig.5-6 page 6/12 lines 61-63 to 7/12 lines 1-2, Encoder_1 (EC_1) and Decoder_1 (DC_1) in a reference model (RM), i.e., a first artificial intelligence neural network, and Encoder_2 (EC_2) and Decoder_2 (DC_2) in a contrast model (CM), i.e., a second artificial intelligence neural network).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hao to incorporate the teaching of Oh in order to improve the prediction performance of an artificial intelligence device.
It would have been beneficial to use the first sensitive latent representation and the first non-sensitive latent representation, i.e., first latent data, included in the feedback signal, which is decoded in the first decoder into the restored label data (e.g., restored label data using at least one of a sensitive latent representation and a non-sensitive latent representation corresponding to the first input data (e.g., FT_D2, STV_D2, LBL_D2)) in a decoder_2/second decoder of a contrast-model second artificial intelligence neural network corresponding to the data restoration unit 200/second communication node, as taught by Oh, incorporated in the system of Hao, to improve the prediction stability of an artificial intelligence device. (Oh, Fig.1-2 page 3/12 lines 36-37, Fig.2-4 page 5/12 lines 11-43, Fig.5-6 page 6/12 lines 61-63 to 7/12 lines 1-2, Fig.5 page 7/12 lines 14-30 and Fig.6 page 8/12 lines 34-39)

However, Hao and Oh do not explicitly disclose wherein the first input data includes first common input data included in a common input data set previously shared between the first communication node and the second communication node. In the same field of endeavor, Han teaches wherein the first input data includes first common input data included in a common input data set previously shared between the first communication node and the second communication node (Fig.1&6 [0104], the first input data includes an input global data set included in the global data set previously shared, e.g., as the trained local model, between the terminal device 60/first communication node and the server/second communication node; Fig.1 [0010], [0024], Fig.3 [0089] and Fig.1 [0046], an input data set 110 is input to the neural network model 120 to perform a training or an inference function).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hao and Oh to incorporate the teaching of Han in order to improve the accuracy of the updated global model. It would have been beneficial to use first input data that includes the input global data set included in the global data set previously shared, e.g., as the trained local model, between the terminal device 60/first communication node and the server/second communication node, with an input data set 110 input to the neural network model 120 to perform a training or an inference function, as taught by Han, incorporated in the system of Hao and Oh, to achieve sufficient accuracy improvements. (Han, Fig.1 [0010][0024], Fig.1 [0046], Fig.1-2 [0088], Fig.3 [0089] and Fig.1&6 [0104])

Regarding claim 2, Hao, Oh and Han disclosed all the elements of claim 1 as stated above. Hao further discloses before the inputting of the first input data, receiving information at least on the second encoder of the second artificial neural network from the second communication node (Fig.8&11 [0098]-[0099], before the inputting of the first input data, receiving information, e.g., a second cover-code configuration, at least on the second encoder of the second machine learning module 1104b of the second artificial neural network from the second BS 102b/second communication node); and configuring the first encoder based on information on the second encoder (Fig.7A-B&11 [0085][0098][0099], configuring the first encoder based on information on the second encoder of the second machine learning module 1104b of the second artificial neural network from the second BS 102b/second communication node).
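The claim 1 architecture that the rejection maps onto Hao and Oh is an autoencoder-style feedback chain: first input data (carrying first feedback information) enters a first encoder at the first node, the encoder's output is the first latent data, and that latent data is packaged into a feedback signal for the second node. The following is a minimal numpy sketch of the encoder-side flow only; the dimensions, names, and linear encoder are all hypothetical choices for illustration, since neither the application nor the cited references fixes any of them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; nothing in the claim fixes them.
INPUT_DIM, LATENT_DIM = 8, 3

# Stand-in "first encoder": a random linear map. The application's
# encoder is a trained neural network; this only shows the data flow.
W_enc = rng.normal(size=(LATENT_DIM, INPUT_DIM))

def encode(first_input_data):
    """Generate first latent data from first input data (claim 1 step)."""
    return W_enc @ first_input_data

def build_feedback_signal(latent):
    """Wrap the first latent data into the first feedback signal."""
    return {"latent_data": latent.tolist()}

first_input_data = rng.normal(size=INPUT_DIM)   # includes first feedback info
first_latent = encode(first_input_data)
first_feedback_signal = build_feedback_signal(first_latent)
```

The compression is the point of the architecture: the feedback signal carries LATENT_DIM values instead of INPUT_DIM, which is the overhead reduction that motivates encoder/decoder-based CSI feedback.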
Additionally, Oh discloses before the inputting of the first input data, receiving information at least on the second encoder of the second artificial neural network from the second communication node (Fig.2&5 page 7/12 lines 14-22, before the inputting of the first input data, receiving information e.g., a second sensitive latent representation and a second non-sensitive latent representation at least on the EC_2/second encoder of the second artificial intelligence neural network from the data restoration unit 200/second communication node); and configuring the first encoder based on information on the second encoder (Fig.2&5 page 7/12 lines 40-50, configuring the EC_1 first encoder based on parameter information on the EC_2 second encoder and Fig.5-6 page 7/12 lines 1-19). Regarding claim 3, Hao, Oh and Han disclosed all the elements of claim 1 as stated above wherein Oh further discloses before the inputting of the first input data, performing a pre-training procedure for pre-training the first artificial neural network (Fig.6 page 9/12 lines 10-20, before the inputting of the first input data, the pre-training prediction procedure is performed for pre-training the artificial intelligence model/first artificial neural network), wherein the pre-training procedure is performed based on a first common latent data set generated in the first communication node based on the common input data set (Fig.6 page 9/12 lines 10-20, the pre-training prediction procedure is performed based on the sensitive latent representation and non-sensitive latent representation/first common latent data set and a sensitive latent representation and non-sensitive latent representation/ first common latent data set generated in the first communication node based on the common input data set (e.g., FT_D, STV_D, LBL_D)), and a second common latent data set generated in the second communication node based on the common input data set (Fig.6 page 9/12 lines 10-20, the pre-training prediction 
procedure is performed based on a sensitive latent expression and non-sensitive latent expression/second common latent data set generated in the second communication node based on the common input data set (e.g., FT_D, STV_D, LBL_D) and Fig.7 page 9/12 lines 30-33, the artificial intelligence model training device includes feature data (FT_D), sensitive data (STV_D), label data (LBL_D), sensitive latent representation (STV_LTR), non-sensitive latent representation (NSTV_LTR), restored feature data (rFT_D), and restored sensitive data).

Regarding claim 4, Hao, Oh and Han disclosed all the elements of claim 3 as stated above. Oh further discloses the performing of the pre-training procedure comprises: generating, by the first encoder, the first common latent data set based on the common input data set (Fig.2-4 page 5/12 lines 11-24, generates a sensitive latent representation and a non-sensitive latent representation/first latent data and Fig.6 page 9/12 lines 10-20, generating a common latent data set by encoding the common input data set (e.g., FT_D, STV_D, LBL_D) through a first encoder of the first artificial intelligence neural network); receiving, from the second communication node, information on the second common latent data set generated based on the common input data set in the second encoder of the second artificial neural network of the second communication node (Fig.6 page 9/12 lines 10-20, receiving a sensitive latent expression and non-sensitive latent expression/second common latent data set generated in the second communication node based on the common input data set (e.g., FT_D, STV_D, LBL_D) and Fig.7 page 9/12 lines 30-33, the artificial intelligence model training device includes feature data (FT_D), sensitive data (STV_D), label data (LBL_D), sensitive latent representation (STV_LTR), non-sensitive latent representation (NSTV_LTR), restored feature data (rFT_D), and restored sensitive data); and updating the first artificial neural network based on a relationship between the first and second common latent data sets (Fig.6 page 8/12 lines 22-31, learning/updating the artificial intelligence model/first artificial neural network based on a relationship between the sensitive latent representation (STV_LTR) and non-sensitive latent representation (NSTV_LTR)/first and second common latent data sets, e.g., sensitive latent expression and non-sensitive latent expression, and Fig.6-7 page 9/12 lines 1-33).

Regarding claim 7, Hao, Oh and Han disclosed all the elements of claim 1 as stated above. Oh further discloses after the transmitting of the first feedback signal, receiving, from the second communication node, information on a third common latent data set generated based on the common input data set in a second encoder of the second artificial neural network of the second communication node (Fig.2-4 page 5/12 lines 11-24, after the transmitting of the first feedback signal, receiving information on a third restored characteristic data and third restored sensitive data/third common latent data set from the data restoration unit 200/second communication node based on the common input data set (e.g., FT_D, STV_D, LBL_D) in a second encoder of the second artificial intelligence neural network of the second communication node and Fig.9 page 10/12 lines 7-8, receiving a third restored characteristic data and third restored sensitive data/third common latent data set); and performing an update procedure for the first artificial neural network based on at least the information on the third common latent data set (Fig.7 page 9/12 lines 30-33, updating the artificial intelligence model learning device based on the information on the third restored characteristic data and third restored sensitive data/third common latent data set and Fig.9 page 10/12 lines 7-8).
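Claims 3-4 and 7, as mapped above, describe a pre-training/update loop in which both nodes encode the same previously shared common input data set and the first network is then updated based on the relationship between the resulting first and second common latent data sets. A toy numpy sketch of that alignment step follows; the linear encoders, squared-error measure of the "relationship", and plain gradient descent are all assumptions for illustration, not anything specified by the application or the references:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes for the toy setup.
INPUT_DIM, LATENT_DIM, N = 8, 3, 64

# Common input data set previously shared between the two nodes.
common_input = rng.normal(size=(N, INPUT_DIM))

W1 = rng.normal(size=(INPUT_DIM, LATENT_DIM))  # first node's encoder
W2 = rng.normal(size=(INPUT_DIM, LATENT_DIM))  # second node's encoder (fixed)

def latent_set(W):
    """Common latent data set: the shared inputs pushed through an encoder."""
    return common_input @ W

# Update the first network based on the relationship (squared error)
# between the first and second common latent data sets.
lr = 0.1
for _ in range(500):
    diff = latent_set(W1) - latent_set(W2)
    grad = common_input.T @ diff / N     # gradient of the squared-error loss
    W1 -= lr * grad

mismatch = np.mean((latent_set(W1) - latent_set(W2)) ** 2)
```

After the loop the first node's common latent data set closely matches the set reported by the second node, which is the precondition for the second node's decoder to interpret latent feedback produced by the first node's encoder.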
Regarding claim 10, Hao, Oh and Han disclosed all the elements of claim 1 as stated above wherein Oh further discloses determining whether a feedback procedure based on a fallback mode is required (Fig.2-4 page 5/12 lines 16-27, determining whether a feedback procedure based on a loss function/fallback mode is required); in response to determining that the feedback procedure based on the fallback mode is required, identifying latent variables included in a second common latent data set based on the common input data set at the second communication node (Fig.2-4 page 5/12 lines 16-27, calculating/identifying first sensitive latent representation and the first non-sensitive latent representation i.e., latent variables included in a second common latent data set based on the common input data set (e.g., FT_D2, STV_D2, LBL_D2) at the second communication node in response to determining that the feedback procedure based on the loss function/fallback mode is required); generating second latent data from second input data based on the latent variables (Fig.2-4 page 4/12 lines 57-59 to Fig.2-4 page 5/12 lines 1-5, generating a sensitive latent expression and non-sensitive latent expression/ second common latent data set based on the latent parameter values and Fig.1-2 page 6/12 lines 33-38); generating a second feedback signal including the second latent data (Fig.2-4 page 5/12 lines 16-27, generating a sensitive latent expression and non-sensitive latent expression/ second common latent data and Fig.1-2 page 6/12 lines 33-38); and transmitting the second feedback signal to the second communication node (Fig.2-4 page 5/12 lines 16-27, transmitting the sensitive latent expression and non-sensitive latent expression/ second common latent data to the data restoration unit 200/second communication node and Fig.1-2 page 6/12 lines 33-38). 
Regarding claim 12, Hao, Oh and Han disclosed all the elements of claim 1 as stated above wherein Hao further discloses the first artificial neural network includes a first converter at a rear end of the first encoder (Fig.2&7A-B [0299]-[0300], the UE/ first artificial neural network includes a first converter at a rear end of the PMI/RI/first encoder). Additionally, Oh discloses the generating of the latent data comprises: generating first intermediate data based on the encoding operation on the first input data in the first encoder (Fig.1-2 page 5/12 lines 31-36, generating first intermediate data based on the first encoder (EC_1) encoding operation on the first input data (e.g., FT_D1, STV_D1, LBL_D1) in the first encoder); and inputting the first intermediate data to the first converter to convert the first intermediate data into the first latent data (Fig.2-4 page 5/12 lines 31-43, inputting the first intermediate data to the first converter to convert the first intermediate data into the first sensitive latent representation and the first non-sensitive latent representation i.e., first latent data and Fig.6-7 page 9/12 lines 1-33). Regarding claim 13, Hao, Oh and Han disclosed all the elements of claim 1 as stated above wherein Hao further discloses before the inputting of the first input data, generating a first converter to be used in the second communication (Fig.2&7A-B [0299]-[0300], before the inputting of the first input data, generating a first converter to be used in the base station/second communication); and transmitting information on the first converter to the second communication node (Fig.2&7A-B [0299]-[0300], transmitting information on the first converter to the base station/second communication). 
Additionally, Oh discloses wherein the first latent data is converted by the first converter provided from the first communication node before being input to the second decoder at the second communication node (Fig.5 page 7/12 lines 36-45, the first sensitive latent representation and the first non-sensitive latent representation i.e., first latent data is updated by the first converter provided from the latent expression generator 100/first communication node before being input to the DC_2/second decoder at the data restoration unit 200/second communication node and Fig.6 page 8/12 lines 11-13, the data restoration unit 200 includes a first decoder (DC_1), a second decoder (DC_2), a third decoder (DC_3), a fourth decoder (DC_4), a first prediction model (PM_1), and a second prediction model (PM_2)). Regarding claim 14, Hao, Oh and Han disclosed all the elements of claim 1 as stated above wherein Oh further discloses when the first artificial neural network further includes a first decoder and a second converter (Fig.6 page 8/12 lines 11-13, the data restoration unit 200/ first artificial intelligence neural network includes a first decoder (DC_1), a second decoder (DC_2), a third decoder (DC_3), a fourth decoder (DC_4), a first prediction model (PM_1), and a second prediction model (PM_2) a second converter), generating third latent data by inputting third input data to the first encoder (Fig.6 page 8/12 lines 22-31, generating a third reconstructed characteristic data and third reconstructed sensitive data/third common latent data set by inputting third input data to the EC_1/first encoder); generating second intermediate data by inputting the third latent data to the second converter (Fig.6 page 8/12 lines 22-31, generating the second sensitive latent representation and the second non-sensitive latent representation i.e., second latent data by inputting the third reconstructed characteristic data and third reconstructed sensitive data/third common latent data 
set to the second converter); and generating third output data corresponding to the third input data by inputting the second intermediate data to the first decoder (Fig.9 page 10/12 lines 7-8, transmitting a third restored characteristic data and third restored sensitive data/third output data corresponding to the third reconstructed characteristic data and third reconstructed sensitive data/third common latent data/input data by inputting the second sensitive latent representation and the second non-sensitive latent representation i.e., second latent data to the EC_1/first encoder). Regarding claim 15, Hao, Oh and Han disclosed all the elements of claim 1 as stated above wherein Oh further discloses before the inputting of the first input data, receiving, from the second communication node, information on a second common latent data set generated based on the common input data set in a second encoder of the second artificial neural network of the second communication node (Fig.2-4 page 5/12 lines 11-24, before the inputting of the first input data, receiving information from the data restoration unit 200/second communication node on the second sensitive latent representation and the second non-sensitive latent representation i.e., second latent data set generated based on the common input data set (e.g., FT_D2, STV_D2, LBL_D2) in a second EC_2/second encoder of the second artificial intelligence neural network of the data restoration unit 200/second communication node); and transmitting pre-training request information for pre-training the first artificial neural network to a first entity (Fig.2-4 page 5/12 lines 11-24, transmitting pre-training request information for pre-training prediction procedure the first artificial intelligence neural network to a first entity and Fig.6 page 9/12 lines 10-20, generating a common latent data set for a pre-training prediction procedure), wherein the pre-training request information includes information on the second common 
latent data set (Fig.6 page 9/12 lines 10-15, the pre-training request information includes information on the second sensitive latent representation and the second non-sensitive latent representation i.e., second common latent data set), and the pre-training is performed by the first entity based on the information on the second common latent data set (Fig.6 page 9/12 lines 10-20, the pre-training prediction procedure is performed by the latent expression generator 100/first entity based on the information on the second sensitive latent representation and the second non-sensitive latent representation i.e., second common latent data set). Regarding claim 16, Hao, Oh and Han disclosed all the elements of claim 1 as stated above wherein Oh further discloses after the transmitting of the first feedback signal, receiving, from the second communication node, information on a third common latent data set generated based on the common input data set in a second encoder of the second artificial neural network of the second communication node (Fig.2-4 page 5/12 lines 11-24, after the transmitting of the first feedback signal, receiving information from the data restoration unit 200/second communication node on the third reconstructed characteristic data and third reconstructed sensitive data/ third latent data set generated based on the common input data set (e.g., FT_D2, STV_D2, LBL_D2) in a second EC_2/second encoder of the second artificial intelligence neural network of the data restoration unit 200/second communication node); and transmitting update request information for updating the first artificial neural network to a first entity (Fig.6 page 8/12 lines 22-31, generating/transmitting a third reconstructed characteristic data and third reconstructed sensitive data/third common latent data set/updated request information for an update procedure for the first artificial intelligence neural network of the first communication node and Fig.7 page 9/12 lines 30-33, 
updating the artificial intelligence model learning device), wherein the update request information includes information on the third common latent data set, and the updating of the first artificial neural network is performed by the first entity based on the information on the third common latent data set (Fig.6 page 8/12 lines 22-31, the update request information includes information on the third reconstructed characteristic data and third reconstructed sensitive data/ third common latent data set, and the updating of the first artificial intelligence neural network is performed by the first entity based on the information on the third common latent data set and Fig.7 page 9/12 lines 21-33). Regarding claim 18, Hao discloses wherein an operation method of a first communication node (Fig.7A-B [0083]-[0084], an operation method of a UE/first communication node), comprising: receiving a first feedback signal from a second communication node (Fig.7A-B [0083]-[0084], receiving a CSI-RS feedback signal from a BS/second communication node for artificial intelligence and/or machine learning); wherein the first feedback information corresponds to second feedback information generated for a feedback procedure in the second communication node (Fig.7A-B&16 [0084][0085][0140], the CSI feedback(s) i.e., first feedback signal corresponds to the CSI feedback(s’) i.e., second feedback signal generated for a CSI feedback procedures in the base station/second communication node using an adaptive learning-based algorithm). 
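Claim 18, taken up above, is the receiver-side counterpart of claim 1: the first node receives the feedback signal, extracts the latent data, and runs a decoder to obtain restored data and hence the feedback information. A toy numpy sketch of that decode step follows, using a pseudoinverse as the decoder and an input chosen inside the encoder's row space so the restoration is exact; in the application both blocks are trained neural networks and reconstruction is approximate, so every choice here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sizes for the toy setup.
INPUT_DIM, LATENT_DIM = 8, 3

# Stand-in second encoder (at the second node) and first decoder
# (at the first node). A pseudoinverse pair is used only so that the
# decode step is exact for inputs lying in the encoder's row space.
W_enc = rng.normal(size=(LATENT_DIM, INPUT_DIM))
W_dec = np.linalg.pinv(W_enc)

# First input data (carrying the second feedback information), chosen
# in the encoder's row space so restored data equals the input exactly.
coeffs = rng.normal(size=LATENT_DIM)
first_input_data = W_enc.T @ coeffs

latent = W_enc @ first_input_data    # generated at the second node
restored = W_dec @ latent            # first restored data at the first node
```

For such in-subspace inputs `W_dec @ W_enc` acts as the identity, so the first node recovers the feedback information exactly; a trained decoder plays the same role approximately for realistic inputs.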
Although Hao discloses that the UE receives a CSI-RS feedback signal from a BS/second communication node for artificial intelligence and/or machine learning, Hao does not explicitly disclose the claim language of "latent data". In the same field of endeavor, Oh teaches obtaining first latent data included in the first feedback signal (Fig.1-2 page 6/12 lines 33-38, obtaining the first sensitive latent expression and the first non-sensitive latent expression, i.e., first latent data, included in the feedback, i.e., first feedback signal, and Fig.1&5-6 page 7/12 lines 40-54); performing a decoding operation on the first latent data based on a first decoder of a first artificial neural network corresponding to the first communication node (Fig.2-4 page 5/12 lines 11-43, performing a decoding operation on the first sensitive latent representation and the first non-sensitive latent representation, i.e., first latent data, included in the feedback (FB) signal based on a DC_1/first decoder of a first artificial intelligence neural network corresponding to the latent expression generation unit 100/first communication node); and obtaining first feedback information based on first restored data output from the first decoder (Fig.2-4 page 5/12 lines 11-43, obtaining first feedback information based on first restored label data, e.g., the restored label data output from the DC_1/first decoder; Fig.5 page 7/12 lines 14-30 and Fig.6 page 8/12 lines 34-39), the second communication node generates the first latent data included in the first feedback signal by encoding first input data including the second feedback information through a second encoder of a second artificial neural network corresponding to the second communication node (Fig.2-4 page 5/12 lines 11-24, generates a sensitive latent representation and a non-sensitive latent representation/first latent data included in the first feedback signal and Fig.5 page 7/12 lines 40-54, the first encoder (EC_1) encodes first input data (e.g., FT_D2, STV_D2, LBL_D2) including the second feedback (FB) information from the loss calculation unit 300 through a second encoder (EC_2) of a second artificial intelligence neural network corresponding to the data restoration unit 200/second communication node; Fig.5 page 7/12 lines 14-30 and Fig.6 page 8/12 lines 34-39 and Fig.5-6 page 6/12 lines 61-63 to 7/12 lines 1-2, Encoder_1 (EC_1) and Decoder_1 (DC_1) in a reference model (RM), i.e., a first artificial intelligence neural network, and Encoder_2 (EC_2) and Decoder_2 (DC_2) in a contrast model (CM), i.e., a second artificial intelligence neural network).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hao to incorporate the teaching of Oh in order to improve the prediction performance of an artificial intelligence device. It would have been beneficial to generate a sensitive latent representation and a non-sensitive latent representation/first latent data included in the first feedback signal, with the first encoder (EC_1) encoding first input data (e.g., FT_D2, STV_D2, LBL_D2) including the second feedback (FB) information from the loss calculation unit 300 through a second encoder (EC_2) of a second artificial neural network corresponding to the data restoration unit 200/second communication node, as taught by Oh, incorporated in the system of Hao, to improve the privacy-masked predicted network service.
(Oh, Fig.1-2 page 3/12 lines 36-37, Fig.2-4 page 5/12 lines 11-24, Fig.5-6 page 6/12 lines 61-63 to 7/12 lines 1-2, Fig.5 page 7/12 lines 14-30, Fig.5 page 7/12 lines 40-54 and Fig.6 page 8/12 lines 34-39)

However, Hao and Oh do not explicitly disclose wherein the first input data includes first common input data included in a common input data set previously shared between the first communication node and the second communication node. In the same field of endeavor, Han teaches wherein the first input data includes first common input data included in a common input data set previously shared between the first communication node and the second communication node (Fig.1&6 [0104], the first input data includes an input global data set included in the global data set previously shared, e.g., as the trained local model, between the terminal device 60/first communication node and the server/second communication node; Fig.1 [0010], [0024], Fig.3 [0089], and Fig.1 [0046], an input data set 110 is input to the neural network model 120 to perform a training or an inference function).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hao and Oh to incorporate the teaching of Han in order to improve the accuracy of the updated global model. It would have been beneficial to use first input data that includes the input global data set included in the global data set previously shared, e.g., as the trained local model, between the terminal device 60/first communication node and the server/second communication node, with an input data set 110 input to the neural network model 120 to perform a training or an inference function, as taught by Han, incorporated in the system of Hao and Oh, to achieve sufficient accuracy improvements.
(Han, Fig. 1, [0010], [0024], [0046]; Figs. 1-2, [0088]; Fig. 3, [0089]; Figs. 1 & 6, [0104].)

Regarding claim 19, Hao, Oh and Han disclose all the elements of claim 18 as stated above, and Oh further discloses: before the receiving of the first feedback signal, generating a first common latent data set for a pre-training procedure for the second artificial neural network of the second communication node by encoding the common input data set through a first encoder of the first artificial neural network (Figs. 2-4, page 5/12, lines 11-24: before the receiving of the first feedback signal, generating a sensitive latent representation and a non-sensitive latent representation/first latent data; Fig. 6, page 9/12, lines 10-20: generating a common latent data set for a pre-training prediction procedure for the second artificial neural network by encoding the common input data set (e.g., FT_D, STV_D, LBL_D) through a first encoder of the first artificial neural network); and transmitting the first common latent data set to the second communication node (Figs. 2-4, page 5/12, lines 11-24: transmitting the sensitive and non-sensitive latent representations/first latent data to the data restoration unit 200/second communication node), wherein the pre-training procedure is performed based on the first common latent data set and a second common latent data set generated in the second communication node based on the common input data set (Fig. 6, page 9/12, lines 10-20: the pre-training prediction procedure is performed based on the sensitive and non-sensitive latent representations/first common latent data set and a sensitive and non-sensitive latent expression/second common latent data set generated in the second communication node based on the common input data set (e.g., FT_D, STV_D, LBL_D); Fig. 7, page 9/12, lines 30-33: the artificial intelligence model training device includes feature data (FT_D), sensitive data (STV_D), label data (LBL_D), sensitive latent representation (STV_LTR), non-sensitive latent representation (NSTV_LTR), restored feature data (rFT_D), and restored sensitive data).

Regarding claim 20, Hao, Oh and Han disclose all the elements of claim 18 as stated above, and Oh further discloses: after the obtaining of the first feedback information, generating a third common latent data set for an update procedure for the second artificial neural network of the second communication node by encoding the common input data set through a first encoder of the first artificial neural network (Fig. 6, page 8/12, lines 22-31: after the obtaining of the first feedback information, generating third reconstructed characteristic data and third reconstructed sensitive data/third common latent data set for an update procedure for the second artificial neural network of the second communication node; Fig. 7, page 9/12, lines 30-33: updating the artificial intelligence model learning device; Fig. 9, page 10/12, lines 7-8: generating third restored characteristic data and third restored sensitive data/third common latent data set); and transmitting information on the third common latent data set to the second communication node (Figs. 2-4, page 5/12, lines 11-24: transmitting the third common latent data to the data restoration unit 200/second communication node; Fig. 9, page 10/12, lines 7-8: transmitting third restored characteristic data and third restored sensitive data/third common latent data set), wherein the information on the third common latent data set includes first identification information on the common input data set in a state corresponding to the third common latent data set (Figs. 9-10, page 10/12, lines 33-48), and the first identification information is used to determine whether an update for the second artificial neural network is required in the second communication node (Figs. 6-7, page 9/12, lines 30-33: determining whether an update/learning of the artificial intelligence model/second artificial neural network is required in the second communication node; Fig. 6, page 9/12, lines 47-53: updating parameters of the second artificial neural network).

Allowable Subject Matter

Claims 5, 6, 8, 9, 11 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
- Hong et al. (Pub. No. US 2022/0029665 A1): Deep Learning Based Beamforming Method and Apparatus.
- Kim et al. (Pub. No. US 2022/0150129 A1): Adaptive Deep Learning Inference Apparatus and Method in Mobile Edge Computing.
- Song et al. (Pub. No. US 2022/0104213 A1): Method of Scheduling Plurality of Packets Related to Tasks of Plurality of User Equipments Using Artificial Intelligence and Electronic Device Performing the Method.
- Xu et al. (Pub. No. US 2022/0369346 A1): Method, Apparatus, and System for Sending Sidelink Channel State Information Report.
- Pezeshki et al. (Pub. No. US 2022/0150727 A1): Machine Learning Model Sharing Between Wireless Nodes.
- Chen et al. (U.S. Patent No. US 11984955 B2): Configurable Neural Network for Channel State Feedback (CSF) Learning.
- Chakraborty et al. (U.S. Patent No. US 12052583 B2): Radio Mapping Architecture for Applying Machine Learning Techniques to Wireless Radio Access Networks.
- Schmidt et al. (U.S. Patent No. US 12428034 B2): Method for Classifying a Behavior of a Road User and Method for Controlling an Ego Vehicle.
- Rezagholizadeh et al. (Pub. No. US 2020/0097554 A1): Systems and Methods for Multilingual Text Generation.
- Alabbasi et al. (Pub. No. US 2024/0340269 A1): Masking of Privacy Related Information for Network Services.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VANNEILIAN LALCHINTHANG, whose telephone number is (571) 272-6859. The examiner can normally be reached Monday-Friday, 10 AM-6 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Edan Orgad, can be reached at (571) 272-7884. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/V.L/ Examiner, Art Unit 2414
/EDAN ORGAD/ Supervisory Patent Examiner, Art Unit 2414
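The rejection maps claims 18-20 onto an exchange between two communication nodes: a common input data set is pre-shared, a first encoder produces a common latent data set for pre-training, and identification information on the common input data set gates whether the second node updates its network. As a rough illustration only (the toy encoder, variable names, and digest-based identification below are hypothetical conveniences, not taken from the cited references), that flow might be sketched as:

```python
# Hypothetical sketch of the claim 18-20 flow; names and the toy
# encoder are illustrative, not from Hao, Oh, or Han.
import hashlib

def encode(encoder_weights, x):
    # Toy "first encoder": a weighted sum standing in for a neural encoder.
    return [sum(w * xi for w, xi in zip(encoder_weights, x))]

# Common input data set previously shared between both communication nodes
# (the limitation for which Han is cited).
common_input_set = [[1.0, 2.0], [0.5, -1.0], [3.0, 0.0]]

# Node 1: encode the common input set through its first encoder to obtain
# the first common latent data set used for pre-training (claim 19).
enc1_weights = [0.4, 0.6]
first_common_latent = [encode(enc1_weights, x) for x in common_input_set]

def ident(data):
    # Identification information on the common input data set in the
    # state corresponding to the latent set (claim 20); here, a digest.
    return hashlib.sha256(repr(data).encode()).hexdigest()[:8]

id_at_encoding = ident(common_input_set)

# Node 2: compare identification info against its own copy to decide
# whether an update of the second artificial neural network is required.
update_required = ident(common_input_set) != id_at_encoding
print(update_required)  # False while both nodes hold the same common set
```

The digest comparison is one plausible reading of how "first identification information" could tell the second node that its latent data corresponds to a stale common input set; the application itself may specify a different mechanism.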

Prosecution Timeline

- Aug 02, 2023: Application Filed
- Feb 11, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

- Patent 12587913: SCG FAILURE HANDLING WITH CONDITIONAL PSCELL CONFIGURATION (granted Mar 24, 2026; 2y 5m to grant)
- Patent 12574969: WIRELESS COMMUNICATION METHOD AND WIRELESS COMMUNICATION TERMINAL (granted Mar 10, 2026; 2y 5m to grant)
- Patent 12513753: DEVICE PAIRING TECHNIQUES (granted Dec 30, 2025; 2y 5m to grant)
- Patent 12506676: METHOD AND APPARATUS FOR SCHEDULING PACKETS FOR TRANSMISSION (granted Dec 23, 2025; 2y 5m to grant)
- Patent 12507105: METHOD FOR CELL MEASUREMENT, TERMINAL DEVICE AND NETWORK DEVICE (granted Dec 23, 2025; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

- Expected OA Rounds: 1-2
- Grant Probability: 79%
- With Interview: 93% (+14.3%)
- Median Time to Grant: 2y 11m
- PTA Risk: Low

Based on 410 resolved cases by this examiner. Grant probability derived from career allow rate.
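The projection figures are mutually consistent if the interview lift is read as additive percentage points on the career allow rate (an assumption; the tool's exact model is not stated):

```python
# Reproduce the projection figures from the examiner's career data.
# Assumption: the +14.3% interview lift adds percentage points to the
# base grant probability; the tool's exact model is not stated.
granted, resolved = 323, 410          # career totals shown above
base_rate = 100 * granted / resolved  # career allow rate, in percent

interview_lift = 14.3                 # percentage points
with_interview = base_rate + interview_lift

print(round(base_rate))       # 79  -> "Grant Probability"
print(round(with_interview))  # 93  -> "With Interview"
```

That is, 323/410 ≈ 78.8% rounds to the displayed 79%, and 78.8 + 14.3 ≈ 93.1 rounds to the displayed 93%.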
