Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-5, 7-11, 13-19 are pending.
Response to Arguments
Applicant's arguments filed 12/30/2025 have been fully considered but they are not persuasive.
In the Remarks, Applicant argues:
"some parameters (such as initial ML model, data type list, maximum response time window, etc.) to help the local model training for Federated Leaming)" of 3GPP TR 23.700- 91 cannot reasonably correspond to "second data information collected by the second communication apparatus and information related to Al model processing of the second communication apparatus" of claim 1.
Thus, 3GPP TR 23.700-91 does not disclose or suggest "wherein the AI collaboration information comprises: second data information collected by the second communication apparatus and information related to AI model processing of the second communication apparatus."
The examiner respectfully disagrees.
The argument amounts to a mere allegation, without analysis as to why the specific teachings of the reference cannot reasonably correspond to the claimed limitation.
The claim language is rather broad, offering no further detail regarding the nature of the second data information or of the information related to the AI model.
3GPP TR 23.700-91 discloses that the Server sends initial ML models, a data type list, and time parameters, which squarely correspond to the claimed “information related to AI model processing of the server,” because the initial model was formulated by the server, along with further parameters related to said model.
Furthermore, in Steps 8-11, the Client also receives aggregated training result information collected by the Server, which the Client uses to perform processing of the AI model.
Thus, the claimed limitations as amended are met.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 7-11, and 13-19 are rejected under 35 U.S.C. 103 as being unpatentable over 3GPP TR 23.700-91 V17.0.0 (2020-12), an IDS entry, in view of Sharma et al. (US 2022/0167211).
As to claims 1 and 15:
A method and a non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores program instructions for being executed by at least one processor to perform operations comprising (Section 6.24 - Federated Learning among Multiple NWDAF Instances with Client NWDAF in collaboration with Server NWDAF): collecting, by a first communication apparatus, first data information; (See page 135, “8. Each Client NWDAF collects its local data by using the current mechanism in clause 6.2, TS 23.288 [5].”, i.e., first data information)
receiving, by the first communication apparatus, artificial intelligence (AI) collaboration information from a second communication apparatus; (Page 135, Step 7c and Steps 10-11, i.e., the Server NWDAF sends to the selected Client NWDAFs that participate in the Federated Learning according to steps 7a and 7b some parameters (such as initial ML model, data type list, maximum response time window, etc.) to help the local model training for Federated Learning, as well as the aggregated training results it collected per iteration.)
and processing, by the first communication apparatus, an AI model of the first communication apparatus based on the first data information and the AI collaboration information. (Steps 8-12: during the Federated Learning training procedure, each Client NWDAF iteratively processes an AI model by training the initial ML model received from the Server NWDAF based on its own collected data and the aggregated information sent by the Server.)
wherein the AI collaboration information comprises: second data information collected by the second communication apparatus (See page 135, Steps 10-11: the server aggregates, i.e., collects, training results to form aggregated model information and sends it to each Client) and information related to AI model processing of the second communication apparatus. (See page 135, per the description in Step 7c: some parameters (such as initial ML model, data type list, maximum response time window, etc.) to help the local model training for Federated Learning.)
3GPP TR 23.700-91 V17.0.0 refers to the first and second communicating entities as Server/Client NWDAFs without referring to them as first and second communication devices. However, it is understood in the art that these Functions are embodied in separate devices.
Sharma, in the related field of federated learning with NWDAFs, discloses that each NWDAF is embodied in a device (node) with a processor/memory (i.e., a CRM) and communication interfaces per ¶0030, 0053, 0090, 0092, 0294-0296.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention that the NWDAF nodes of 3GPP TR 23.700-91 V17.0.0 are embodied in hardware communication devices. See Figure 6.24.1.2-1 (General procedure for Federated Learning among Multiple NWDAF Instances) of 3GPP TR 23.700-91 V17.0.0, or Fig. 1 of Sharma, where the central (server) NWDAF and the distributed NWDAFs exchange information over communication links, indicating that they are embodied in separate communication devices, as is even more apparent in Sharma (¶0294-0296).
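For illustration only (this sketch is not part of the record and all names in it are hypothetical), the Federated Learning exchange mapped above from Section 6.24 of 3GPP TR 23.700-91 can be summarized as: the server distributes the current model and training parameters to each client, each client trains on its locally collected data, and the server aggregates the returned results and redistributes them each iteration.

```python
# Hypothetical sketch of the FL loop in 3GPP TR 23.700-91 Section 6.24.
# The "training" step is a toy placeholder, not any standardized algorithm.

class ClientNWDAF:
    def __init__(self, local_data):
        self.local_data = local_data  # Step 8: locally collected data

    def train_locally(self, model, params):
        # Toy local update: nudge each weight toward the local data mean.
        mean = sum(self.local_data) / len(self.local_data)
        return [w + params["lr"] * (mean - w) for w in model]

def server_round(global_model, clients, params):
    """One FL iteration: distribute model + params, collect, aggregate."""
    local_results = []
    for client in clients:
        # Steps 7c / 11: server sends current model and parameters
        # (e.g. data type list, maximum response time window).
        updated = client.train_locally(global_model, params)
        local_results.append(updated)  # Steps 9-10: client reports results
    # Step 10: server aggregates the local training results (plain average).
    n = len(local_results)
    return [sum(w) / n for w in zip(*local_results)]

clients = [ClientNWDAF([1.0, 2.0]), ClientNWDAF([3.0, 5.0])]
model = [0.0, 0.0]
for _ in range(3):  # Step 12: the procedure repeats per iteration
    model = server_round(model, clients, {"lr": 0.5})
print(model)
```

The sketch reflects only the message flow relied on in the mapping: the server both collects (aggregates) client results and sends model-related parameters, while each client trains on its own first data information.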
As to claim 7:
A first communication apparatus, comprising: a communication apparatus and a processor, (Figure 6.24.1.2-1, Client NWDAF) wherein the communication apparatus is configured to collect first data information; (See page 135, “8. Each Client NWDAF collects its local data by using the current mechanism in clause 6.2, TS 23.288 [5].”, i.e., first data information)
the communication apparatus is further configured to receive artificial intelligence (AI) collaboration information from a second communication apparatus; (Page 135, Step 7c and Steps 10-11, i.e., the Server NWDAF sends to the selected Client NWDAFs that participate in the Federated Learning according to steps 7a and 7b some parameters (such as initial ML model, data type list, maximum response time window, etc.) to help the local model training for Federated Learning, as well as the aggregated training results it collected per iteration.) and the processor is configured to process an AI model of the first communication apparatus based on the first data information and the AI collaboration information. (Steps 8-12: during the Federated Learning training procedure, each Client NWDAF processes an AI model by iteratively training and updating the initial ML model received from the Server NWDAF based on its own collected data, as well as the data collected by the Server NWDAF, namely the ML training results aggregated in Step 10.)
wherein the AI collaboration information comprises: second data information collected by the second communication apparatus (See page 135, Steps 10-11: the server aggregates, i.e., collects, training results to form aggregated model information and sends it to each Client) and information related to AI model processing of the second communication apparatus. (See page 135, per the description in Step 7c: some parameters (such as initial ML model, data type list, maximum response time window, etc.) to help the local model training for Federated Learning.)
3GPP TR 23.700-91 V17.0.0 refers to the first and second communicating entities as Server/Client NWDAFs without referring to them as first and second communication devices. However, it is understood in the art that these Functions are embodied in separate devices.
Sharma, in the related field of federated learning with NWDAFs, discloses that each NWDAF is embodied in a device (node) with a processor/memory and communication interfaces per ¶0030, 0053, 0090, 0092, 0294-0296.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention that the NWDAF nodes of 3GPP TR 23.700-91 V17.0.0 are embodied in hardware communication devices. See Figure 6.24.1.2-1 (General procedure for Federated Learning among Multiple NWDAF Instances) of 3GPP TR 23.700-91 V17.0.0, or Fig. 1 of Sharma, where the central (server) NWDAF and the distributed NWDAFs exchange information over communication links, indicating that they are embodied in separate communication devices, as is even more apparent in Sharma (¶0294-0296).
As to claims 2, 8, 16:
3GPP TR 23.700-91 V17.0.0 in view of Sharma disclose all limitations of claim 1/7/15, wherein the first communication apparatus comprises:
a first-type communication interface; or a second-type communication interface,
(See Sharma, ¶0289-0290: the network node has communication interfaces for exchanging communications with the core network or via other technology protocols)
wherein the first-type communication interface is useable by the first communication apparatus to receive the AI collaboration information from the second communication apparatus; and the second-type communication interface is configured to transmit the first data information between different functions in the first communication apparatus. (See Figure 6.24.1.2-1: General procedure for Federated Learning among Multiple NWDAF Instances of 3GPP TR 23.700-91 V17.0.0, or Fig. 1 of Sharma, wherein the client NWDAF receives the AI collaboration information from the central (server) NWDAF over communication links and transmits data to other functions via its respective interfaces.)
As to claims 3, 9, 17:
3GPP TR 23.700-91 V17.0.0 in view of Sharma disclose all limitations of claim 2/8/16, wherein the first communication apparatus further comprises:
a first AI function, and a first communication function, (See 3GPP TR 23.700-91 V17.0.0, page 135, Step 9 corresponding to Figure 6.24.1.2-1, which shows that the Client NWDAF has an AI function that retrieves the ML model from the server via its communication function and trains the AI model)
and the second communication apparatus comprises:
a second AI function; and the receiving, by the first communication apparatus, the AI collaboration information from the second communication apparatus comprises:
receiving, by the first AI function, the AI collaboration information from the second AI function through the first-type communication interface. (See 3GPP TR 23.700-91 V17.0.0, Figure 6.24.1.2-1, page 135, Step 7c, which shows that the server NWDAF has an AI function that sends the initial ML model and other ML data. See Step 9 corresponding to Figure 6.24.1.2-1, which shows that the Client NWDAF has an AI function that retrieves the ML model from the server via its communication function and trains the AI model)
As to claims 4, 10, 18:
3GPP TR 23.700-91 V17.0.0 in view of Sharma disclose all limitations of claim 2/8/16, wherein the first communication apparatus further comprises: a first AI function, and a first communication function; (See 3GPP TR 23.700-91 V17.0.0, page 135, Steps 7-9 corresponding to Figure 6.24.1.2-1, which show that the Client NWDAF has an AI function that collects data and retrieves the ML model from the server via its communication function and trains the AI model)
and the collecting, by the first communication apparatus, the first data information comprises: collecting, by the first communication function, the first data information; and sending, by the first communication function, the first data information to the first AI function through the second-type communication interface. (See 3GPP TR 23.700-91 V17.0.0, Figure 6.24.1.2-1, page 135, Step 7c, which shows that the client NWDAF has an AI function that collects data locally as well as from the NRF and NFs via its communication interfaces. See Step 9 corresponding to Figure 6.24.1.2-1, which shows that the Client NWDAF has an AI function that retrieves the ML model from the server via its communication function and trains the AI model)
As to claims 5, 11, 19:
3GPP TR 23.700-91 V17.0.0 in view of Sharma disclose all limitations of claim 1/7/15, wherein the method further comprises: sending, by the first communication apparatus to the second communication apparatus, semantic information obtained through the AI model processing of the first communication apparatus or a result of the AI model processing of the first communication apparatus. (See 3GPP TR 23.700-91 V17.0.0, page 135, Steps 9-10: the Client NWDAF reports the results of training the model to the server NWDAF)
As to claim 13:
3GPP TR 23.700-91 V17.0.0 in view of Sharma disclose all limitations of claim 12, wherein the information related to the AI model processing of the second communication apparatus comprises one or more of the following: semantic information obtained through the AI model processing of the second communication apparatus, action information of the second communication apparatus, reward information obtained through action execution by the second communication apparatus, reward information predicted by the second communication apparatus, or preset target information of the second communication apparatus. (See 3GPP TR 23.700-91 V17.0.0 , page 135, some parameters (such as initial ML model, data type list, maximum response time window, etc.) to help the local model training for Federated Learning, wherein the maximum response time window is the time window of the server that dictates the maximum allowed period of time for response).
As to claim 14:
3GPP TR 23.700-91 V17.0.0 in view of Sharma disclose all limitations of claim 13, wherein the preset target information of the second communication apparatus comprises one or more of the following information: an energy saving target, a quality of service target, or a reliability target of the second communication apparatus. (See 3GPP TR 23.700-91 V17.0.0 , page 135, wherein the maximum response time window is a reliability indicator imposed by the server that dictates the maximum allowed period of time for response).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 10271362 - A datacenter, a communication apparatus, a communication method, and a communication control method in a communication system are provided that can enhance the versatility of a datacenter and a virtual network constructed therein. A communication system includes: a plurality of wireless communication facilities owned by a plurality of network operators, respectively; and a datacenter in which a virtual core network is constructed, wherein the virtual core network implements mobile communication functions by using the plurality of wireless communication facilities.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUAN M HUA whose telephone number is (571)270-7232. The examiner can normally be reached 10:30-6:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anthony Addy can be reached at 571-272-7795. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/QUAN M HUA/Primary Examiner, Art Unit 2645