DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on September 12, 2024 is being considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 39 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 39 depends from Claim 38, which has been canceled, rendering the full scope of the claim unclear.
Claims 2, 7, 10, 31, 33, 35, 36, 39, and 40 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claims 2 and 31 recite the limitation “{NWDAF} collocated with at least one of” followed by ten different network entities in the alternative, but the claims then further require that the NWDAF is deployed at “a PCC” and that the second network node is either “the MME” or “the AMF.” This language contradicts the alternative language immediately preceding it, and it is unclear whether the first and second nodes are collocated with each other, or how the limitations should be interpreted if nodes other than those in the wherein clause are invoked, rendering the claims indefinite. Claims 7 and 36, which depend from Claims 2 and 31, require that the NWDAF be deployed at an O-RAN node and that the second network node be a “Non-RT RIC.” This further contradicts the claims from which they depend, making the limitations indefinite.
In the same vein, claim 10 and claims 33, 35, 39, and 40 are rejected as indefinite because they depend from claims 7 and 31, respectively. Where there is a great deal of confusion and uncertainty as to the proper interpretation of the limitations of a claim, it would not be proper to reject such a claim on the basis of prior art. As stated in In re Steele, 305 F.2d 859, 134 USPQ 292 (CCPA 1962), a rejection under 35 U.S.C. 103 should not be based on considerable speculation about the meaning of terms employed in a claim or assumptions that must be made as to the scope of the claims. MPEP 2173.06. The lack of an art rejection for a claim is not to be considered an indication of allowability at this juncture.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 10, 13, 17, 29-30, 35, 39, and 42 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hu et al. (CN 110505688).
Regarding Claim 1, Hu discloses a method at a first network node for facilitating a second network node in paging a user equipment (UE), the method comprising: collecting, from one or more network nodes, paging information for the UE; (Pg. 10, “Step 301: obtaining the data sample of UE; Here, the data sample is the history paging area information of the UE”), determining a machine learning (ML) model at least partially based on the paging information;
(Pg. 11, “For example, it is possible to select a machine learning algorithm, training the data samples to obtain a linear regression model, and then using the obtained model to predict the paging optimization information (paging area) in a specific time period”), and transmitting, to the second network node the determined ML model and/or a configuration that is derived from the ML model for use by the second network node in paging the UE (Pg. 5 “In the above scheme, the system further comprising: a UDM, UDR, PCF said AMF is further used for: obtains the paging optimization information from one of the following network elements: NWDA;” and Pg. 10 “Here, the determined paging optimization information is used for AMF to determine paging range and paging the UE according to the paging range;”).
Regarding Claim 10, Hu further teaches that the ML model is trained for extracting at least one of: - network user-level traffic space-time distribution; - user mobility characteristics and/or models; - user service types and/or models; and - user experience prediction models.
(Pg. 11, “For example, it is possible to select a machine learning algorithm, training the data samples to obtain a linear regression model, and then using the obtained model to predict the paging optimization information (paging area) in a specific time period”).
Regarding Claim 13, Hu teaches the invention of Claim 1, further disclosing that the paging information comprises at least one of: - mobility information for one or more UEs comprising the UE (100); - statistical paging information for the one or more UEs, wherein the statistical paging information comprises at least one of a paging success ratio in each paging phase, a number of paging messages in each paging phase, and paging attempts in each paging phase;
- core network information indicating relationship between each tracking area (TA) and NB/gNB for a core network to which the first network node belongs; and - supplemental information that facilitates a Mobility Management Entity (MME) or an Access and Mobility Function (AMF) in linking the ML model to an Operation and Maintenance (OAM) configuration (Pg. 10 "The time addition, during practical application, UE id a periodic registration timer is according to the behaviour of the UE (e.g., mobility of the UE) to update. For example, the time when especially frequent mobility of the UE, the UE id of the periodic registration timer needs to be shortened, the UE moving frequently, the time the UE id of the periodic registration timer may be longer, i.e., the cell in the cell list and the tracking area list can be updated once for a long time").
Regarding Claim 17, Hu discloses the step of determining the ML model for the UE comprises: analyzing mobility information for the UE; (Pg. 10 "The time addition, during practical application, UE id a periodic registration timer is according to the behaviour of the UE (e.g., mobility of the UE) to update. For example, the time when especially frequent mobility of the UE, the UE id of the periodic registration timer needs to be shortened, the UE moving frequently, the time the UE id of the periodic registration timer may be longer, i.e., the cell in the cell list and the tracking area list can be updated once for a long time") evaluating statistical paging information to simulate paging at one or more confidence levels; (Pg. 11, “For example, it is possible to select a machine learning algorithm, training the data samples to obtain a linear regression model, and then using the obtained model to predict the paging optimization information (paging area) in a specific time period”), and determining the ML model for the UE at least partially based on the analyzed mobility information and/or the evaluated statistical paging information (Pg. 11, “For example, it is possible to select a machine learning algorithm, training the data samples to obtain a linear regression model, and then using the obtained model to predict the paging optimization information (paging area) in a specific time period”).
Regarding Claim 29, Hu discloses a first network node configured to facilitate a second network node in paging a user equipment (UE), the first network node comprising: (Pg. 5 “NWDA, for data sample for obtaining UE; said data sample is the historical paging area information of the UE according to the network slice granularity, using the data sample, and combining the machine learning algorithm, determining the UE paging optimization information within a certain time period, moving the characterization of optimizing information determining the UE within a specific time period;”), a processor; (Pg. 5 “The embodiment of the present invention further claims a network equipment, comprising: a second processor”), a memory storing instructions which, when executed by the processor, cause the first network node to: (Pg. 5 “The embodiment of the invention further claims a network equipment, comprising: a second processor and second memory of computer program for storage can be run on a processor, wherein the second processor is configured to run the computer program, any one side step executes the NWDA method”), collect, from one or more network nodes, paging information for the UE; (Pg. 10 “Step 301: obtaining the data sample of UE; Here, the data sample is the history paging area information of the UE”), determine a machine learning (ML) model at least partially based on the paging information; (Pg. 11 “For example, it is possible to select a machine learning algorithm, training the data samples to obtain a linear regression model, and then using the obtained model to predict the paging optimization information (paging area) in a specific time period”), and transmit, to the second network node, the determined ML model and/or a configuration that is derived from the ML model for use by the second network node in paging the UE (Pg. 5 “In the above scheme, the system further comprising: a UDM, UDR, PCF said AMF is further used for: obtains the paging optimization information from one of the following network elements: NWDA;” and Pg. 10 “Here, the determined paging optimization information is used for AMF to determine paging range and paging the UE according to the paging range;”).
Regarding Claim 30, Hu teaches a method at a second network node for paging a user equipment (UE), the method comprising: receiving, from a first network node, a machine learning (ML) model and/or a configuration that is derived from the ML model, for paging the UE; (Pg. 5 “said AMF is further used for: obtains the paging optimization information from one of the following network elements: NWDA;”), determining a paging profile at least partially based on the received ML model and/or configuration; (Pg. 10 “Here, the determined paging optimization information is used for AMF to determine paging range and paging the UE according to the paging range; The determined optimization information characterizes the movement of the UE within a particular time period”), and initiating a paging procedure for the UE at least partially based on the determined paging profile (Pg. 10 “Here, the determined paging optimization information is used for AMF to determine paging range and paging the UE according to the paging range; The determined optimization information characterizes the movement of the UE within a particular time period”).
Regarding Claim 35, Hu further teaches receiving, from a network management system, a paging profile for updating the paging profile stored at the second network node (Pg. 10 “Here, the determined paging optimization information is used for AMF to determine paging range and paging the UE according to the paging range; The determined optimization information characterizes the movement of the UE within a particular time period”).
Regarding Claim 39, Hu further teaches that the ML model is trained for extracting at least one of: - network user-level traffic space-time distribution; - user mobility characteristics and/or models; - user service types and/or models; and - user experience prediction models.
(Pg. 11, “For example, it is possible to select a machine learning algorithm, training the data samples to obtain a linear regression model, and then using the obtained model to predict the paging optimization information (paging area) in a specific time period”).
Regarding Claim 42, Hu teaches a second network node configured for initiating a paging procedure for a user equipment (UE), the second network node comprising: (Pg. 5 “said AMF is further used for: obtains the paging optimization information from one of the following network elements: NWDA;”), a processor; (Pg. 5 “The embodiment of the invention further claims a network device, comprising: a first memory, a first processor and a computer program for storage can be run on a processor;”), a memory storing instructions which, when executed by the processor, cause the second network node to: (Pg. 5 “wherein the first processor is configured to run the computer program performing the steps the AMF side of any one method”), receive, from a first network node, a machine learning (ML) model and/or a configuration that is derived from the ML model, for paging the UE; (Pg. 5 “said AMF is further used for: obtains the paging optimization information from one of the following network elements: NWDA;” and Pg. 10 “step 302: according to network slice particle size, using the data sample, and combining the machine learning algorithm, determining the paging of the UE within a certain time period optimization information”), determine a paging profile at least partially based on the received ML model and/or configuration; (Pg. 10 “Here, the determined paging optimization information is used for AMF to determine paging range and paging the UE according to the paging range; The determined optimization information characterizes the movement of the UE within a particular time period”), and initiate the paging procedure for the UE at least partially based on the determined paging profile (Pg. 10 “Here, the determined paging optimization information is used for AMF to determine paging range and paging the UE according to the paging range; The determined optimization information characterizes the movement of the UE within a particular time period”).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Hu et al. (CN 110505688) in view of Stojanovski et al. (US 2019/0191409).
Regarding Claim 4, Hu teaches the invention of Claim 1, further teaching wherein the paging information comprises at least one of: - location information in terms of tracking area (TA), eNB/gNB, or cell; - time information; and - UE service type (Pg. 11 “That is, NWDA uses the history of the UE paging region information collected, comprising a paging range corresponding at a time by the user, based on the machine learning algorithm, prediction UE paging optimisation information within a certain time period, that accurately predicts UE related to mobility mode and regular UE tracking area”). Hu, however, does not teach receiving, from a collocated mobility management module, paging information for the UE.
Stojanovski teaches receiving, from a collocated mobility management module, paging information for the UE (Par. [0018] “Other arrangements are possible, including arrangements in which two or more of the gNB-CU 106, CU-CP 107, CU-UP 108, gNB-DU 109 are co-located” and Par. [0069] “In accordance with some embodiments, a gNB 105 of a 3GPP network may include: memory; and processing circuitry. The gNB 105 may be configured with logical nodes. The logical nodes may include a gNB-CU 106 and a gNB-DU 109. The processing circuitry may decode a first paging message, wherein the first paging message is received at the gNB-CU 106 from an access management function (AMF) entity”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Stojanovski’s co-located nodes receiving paging information with Hu’s paging information including paging region and corresponding time to improve communications between a UE and network node.
Regarding Claim 33, Hu further teaches that the paging information comprises at least one of: - location information in terms of tracking area (TA), eNB/gNB, or cell; - time information; and - UE service type (Pg. 11 “That is, NWDA uses the history of the UE paging region information collected, comprising a paging range corresponding at a time by the user, based on the machine learning algorithm, prediction UE paging optimisation information within a certain time period, that accurately predicts UE related to mobility mode and regular UE tracking area”). Hu, however, does not teach transmitting, to the collocated NWDAF, paging information for the UE.
Stojanovski teaches transmitting, to the collocated NWDAF, paging information for the UE (Par. [0018] “Other arrangements are possible, including arrangements in which two or more of the gNB-CU 106, CU-CP 107, CU-UP 108, gNB-DU 109 are co-located” and Par. [0069] “In accordance with some embodiments, a gNB 105 of a 3GPP network may include: memory; and processing circuitry. The gNB 105 may be configured with logical nodes. The logical nodes may include a gNB-CU 106 and a gNB-DU 109. The processing circuitry may decode a first paging message, wherein the first paging message is received at the gNB-CU 106 from an access management function (AMF) entity”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Stojanovski’s co-located nodes receiving paging information with Hu’s paging information including paging region and corresponding time to improve communications between a UE and network node.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Hu et al. (CN 110505688) in view of Liu et al. (US 2022/0361262).
Regarding Claim 11, Hu teaches the invention of Claim 1, but does not teach the first network node is an AI server that is located separately from the second network node and wherein the paging information that has been collected is anonymized.
Liu teaches the first network node is an AI server that is located separately from the second network node and wherein the paging information that has been collected is anonymized (Par. [0022] “The AI entity could be described as a logical function entity that enables intelligent control and optimization of Radio Access Network (RAN) elements and resources via data collection. The AI entities can be in different geographic locations, integrated in different RAN nodes, or as separate entities, e.g., AI servers”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Liu’s first node being an AI server that is located separately with Hu’s network node system to support a much wider range of use-case characteristics and provide a more complex and sophisticated range of access requirements and flexibilities.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Hu et al. (CN 110505688) in view of Kumar et al. (US 2022/0377844).
Regarding Claim 18, Hu teaches the invention of Claim 17, but does not teach that an initial configuration of the ML model is configured by an OAM module, wherein the method further comprises: providing the OAM module with at least one of history of confidence levels, performance of the current paging procedure, and suggestion for paging profiles.
Kumar teaches an initial configuration of the ML model configured by an OAM module, wherein the method further comprises: providing the OAM module with at least one of history of confidence levels, performance of the current paging procedure, and suggestion for paging profiles (Par. [0089] “The OAM 602 may subsequently transmit, at 620, a model training response and a model training configuration to the model repository 612 (e.g., associated with aggregated weights and/or updated model rules)”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Kumar’s OAM module with Hu’s system of network nodes to better improve 5G NR technologies and the telecommunication standards that employ them.
Claims 20 and 46 are rejected under 35 U.S.C. 103 as being unpatentable over Hu et al. (CN 110505688) in view of Narayanan et al. (US 2024/0187127).
Regarding Claim 20, Hu teaches the invention of Claim 1, but does not teach training the ML model based on a cost function that is determined at least partially based on an amount of signaling for successfully paging the UE and/or a paging latency.
Narayanan teaches training the ML model based on a cost function that is determined at least partially based on an amount of signaling for successfully paging the UE and/or a paging latency (Par. [0242] "a WTRU may be configured to adapt AI processing such that the AI model performance may be traded-off to achieve one or more desired objective. For example, the AI model may learn from experience (e.g., observing data and/or environment) over a period of time. The performance of an AI model may evolve over a time period. A WTRU may adapt AI processing wherein the adaption may lead to a reduction in one or more of the following: a power consumption, memory usage, a latency, overhead or processing requirement(s), for example, at the cost of a reduction in AI model inference performance and/or an increase in signaling overhead. For example, when AI processing may be applied to wireless functions (e.g., one or more of the following: channel estimation, demodulation, RS measurements, HARQ, CSI feedback, positioning, beam management etc.), it may be possible to perform granular adjustment to tradeoff a model performance to achieve an objective. For example, the WTRU may adapt the processing to accomplish one or more of the following: a reduction in power consumption, a reduction memory/storage utilization, a reduction in Latency, or a reduction in processing power (e.g., computational resources)").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Narayanan’s AI training based on latency with Hu’s machine learning model receiving UE paging information to improve the performance of Hu’s machine learning model.
Regarding Claim 46, Hu teaches the invention of Claim 29, but does not teach training the ML model based on a cost function that is determined at least partially based on an amount of signaling for successfully paging the UE and/or a paging latency.
Narayanan teaches training the ML model based on a cost function that is determined at least partially based on an amount of signaling for successfully paging the UE and/or a paging latency (Par. [0242] "a WTRU may be configured to adapt AI processing such that the AI model performance may be traded-off to achieve one or more desired objective. For example, the AI model may learn from experience (e.g., observing data and/or environment) over a period of time. The performance of an AI model may evolve over a time period. A WTRU may adapt AI processing wherein the adaption may lead to a reduction in one or more of the following: a power consumption, memory usage, a latency, overhead or processing requirement(s), for example, at the cost of a reduction in AI model inference performance and/or an increase in signaling overhead. For example, when AI processing may be applied to wireless functions (e.g., one or more of the following: channel estimation, demodulation, RS measurements, HARQ, CSI feedback, positioning, beam management etc.), it may be possible to perform granular adjustment to tradeoff a model performance to achieve an objective. For example, the WTRU may adapt the processing to accomplish one or more of the following: a reduction in power consumption, a reduction memory/storage utilization, a reduction in Latency, or a reduction in processing power (e.g., computational resources)").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Narayanan’s AI training based on latency with Hu’s machine learning model receiving UE paging information to improve the performance of Hu’s machine learning model.
Allowable Subject Matter
Claims 21-28 and 47 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. 3GPP SA WG2 (“TR 23.791: Update of Solution 5 to Avoid Biased Data Sample,” Sophia Antipolis, August 20-24, 2018, Pgs. 1-3 (Year: 2018)) discloses “For example, the NWDAF derives the conclusion that a UE or a group UEs statistically have a higher paging failure at gNB 1 on weekday between 9-10am. If the RAN paging failure possibility exceeds a certain limit, the NWDAF notifies the network” (Pg. 3, Par. 6.5.1.1 General). Jeong et al. (Jaeseong Jeong et al., “Mobility Prediction for 5G Core Networks,” March 2021, IEEE Communications Standards Magazine, Pgs. 58-61 (Year: 2021)) discloses “In our proposed method, ML-assisted adaptive paging, we replace step 2 with a mobility prediction model, instead of paging a list of previously visited gNBs, we page gNBs based on predicted future trajectories” (Pg. 59, Par. Use Case 1: Paging).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH NGHIA DINH whose telephone number is (571)272-5607. The examiner can normally be reached Mon. - Fri. 7:30AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Appiah, can be reached at 571-272-7904. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.N.D./Examiner, Art Unit 2641
/MARGARET G WEBB/Primary Examiner, Art Unit 2641