DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-11, 18-22, and 27-28 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without "significantly more". Claims 1-11, 18-22, and 27-28 are directed to an abstract idea, such as an idea standing alone (e.g., an uninstantiated concept, plan, or scheme), as well as a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper," for example, generating a set of machine learning models and deploying them for use by a wireless communications device.
The apparatus and method claims 1, 27, and 28 recite the limitation, "generating a set of machine learning models for use in a wireless communications device based on one or more compute resource limits associated with a type of the wireless communications device; and deploying the generated set of machine learning models for use by the wireless communications device in performing one or more inferences with respect to wireless communications based on one or more inputs related to received wireless signals at the wireless communications device". The claims are directed to a process and a machine, which are statutory categories of invention (Step 1: YES).
The claim is then analyzed to determine whether it is directed to any judicial exception. The claim recites generating a set of machine learning models for use in a wireless communications device; and deploying the generated set of machine learning models for use by the wireless communications device in performing one or more inferences with respect to wireless communications. The generating step and deploying step recited in the claim are no more than an abstract idea, i.e., a mental process of processing data in a machine learning model (Step 2A, Prong One: Abstract Idea = Yes).
The claim is then analyzed to determine whether it recites additional elements, or a combination of additional elements, that apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on it, such that the claim is more than a drafting effort designed to monopolize the exception, i.e., limitations that are indicative of integration into a practical application, such as improvements to the functioning of a computer or to any other technology or technical field. In the current claims, there are no additional elements that would integrate the abstract idea into a practical application (Step 2A, Prong Two: Abstract Idea = Yes).
Next, the claim as a whole is analyzed to determine whether there are additional limitations recited in the claim such that the claim amounts to significantly more than the abstract idea. The claim requires the additional limitations of a computer with a central processing unit, memory, a printer, an input and output terminal, and a program. These generic computer components are claimed to perform the basic functions of storing, retrieving, and processing data through the program. In the current scenario, there are no additional elements that would amount to significantly more than the abstract idea. Therefore, the claim does not amount to significantly more than the abstract idea itself (Step 2B: No). Accordingly, the claim is not patent eligible.
Further, the dependent claims do not add any positive limitation or step within the scope of the claim that would carry patentable weight; accordingly, they are also rejected for the same reasons as the independent claims.
However, if applicant adds the limitations of claims 12-17 into the independent claims, it would overcome this rejection.
Claim 28 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
As per claim 28, the claim is drawn to a computer readable medium. However, the specification (see specifically Para 141) defines computer-readable media to include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. This definition covers both transitory and non-transitory media, and a transitory medium can be a signal, which does not fall within one of the four statutory classes of 35 U.S.C. § 101. Therefore, claim 28 is directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-11, 18-22, 27, and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Di Pietro et al. (Pub. No. US 20210092026 A1) in view of He et al. (Pub. No. US 20190086988 A1).
Regarding Claim 1, Di Pietro teaches a method (Figs. 3 and 4 and Paras 38 and 70: architecture 400 for training a machine learning model for on-premise execution in a network; Fig. 5 and Para 104 disclose that a device (e.g., device 200) may perform procedure 500 by executing stored instructions (e.g., process 248) to provide a network assurance service (e.g., service 302), i.e., a method) for wireless communications (Fig. 3 and Para 38: architecture 300 may support both wireless and wired networks, as well as LLNs/IoT networks), comprising:
generating a set of machine learning models for use in a wireless communications device (Fig. 5 Step 515 and Para 105, the service may generate a machine learning model for on-premise execution in a particular computer network to detect network issues in the particular network i.e., generating a set of machine learning models for use in the wireless communications device); and
deploying the generated set of machine learning models for use by the wireless communications device (Fig. 5 Step 520 and Para 106, the service may deploy the generated machine learning model to the particular computer network for on-premise execution) in performing one or more inferences with respect to wireless communications based on one or more inputs related to received wireless signals at the wireless communications device (Para 106, the service may do so by assigning a score to a trained model based in part on a number of emulated network issues detected by the trained model and/or based in part on user feedback. In turn, the service may compare the scores of its trained models to select one of the trained models for on-premise execution in the particular network based on its assigned score and deploy that model to the particular network i.e., in performing one or more inferences with respect to wireless communications based on one or more inputs related to received wireless signals at the wireless communications device).
Di Pietro does not specifically teach based on one or more compute resource limits associated with a type of the wireless communications device.
However, in the same field of endeavor, He teaches that device status may indicate a current resource capacity of the wireless communication device, measured by parameters including, for example, the battery power level, a measure of a processor load, a measure of memory use, a measure of a network connection quality, and/or other types of measures of the resource capacity of the wireless communication device. Different machine learning models may be associated with different expected resource use. For example, a first machine learning model may use less battery power, processor time, memory, and/or network bandwidth, etc., and a second machine learning model may use more battery power, processor time, memory, and/or network bandwidth, etc. The smart engine may select a machine learning model based on the determined device status to match an expected resource use of the selected machine learning model with the current resource capacity of the wireless communication device, i.e., compute resource limits associated with a type of the wireless communications device (Para 24).
Therefore, it would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Di Pietro with the method of He so as to improve the functioning of the wireless communication device by conserving resources when resource capacity is low and by using more accurate machine learning models when resource capacity is high (See He Para 24).
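For clarity of record, He's described selection (Para 24) amounts to matching a model's expected resource use against the device's current resource capacity. The following sketch is purely illustrative; the function and field names (select_model, expected_use, etc.) and the scoring rule are assumptions made for exposition and are not drawn from He's disclosure:

```python
def select_model(models, device_status):
    """Illustrative sketch of He's selection idea (Para 24): keep only the
    models whose expected resource use fits within the device's current
    capacity, then prefer the most capable model that still fits."""
    feasible = [
        m for m in models
        if all(m["expected_use"][k] <= device_status[k] for k in m["expected_use"])
    ]
    # "More accurate machine learning models when resource capacity is high":
    # pick the feasible model with the largest total expected resource use.
    return max(feasible, key=lambda m: sum(m["expected_use"].values()), default=None)

models = [
    {"name": "light", "expected_use": {"battery": 5, "memory": 10}},
    {"name": "heavy", "expected_use": {"battery": 20, "memory": 50}},
]
# A high-capacity device can run the heavier model; a constrained device
# falls back to the lighter one, matching the motivation to combine above.
```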
Regarding Claim 2, Di Pietro does not specifically teach wherein the one or more compute resource limits comprise one or more total compute resource limits defined for the generated set of machine learning models.
However, in the same field of endeavor, He teaches that the device status may indicate a current resource capacity of the wireless communication device, measured by parameters including, for example, the battery power level, a measure of a processor load, a measure of memory use, a measure of a network connection quality, and/or other types of measures of the resource capacity of the wireless communication device. Different machine learning models may be associated with different expected resource use. For example, a first machine learning model may use less battery power, processor time, memory, and/or network bandwidth, etc., and a second machine learning model may use more battery power, processor time, memory, and/or network bandwidth, etc. The smart engine may select a machine learning model based on the determined device status to match an expected resource use of the selected machine learning model with the current resource capacity of the wireless communication device i.e., the one or more compute resource limits comprise one or more total compute resource limits defined for the generated set of machine learning models (Para 24).
Therefore, it would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Di Pietro with the method of He so as to improve the functioning of the wireless communication device by conserving resources when resource capacity is low and by using more accurate machine learning models when resource capacity is high (See He Para 24).
Regarding Claim 3, Di Pietro does not specifically teach wherein the total one or more compute limits defined for the generated set of machine learning models comprises one or more of a first compute limit associated with total computational complexity, a second compute limit associated with total memory usage, and a third compute limit associated with a total number of machine learning models executed simultaneously.
However, in the same field of endeavor, He teaches that the device status may indicate a current resource capacity of the wireless communication device, measured by parameters including, for example, the battery power level, a measure of a processor load, a measure of memory use, a measure of a network connection quality, and/or other types of measures of the resource capacity of the wireless communication device. Different machine learning models may be associated with different expected resource use. For example, a first machine learning model may use less battery power, processor time, memory, and/or network bandwidth, etc., and a second machine learning model may use more battery power, processor time, memory, and/or network bandwidth, etc. The smart engine may select a machine learning model based on the determined device status to match an expected resource use of the selected machine learning model with the current resource capacity of the wireless communication device i.e., wherein the total one or more compute limits defined for the generated set of machine learning models comprises one or more of a first compute limit associated with total computational complexity, a second compute limit associated with total memory usage, and a third compute limit associated with a total number of machine learning models executed simultaneously (Para 24).
Therefore, it would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Di Pietro with the method of He so as to improve the functioning of the wireless communication device by conserving resources when resource capacity is low and by using more accurate machine learning models when resource capacity is high (See He Para 24).
Regarding Claim 4, Di Pietro does not specifically teach wherein the first compute limit comprises a maximum number of operations the wireless communications device can process over a time period.
However, in the same field of endeavor, He teaches from Fig. 8 that device status may be determined (block 815). For example, smart engine 420 may obtain device status information from device status module 430, such as a device mode, a battery level value, a processor load value, a memory use value, a network connection quality value, one or more application criticality values for an application running on UE device 110, and/or other types of device status information. Additionally, smart engine 420 may obtain device data from data acquisition module 450. The information from data acquisition module 450 may be used to determine whether particular parameters have changed within a time period, such as a last time a machine learning process was performed. A determination may be made as to whether to perform machine learning (block 820). As an example, smart engine 420 may determine whether machine learning needs to be performed based on a change in one or more parameters associated with the requested machine learning process i.e., the first compute limit comprises a maximum number of operations the wireless communications device can process over a time period (Para 110-111).
Therefore, it would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Di Pietro with the method of He so as to improve the functioning of the wireless communication device by conserving resources when resource capacity is low and by using more accurate machine learning models when resource capacity is high (See He Para 24).
Regarding Claim 5, Di Pietro teaches wherein generating the set of machine learning models comprises generating a set of machine learning models with properties that satisfy each of the one or more total compute resource limits (Para 105).
Regarding Claim 6, Di Pietro teaches wherein the one or more compute resource limits comprise one or more per-model compute resource limits (Para 105).
Regarding Claim 7, Di Pietro does not specifically teach wherein the one or more per-model compute resource limits comprises a per-model memory limit against which a maximum memory cost at a layer in a machine learning model is evaluated.
However, in the same field of endeavor, He teaches a first machine learning model may use less battery power, processor time, memory, and/or network bandwidth, etc., and a second machine learning model may use more battery power, processor time, memory, and/or network bandwidth, etc (Para 24).
Therefore, it would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Di Pietro with the method of He so as to improve the functioning of the wireless communication device by conserving resources when resource capacity is low and by using more accurate machine learning models when resource capacity is high (See He Para 24).
Regarding Claim 8, Di Pietro does not specifically teach wherein the one or more compute resource limits comprises a maximum per-model memory cost.
However, in the same field of endeavor, He teaches a first machine learning model may use less battery power, processor time, memory, and/or network bandwidth, etc., and a second machine learning model may use more battery power, processor time, memory, and/or network bandwidth, etc (Para 24).
Therefore, it would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Di Pietro with the method of He so as to improve the functioning of the wireless communication device by conserving resources when resource capacity is low and by using more accurate machine learning models when resource capacity is high (See He Para 24).
Regarding Claim 9, Di Pietro does not specifically teach wherein the one or more compute resource limits are associated with a reference time duration.
However, in the same field of endeavor, He teaches a first machine learning model may use less battery power, processor time, memory, and/or network bandwidth, etc., and a second machine learning model may use more battery power, processor time, memory, and/or network bandwidth, etc (Para 24).
Therefore, it would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Di Pietro with the method of He so as to improve the functioning of the wireless communication device by conserving resources when resource capacity is low and by using more accurate machine learning models when resource capacity is high (See He Para 24).
Regarding Claim 10, Di Pietro teaches wherein the reference time duration comprises a duration of a frame in a wireless communication system (Para 93).
Regarding Claim 11, Di Pietro teaches wherein the reference time duration comprises a portion of a duration of a frame in a wireless communication system (Para 93).
Regarding Claim 18, Di Pietro does not specifically teach further comprising: receiving, from the wireless communications device, information identifying the one or more compute resource limits.
However, in the same field of endeavor, He teaches device status may indicate a current resource capacity of the wireless communication device, measured by parameters including, for example, the battery power level, a measure of a processor load, a measure of memory use, a measure of a network connection quality, and/or other types of measures of the resource capacity of the wireless communication device. Different machine learning models may be associated with different expected resource use. For example, a first machine learning model may use less battery power, processor time, memory, and/or network bandwidth, etc., and a second machine learning model may use more battery power, processor time, memory, and/or network bandwidth, etc. The smart engine may select a machine learning model based on the determined device status to match an expected resource use of the selected machine learning model with the current resource capacity of the wireless communication device (Para 24).
Therefore, it would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Di Pietro with the method of He so as to improve the functioning of the wireless communication device by conserving resources when resource capacity is low and by using more accurate machine learning models when resource capacity is high (See He Para 24).
Regarding Claim 19, Di Pietro does not specifically teach wherein the information identifying the one or more compute resource limits comprises an indication of a type of the wireless communications device, wherein the indicated type of the wireless communications device is associated with a set of compute resource limits.
However, in the same field of endeavor, He teaches device status may indicate a current resource capacity of the wireless communication device, measured by parameters (Para 24).
Therefore, it would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Di Pietro with the method of He so as to improve the functioning of the wireless communication device by conserving resources when resource capacity is low and by using more accurate machine learning models when resource capacity is high (See He Para 24).
Regarding Claim 20, Di Pietro teaches wherein deploying the generated set of machine learning models comprises transmitting, to the wireless communications device, identifiers associated with models in the generated set of models (Para 80).
Regarding Claim 21, Di Pietro teaches wherein deploying the generated set of machine learning models comprises transmitting, to the wireless communications device, parameters associated with each model in the generated set of models (Para 80).
Regarding Claim 22, Di Pietro teaches wherein deploying the generated set of machine learning models comprises configuring a processor associated with the wireless communications device to perform inferences using models in the generated set of models (Para 106).
Regarding Claim 27, the claim is rejected for the same reasons as claim 1; Di Pietro further teaches an apparatus for wireless communication (Fig. 2, Unit 200), comprising: at least one memory (Fig. 2, Unit 240) comprising computer-executable instructions (Fig. 2, Unit 245); and one or more processors configured to execute the computer-executable instructions and cause the apparatus to perform the method of claim 1 (Para 29, the processor 220 may comprise necessary elements or logic adapted to execute the software programs and manipulate the data structures 245).
Regarding Claim 28, the claim is rejected for the same reasons as claim 1; Di Pietro further teaches a computer readable medium having instructions stored thereon (Fig. 2, Unit 240 having data structures 245).
Allowable Subject Matter
Claims 12-17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: The prior art references fail to teach the limitation of "wherein generating the set of machine learning models comprises: until at least one of the one or more compute resource limits is reached: selecting a model from a global set of models, accumulating compute limit statistics for the selected model and models included in the generated set of models, and upon determining that the accumulated compute limit statistics are below the one or more compute resource limits, adding the selected model to the generated set of models; and after at least one of the one or more compute resource limits is reached, discarding remaining models from the global set of models". These limitations, in combination with the other claim elements, are neither found nor suggested in the prior art as a whole.
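For clarity of record, the quoted limitation recites a greedy, budget-constrained selection loop. The following sketch is purely illustrative; the function and field names (generate_model_set, costs, etc.) and the cost model are assumptions made for exposition and are not drawn from the claims or the cited references:

```python
def generate_model_set(global_models, limits):
    """Greedy sketch of the quoted limitation: add models from a global set
    while the accumulated statistics stay within every compute resource
    limit; once any limit would be exceeded, discard the remaining models."""
    selected = []
    totals = {k: 0 for k in limits}  # accumulated compute-limit statistics
    for model in global_models:
        # Statistics for the candidate together with the already-selected models
        candidate = {k: totals[k] + model["costs"][k] for k in limits}
        if all(candidate[k] <= limits[k] for k in limits):
            selected.append(model)
            totals = candidate
        else:
            break  # a limit is reached; remaining global models are discarded
    return selected
```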
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Bisht et al. Patent No. US 12119981 B2 - Improving software defined networking controller availability using machine learning techniques
Soldati et al. Pub. No. US 20240205101 A1 - INTER-NODE EXCHANGE OF DATA FORMATTING CONFIGURATION
Yang et al. Pub. No. US 20230245000 A1 - MODEL EVALUATION METHOD AND APPARATUS
Yelahanaka Raghuprasad et al. Pub. No. US 20210281491 A1 - COMPRESSED TRANSMISSION OF NETWORK DATA FOR NETWORKING MACHINE LEARNING SYSTEMS
Al-Kabra et al. Pub. No. US 20190199811 A1 - APPLICATION SESSION EVENT INFERENCE FROM NETWORK ACTIVITY
Hegde et al. Pub. No. US 20170339022 A1 - ANOMALY DETECTION AND PREDICTION IN A PACKET BROKER
Iordache Pub. No. US 20160241435 A1 - APPARATUS FOR OPTIMISING A CONFIGURATION OF A COMMUNICATIONS NETWORK DEVICE
WO 2020121084 A1 - SYSTEM AND METHOD FOR IMPROVING MACHINE LEARNING MODEL PERFORMANCE IN A COMMUNICATIONS NETWORK
Running Neural Network Inference on the NIC - 2020
A Joint Learning and Communications Framework for Federated Learning over Wireless Networks - 2020
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NIZAR N SIVJI whose telephone number is (571)270-7462. The examiner can normally be reached Monday-Friday 7-4.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alison Slater can be reached at (571) 270-0375. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
NIZAR N. SIVJI
Primary Examiner
Art Unit 2647
/NIZAR N SIVJI/ Primary Examiner, Art Unit 2647