Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is in response to the application filed 29 January 2026. Claims 2-4 were previously canceled. Claims 1, 15, and 19 are amended. Claims 1 and 5-20 are pending and have been examined.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 29 January 2026 has been entered.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 10 February 2026 is being considered by the examiner.
Response to Arguments
Applicant's arguments, see pages 9-10, filed 22 December 2025, with respect to the rejection of Claims 1 and 5-20 under 35 U.S.C. 112(a) have been fully considered and are persuasive. The rejection of Claims 1 and 5-20 under 35 U.S.C. 112(a) has been withdrawn.
APPLICANT'S ARGUMENT: Applicant argues (page 9, paragraph 6) that "Applicant has amended claims 1, 15, and 19 for better clarity. ... ¶ Support for the amendment can be found in the specification."
EXAMINER'S RESPONSE: Examiner agrees. The rejection of Claims 1 and 5-20 under 35 U.S.C. 112(a) has been withdrawn.
Applicant's arguments, see pages 10-21, filed 22 December 2025, with respect to the rejection of Claims 1 and 5-20 under 35 U.S.C. 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
APPLICANT'S ARGUMENT: Applicant argues (page 14, paragraph 1) that "A prima facie case of obviousness is not established because the cited references used in Examiner-suggested combination do not teach, suggest, or otherwise enable one of ordinary skill in the art as to all the features of claim 1."
EXAMINER'S RESPONSE: Examiner notes that Applicant's argument is moot. Amended Claim 1 is now rejected in view of Kasaragod in view of Arngren.
APPLICANT'S ARGUMENT: Applicant argues (page 15, paragraph 1) that "The rejection is formed using labored concatenations of pieces of Kasaragod's teachings, which in their entirety as disclosed tell a different story than the rejection. In the sections from which the rejection quotes disjointed pieces."
Applicant argues (page 16, paragraph 1) that "Kasaragod's edge device is not sending the data or the request to another edge device but to a provider network, which is structurally and functionally distinct from Kasaragod's edge device."
Applicant argues (page 17, paragraph 1) that "Kasaragod's provider network is doing the computing and sending the result to an edge device. Again, Kasaragod's provider network is structurally and functionally distinct from Kasaragod's edge device."
Applicant argues (page 18, paragraph 2) that "processing at Kasaragod's provider network cannot teach or suggest processing at edge device-2 in the manner of claim 1."
EXAMINER'S RESPONSE: Examiner notes that Applicant's argument is moot. Amended Claim 1 is now rejected in view of Kasaragod in view of Arngren.
Claim Rejections - 35 USC § 112(a)
The rejection of Claims 1 and 5-20 under 35 U.S.C. 112(a) is withdrawn.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 9, 15, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kasaragod, et al. (US 2019/0037040 A1, hereinafter "Kasaragod") in view of Arngren, et al. (US 2019/0370613 A1, hereinafter "Arngren").
Regarding Claim 1, Kasaragod teaches:
a computer-implemented method (Kasaragod, [0026]: "The systems and methods described herein implement techniques for configuring local networks of internet-connectable devices (e.g., IoT devices) to implement data processing models to rapidly generate local results") comprising:
enhancing an accuracy of a conclusion output (Kasaragod, [0026]: "The systems and methods described herein implement techniques for configuring local networks of internet-connectable devices (e.g., IoT devices) to implement data processing models to rapidly generate local results (e.g., local predictions), while also taking advantage of larger, more accurate data processing models running on more powerful devices (e.g., servers of a provider network) that generate more accurate results based on the same data") of a machine learning local artificial intelligence (AI) model (model-1) executing on a first edge computing device (device-1) (Kasaragod, Fig. 13, Edge Device 1300a with Local Model 1304p, and [0139]: "the local network includes edge devices 1300 and tier devices 1302. As shown, a given edge device 1300 or tier device 1302 includes a model 1304. The model 1304 may be operate ... analyzing the data, modifying the data based on the analyzing, and/or generating a result based on the analyzing (e.g., prediction, new data, one or more commands, or some other result," where Kasaragod's local model 1304p corresponds to the instant model-1) in a sensor-based edge network (edge network), the device-1 comprising an on board sensor and configured to receive sensor data from the onboard sensor (Kasaragod, Fig. 13, Data Collector 122a, and [0048]: "The data collector 122 may include any suitable device for collecting and/or generating data and sending the data to the hub device 100. For example, the data collector 122 may be an environmental sensor device that detects one or more environmental conditions (e.g., temperature, humidity, etc.) and generates and/or collects environmental data based on the detected environmental conditions"), the enhancing comprising:
generating in an analytics engine in device-1 a first dataset based on the sensor data (Kasaragod, [0142]: "the edge devices 1300 include data collectors 122. An edge device may collect data 1312 via the data collector. In some embodiments, the tier manager 1306 may receive and/or analyze the data 1312. The tier manager 1306 may determine whether to process the data 1312 using a model 1304 of the edge device or to send the data 1312 to one or more tier devices 1302 of the network 104");
generating in model-1, using the first dataset, a first classification and a first confidence value associated with the first classification (Kasaragod, [0151]: "an edge device or tier device may generate a prediction based on processing of the data 1312 using the model of the edge device or the tier device. The tier manager may then determine whether a confidence level of the prediction is below a threshold confidence level");
performing a classification reinforcement at device-1 (Kasaragod, [0151]: "The tier manager may then determine whether a confidence level of the prediction is below a threshold confidence level. If so, then the tier manager may send the data to a tier device (or another tier device) for processing by a model of the tier device (or other tier device)" and [0152]: "data may be propagated to one or more tier devices until the confidence level for the prediction is not below the threshold level," where Kasaragod's propagation of prediction processing corresponds to the instant reinforcement), by
transmitting from device-1 ... a request (Kasaragod, [0162]: "If the tier manager determines that the confidence level is below the threshold level, then at block 1612, the tier manager sends the data to a tier device," where Kasaragod's sending data reasonably suggests a network request, as in [0154]: "the network interface 206 communicatively couples the edge device 1300 to the local network. Thus, the edge device 1300 may transmit data to and receive data from tier devices via the network interface 206" and [0174]: "In response to determining to process the portions of data 1704 using the higher tier models 1304, the tier managers ... send the results to ... another endpoint (e.g., one or more tier devices 1302 of the next tier)," and where Kasaragod's tier manager of the edge device is structurally and functionally analogous to the result manager of preceding embodiments, as depicted by the 2-tier edge device 400 of Fig. 4 and the n-tier edge/tier device 1300a of Fig. 13) ... to a second edge computing device (device-2) in the edge network (Kasaragod, [0161]: "At block 1606, the model generates a prediction. At block 1504 [sic, read as 1608], the tier manager determines whether a confidence level for the prediction is below a threshold level" and [0162]: "If the tier manager determines that the confidence level is below the threshold level, then at block 1612, the tier manager sends the data to a tier device," where Kasaragod's tier device corresponds to the instant second edge device) ... for a reference confidence value (Kasaragod, [0156]: "the models of tier devices or the provider network may produce more accurate results (e.g., predictions) with higher confidence levels .... Thus, each step up in level may provide a more accurate result with higher confidence levels" and [0159]: "the tier device processes the data using the model of the tier device. ... At block 1516, the tier device sends the result to ... to the ... 
edge device that it received the data from" and [0164]: "this process may continue until a prediction is generated with a confidence level that is not below the threshold level (e.g., a minimum confidence level)," where Kasaragod's generated confidence level corresponds to the instant confidence value, and where Kasaragod's confidence at or above a threshold corresponds to the instant reference confidence, interpreting the instant reference per [0075] of the instant disclosure "to use as a reference point of comparison");
outputting at device-1 based on a reference classification and the reference confidence value generated on device-2 responsive to the request from a second AI model (model-2) executing on device-2 (Kasaragod, Fig. 15, steps Generate a result 1514 and Send the result to an endpoint 1516, and [0159]: "If the tier manager determines to process the data using a model of the tier device, then at block 1512, the tier device processes the data using the model of the tier device. Then the model of the tier device generates a result at block 1514. At block 1516, the tier device sends the result to an endpoint. For example, the tier device may send the result back to the tier device or edge device that it received the data from and/or the tier device may send the result to one or more other endpoints," where Kasaragod's result endpoint reasonably suggests that prediction results are output to an endpoint), an indication of a confidence difference between the first confidence value and the reference confidence value (Kasaragod, [0085]: "At block 524, the data processing service performs, by a data processing model of the data processing service, one or more operations on the data to generate a result. At block 526, the data processing service modifies the result. ... In some embodiments, the data processing service may modify the result based on a difference between a confidence level for the remote result [i.e., result of edge device] and a confidence level for the result" and [0086]: "In embodiments, the result may be modified by determining one or more delta values based at least on differences between the remote result and the result and then modifying the result by replacing the result with the one or more delta values ( e.g., with a gradient or incremental changes)");
generating at device-1 as an output replacement for the first classification dataset, responsive to the confidence difference exceeding a difference threshold value, a replacement dataset corresponding to the reference classification (Kasaragod, [0174]: "the tier managers 1306 of the tier devices ... send the results to the edge devices .... In embodiments, the one or more edge devices 1300 perform an action and/or generates a command in response to receiving the results. Moreover, in some embodiments, the results are consolidated into one result or a smaller number of results," where Kasaragod's consolidated result corresponds to the instant generated replacement); and
using the reference confidence value to reinforce the first confidence value thereby enhancing the accuracy of the conclusion output of model-1 (Kasaragod, [0151]: "The tier manager may then determine whether a confidence level of the prediction is below a threshold confidence level. If so, then the tier manager may send the data to a tier device (or another tier device) for processing by a model of the tier device (or other tier device)" and [0152]: "data may be propagated to one or more tier devices until the confidence level for the prediction is not below the threshold level," where Kasaragod's propagation of prediction processing corresponds to the instant reinforcement).
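For illustration only, and forming no part of the claim construction or the record, the limitation-by-limitation mapping above may be summarized by the following sketch of the claimed reinforcement flow; all names, classifications, and confidence values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Result:
    classification: str
    confidence: float

def classify_on_device1(dataset):
    # Hypothetical local model (model-1) executing on device-1.
    return Result("cat", 0.62)

def request_reference_from_device2(dataset):
    # Hypothetical request to device-2; model-2 returns a reference
    # classification and a reference confidence value.
    return Result("dog", 0.95)

def reinforce(dataset, difference_threshold=0.2):
    # Generate the first classification and first confidence value.
    first = classify_on_device1(dataset)
    # Transmit a request to device-2 for the reference values.
    reference = request_reference_from_device2(dataset)
    # Indication of the confidence difference between the two values.
    confidence_difference = reference.confidence - first.confidence
    if confidence_difference > difference_threshold:
        # Replacement dataset corresponding to the reference classification.
        return Result(reference.classification, reference.confidence)
    return first
```

Under these hypothetical values the confidence difference (0.33) exceeds the threshold (0.2), so the reference classification replaces the first classification.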
Kasaragod teaches enhancing accuracy of an AI model executing on a first edge computing device in a sensor-based edge network by performing classification reinforcement based on a reference classification and the reference confidence value generated by a second AI model executing on a second device.
Kasaragod does not explicitly teach transmitting from device-1 to a second edge computing device (device-2) in the edge network a request for a reference confidence value and a reference classification and the reference confidence value generated on device-2 ... and using a second sensor data captured on device-2 using a second onboard sensor in device-2.
However, Arngren teaches:
transmitting from device-1 to a second edge computing device (device-2) in the edge network a request for a reference confidence value (Arngren, Fig. 3, depicting Forwarding device 330 conditionally requesting a classification from Classifying device 340, and [0058]: "The forwarding communications device 330 ... is a communications device which ... attempts to classify the instance using its local first ML model, and, in response to an unsuccessful classification, transmits a classification request message ... to one or more other communications devices," where Arngren's unsuccessful classification is based on confidence, as in [0057]: "an unsuccessful classification (the calculated confidence level is below the threshold confidence level)") ...
a reference classification and the reference confidence value generated on device-2 (Arngren, [0059]: "The classifying communications device 340 ... is a communications device which ... attempts to classify the instance using its local first ML model, and, in response to a successful classification ... transmits a classification success message comprising the classification" and [0094]: "Classification success message 396 is similar to classification success message 375, and comprises the successful classification of image 110 by the local first ML model of classifying device 340, and may optionally comprise the calculated confidence level") ... and using a second sensor data captured on device-2 using a second onboard sensor in device-2 (Arngren, Fig. 6, step 601A Capture image or record audio, depicting the step of receiving or acquiring locally the item to be classified, and [0128]: "Method 600 comprises acquiring a feature vector representing the instance, classifying 603 the instance by applying the feature vector to a local first ML model of the communications device, and calculating 603 a confidence level for the classification of the instance. If the instance is an object captured by an image or a video frame, acquiring a feature vector representing the instance may comprise ... capturing 601A the image or the video frame using a camera operatively connected to the communications device").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kasaragod regarding enhancing accuracy of an AI model executing on a first edge computing device in a sensor-based edge network by performing classification reinforcement based on a reference classification and the reference confidence value generated by a second AI model executing on a second device with those of Arngren regarding transmitting from device-1 to a second edge computing device (device-2) in the edge network a request for a reference confidence value and a reference classification and the reference confidence value generated on device-2 and using a second sensor data captured on device-2 using a second onboard sensor in device-2.
The motivation to do so would be to facilitate improved classification by a collection of devices by taking advantage of the associated models having been updated based on their device location (Arngren, [0104]: "Selecting one or more other communications devices which are in the same location as, or in proximity of, the selecting device is advantageous since it is likely that the other communications devices have encountered, and successfully classified, a similar instance as the selecting device. ... If image 110 has been classified by classifying device 340 as a certain individual, other communications devices which are, or have been, nearby classifying device 340 may use the successful classification for updating their local first ML models, thereby improving their capability to classify an image of the same face which they subsequently may capture and attempt to classify").
Regarding Claim 15, Kasaragod teaches:
a computer program product comprising one or more computer readable storage media and program instructions collectively stored on the one or more computer readable storage media, the program instructions executable by a processor to cause the processor to perform operations (Kasaragod, [0191]: "In the illustrated embodiment, program instructions and data implementing desired functions, such as those methods and techniques described above for the file gateway, object storage system, client devices, or service provider are shown stored within system memory 1920 as program instructions 1925" and [0194]: "system memory 1920 may be one embodiment of a computer-accessible medium configured to store program instructions and data") comprising: precisely those steps recited by the method of Claim 1. Claim 15 is rejected under the same rationale as Claim 1.
Regarding Claim 19, Kasaragod teaches:
a computer system comprising: a first edge computing device (device-1) hosting an analytics engine, the first edge computing device comprising an onboard sensor (Kasaragod, [0153]: "the edge device 1300 includes processor 200, a memory 202, a battery 204, a network interface 206, and one or more data collectors 122. The memory 202 includes a tier manager 1306 and a model 1304"), a processor and one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media (Kasaragod, [0197]: "the methods may be implemented by a computer system that includes a processor executing program instructions stored on a computer-readable storage medium coupled to the processor"); and
a second edge computing device in communication with the first edge computing device over a sensor-based edge network (edge network) (Kasaragod, Fig. 13, where Edge Device 1300a and Tier Device 1302B correspond to the instant first and second devices);
wherein the program instructions are the program instructions executable by the processor to cause the processor to perform operations (Kasaragod, [0197]: "The program instructions may be configured to implement the functionality described herein (e.g., the functionality of the data transfer tool, various services, databases, devices and/or other communication devices, etc.)") comprising: precisely those steps recited by the method of Claim 1. Claim 19 is rejected under the same rationale as Claim 1.
Regarding Claim 9, the rejection of Claim 1 is incorporated.
Arngren further teaches:
receiving a request for the first classification and the first confidence value from a third edge computing device (Arngren, Fig. 3, request 355D and 356 to Forwarding device 330 (first device) from Selection server 210 (third device)); and
transmitting, responsive to the request, the first classification and the first confidence value to the third edge computing device if the confidence difference is greater than the difference threshold value (Arngren, Fig. 3, response 365A and 365B from Forwarding device 330 (first device) to Selection server 210 (third device), and [0087]: "if the calculated confidence level is less than a threshold confidence level, forwarding device 330 ... may be operative to acquire the information identifying one or more other communications devices by transmitting a selection request message 365B for selecting the one or more other communications devices to selection server 310. Selection request message 365B comprises information pertaining to at least one of: ... the result of classifying 363 image 110," where Arngren's classification result reasonably suggests both a classification and a confidence level).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the Kasaragod/Arngren combination regarding generating by the first model a first classification and a first confidence value using the first dataset with the further teachings of Arngren regarding receiving a request for the first classification and the first confidence value from a third edge computing device and transmitting, responsive to the request, the first classification and the first confidence value to the third edge computing device if the confidence difference is greater than the difference threshold value.
The motivation to do so would be to facilitate improved selection of an alternative model when the first model provides an insufficiently confident classification (Arngren, [0024]: "Even if the local first ML model has not been able to successfully classify the instance, i.e., the calculated confidence level is below the threshold confidence level, the classification obtained from the local first ML model may be used in selecting the one or more other communications devices. This is the case since the selection may be performed on a subset of all available communications devices which have local first ML models which are likely to be better suited to successfully classify the instance than the communications device from which the request for classifying the instance was received").
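For illustration only, the conditional transmission recited in Claim 9 may be sketched as follows; the function and its arguments are hypothetical and form no part of the record.

```python
def respond_to_third_device(first_classification, first_confidence,
                            confidence_difference, difference_threshold):
    # Transmit the first classification and the first confidence value
    # to the requesting third edge computing device only when the
    # confidence difference is greater than the difference threshold.
    if confidence_difference > difference_threshold:
        return (first_classification, first_confidence)
    return None
```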
Regarding Claim 16, the rejection of Claim 15 is incorporated. The Kasaragod/Arngren combination teaches:
wherein the stored program instructions are stored in a computer readable storage device in a data processing system (Kasaragod, [0191]: "In the illustrated embodiment, program instructions and data implementing desired functions, such as those methods and techniques described above for the file gateway, object storage system, client devices, or service provider are shown stored within system memory 1920 as program instructions 1925" and [0194]: "system memory 1920 may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above. However, in other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media. Generally speaking, a computer-accessible medium may include computer-readable storage media or memory media"), and wherein the stored program instructions are transferred over a network from a remote data processing system (Kasaragod, [0194]: "a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 1940").
Claims 5-8 are rejected under 35 U.S.C. 103 as being unpatentable over Kasaragod, et al. (US 2019/0037040 A1, hereinafter "Kasaragod") in view of Pezzillo, et al. (US 2019/0370687 A1, hereinafter "Pezzillo").
Regarding Claim 5, the rejection of Claim 1 is incorporated.
Kasaragod does not explicitly teach determining whether to use the reference confidence value based on metadata received with the reference confidence value.
However, Pezzillo teaches:
determining whether to use the reference confidence value based on metadata received with the reference confidence value (Pezzillo, [0064]: "An ML model picker 410 extracts one or more compiled ML models from the ML model repository 408 and tests each model to confirm that it is both operational and producing prediction results.... If the associated performance metric (e.g., accuracy, click-through, F1 score, precision) demonstrated by the test satisfies a model acceptance condition (e.g., demonstrating an accuracy that exceeds a predetermined threshold or an accuracy that is more accurate than other ML models-'winner selection'), then the ML model picker 410 can replace the previous instance of the ML model with the newly compiled and tested instance of the re-trained ML model," where Pezzillo's performance metric involves confidence, as in [0015]: "In some implementations, such devices use sensors and other interfaces to capture input data (unlabeled observations) that can be input to an ML model for generation of a corresponding label and other associated data (e.g., confidence scores, performance scores)" and where Pezzillo's compiled ML model is delivered as a package that includes model metadata, as in [0027]: "Each instance of the application 105 can request an ML model package containing the ML model(s) and associated metadata appropriate for the application 105, the device, and/or the user").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kasaragod regarding calculating a confidence difference between a first confidence value and a reference confidence value with those of Pezzillo regarding determining whether to use a reference confidence value based on acceptance criteria using model metadata.
The motivation to do so would be to support multi-party, automated model deployment scenarios (Pezzillo, [0023]: "In yet another implementation, the application/ service vendor 112 may not be required to configure any aspects of ML models controlled by the ML model manager 108. In such cases, the ML model manager 108 can automatically deploy ML models to the edge computing devices 102, 104, and 106, and those ML model deployments may be ... determined dynamically by the ML model manager 108").
Regarding Claim 6, the rejection of Claim 5 is incorporated. Pezzillo further teaches:
wherein the determining of whether to use the reference confidence value comprises: parsing the metadata received with the reference confidence value (Pezzillo, [0059]: "In one implementation, ML models are encapsulated in a '.h5' file according to the Hierarchical Data Format (HDF) that supports n-dimensional data sets of complex objects, although other formats may be employed," where parsing is inherent in reading data stored in a structured data format);
comparing a metadata value from the metadata to a stored acceptance value (Pezzillo, [0064]: "An ML model picker 410 extracts one or more compiled ML models from the ML model repository 408 and tests each model to confirm that it is both operational and producing prediction results.... The ML model picker 410 compares the results of the tested model prediction to the known labels. If the associated performance metric ... demonstrated by the test satisfies a model acceptance condition (e.g., demonstrating an accuracy that exceeds a predetermined threshold or an accuracy that is more accurate than other ML models-'winner selection')", where storing of acceptance values is inherent in comparison of multiple ML models); and
accepting, responsive to determining that the metadata matches the stored acceptance value, the reference confidence value (Pezzillo, [0064]: "If the associated performance metric ... demonstrated by the test satisfies a model acceptance condition ... then the ML model picker 410 can replace the previous instance of the ML model with the newly compiled and tested instance of the re-trained ML model").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the Kasaragod/Pezzillo combination regarding calculating a confidence difference between a first confidence value and a reference confidence value with the further teachings of Pezzillo regarding parsing the metadata received with the reference confidence value, comparing a metadata value from the metadata to a stored acceptance value, and accepting, responsive to determining that the metadata matches the stored acceptance value, the reference confidence value.
The motivation to do so would be to facilitate supporting the requirements of multiple customer segments by way of varying delivery modes and acceptance criteria (Pezzillo, [0058]: "A feedback collector 312 of the ML model manager 300 receives ML model feedback data ( e.g., in an ML model feedback package) from an edge computing device 302 via the communications interface .... Example processing may include without limitation allocating the ML model feedback data according to designated audience segments and making the ML model feedback data available to the vendors, if this option is allowed by policy. It should be understood that receipt of ML model feedback data may be continuous, periodic, ad hoc, on-demand, triggered by events at or requests from the edge computing devices, etc.").
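For illustration only, the parse/compare/accept sequence of Claim 6 as mapped above may be sketched as follows; the metadata key and acceptance value are hypothetical and form no part of the record.

```python
def accept_reference(reference_confidence, metadata, stored_acceptance_value):
    # Parse the metadata received with the reference confidence value
    # (here, a simple dictionary lookup stands in for parsing a
    # structured format such as HDF).
    metadata_value = metadata.get("model_version")  # hypothetical key
    # Compare the metadata value to the stored acceptance value and,
    # on a match, accept the reference confidence value.
    if metadata_value == stored_acceptance_value:
        return reference_confidence
    return None  # reference confidence value not accepted
```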
Regarding Claim 7, the rejection of Claim 6 is incorporated. Pezzillo further teaches:
wherein the parsing of the metadata comprises extracting a software version from the metadata (Pezzillo, [0027]: "Each instance of the application 105 can request an ML model package containing the ML model(s) and associated metadata appropriate for the application 105, the device, and/or the user" where appropriate for the device is determined as in [0024]: "the ML model manager 108 can register a device profile for the edge computing device, which can be used for ... distribution of ML models for the registered edge computing device" and [0025]: "Example device metadata stored in a device profile for the ML model manager 108 may include without limitation" the entries of the table under the "Device Metadata Item" column, including the software entries: "Device OS Version, Device App/SDK Version"), and
wherein the comparing of the metadata value to the stored acceptance value comprises comparing the software version to the stored acceptance value (Pezzillo, [0027]: "Audience segmentation may be ... automatically defined by the ML model manager 108 based on edge computing device criteria"), wherein the stored acceptance value comprises data indicative of a compatible software version (Pezzillo, [0025]: "Example device metadata stored in a device profile for the ML model manager 108 may include without limitation" the entries of the table under the "Device Metadata Item" column, including the software entries: "Device OS Version, Device App/SDK Version").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the Kasaragod/Pezzillo combination regarding calculating a confidence difference between a first confidence value and a reference confidence value with the further teachings of Pezzillo regarding extracting a software version from the metadata and comparing the software version to the stored acceptance value, wherein the stored acceptance value comprises data indicative of a compatible software version.
The motivation to do so would be to facilitate supporting applications and devices with varying requirements and capabilities (Pezzillo, [0027]: "Each instance of the application 105 can request an ML model package containing the ML model(s) and associated metadata appropriate for the application 105, the device, and/or the user").
Regarding Claim 8, the rejection of Claim 6 is incorporated. Pezzillo further teaches:
wherein the parsing of the metadata comprises extracting a node identifier from the metadata (Pezzillo, [0031]: "Different audience segments can be identified by a unique segment identifier, such as a GUID or other unique or multicast transmission scheme" where [0054]: "each ML model is associated with one or more audience segments, so the ML model manager 208 identifies the ML model(s) associated with each audience segment and allocates those ML model(s) to the corresponding edge computing devices in those audience segments... some audience segments are narrowed to a single user or a single edge computing device, in which case the ML model manager 208 identifies the ML model(s) associated with each user or edge computing device and deploys those ML model(s) to the corresponding edge computing device"), and
wherein the comparing of the metadata value to the stored acceptance value comprises comparing the node identifier to the stored acceptance value (Pezzillo, [0027]: "Audience segmentation may be ... automatically defined by the ML model manager 108 based on edge computing device criteria"), wherein the stored acceptance value comprises data indicative of a reliable node identifier (Pezzillo, [0031]: "Different audience segments can be identified by a unique segment identifier, such as a GUID").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the Kasaragod/Pezzillo combination regarding calculating a confidence difference between a first confidence value and a reference confidence value with the further teachings of Pezzillo regarding extracting a node identifier from the metadata and comparing the node identifier to the stored acceptance value, wherein the stored acceptance value comprises data indicative of a reliable node identifier.
The motivation to do so would be to facilitate a model delivery mechanism wherein customer segments take advantage of model training targeting custom applications and devices (Pezzillo, [0031]: "the re-training, redeployment, and feedback can be segmented across the edge computing devices that execute a corresponding application, such that different audience segments of users and/or edge computing devices are executing differently re-trained ML models. In this manner, the users and/or edge computing devices can execute ML models for which the re-training has been targeted for a particular audience of edge computing devices and/or users. Different audience segments can be identified by a unique segment identifier, such as a GUID or other unique or multicast transmission scheme").
Claims 10-14, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kasaragod, et al. (US 2019/0037040 A1, hereinafter "Kasaragod") in view of Arngren, et al. (US 2019/0370613 A1, hereinafter "Arngren") and further in view of Xuan, et al., "Automated Identification of Security Issues" (US 2020/0389496 A1, hereinafter "Xuan").
Regarding Claim 10, the rejection of Claim 1 is incorporated. The Kasaragod/Arngren combination teaches:
generating, by the first edge computing device, a second dataset based on sensor data from the sensor; determining, by the analytics engine, a second classification for the second dataset and a second confidence value associated with the second classification; comparing the second confidence value to a confidence ... threshold value; and ... determining that the second confidence value is greater than the ... threshold value (Kasaragod, [0094]: "one or more of the edge devices 600 may include a model trainer 612 that generates a local model update 614 and applies the local model update 614 to update the local model 602. For example, an edge device 600 may receive data from the data collector 122 and analyze the data. The edge device 600 may then generate an update 614 to the local model 602 based on the analysis of the data", where model updates are understood to improve confidence, as in [0119]: "the model trainer and/or model training service may be larger, may access a larger training data set, and may generate local model updates and/or local models configured to provide a higher level of accuracy and confidence for results").
The Kasaragod/Arngren combination does not explicitly teach comparing the confidence value to a confidence interval (CI) threshold value and broadcasting responsive to determining that the second confidence value is greater than the CI threshold value, classification data associated with the second classification.
However, Xuan teaches:
comparing the ... confidence value to a confidence interval (CI) threshold value (Xuan, [0046]: "the management service 116 can determine whether a remedial action 136 specified in a compliance policy 129 should be performed. For example, the management service 116 can compare the confidence score calculated previously at step 206 with a confidence score threshold 133 in an applicable compliance policy 129" where [0044]: "the management service 116 can calculate a confidence score representing the certainty or likelihood that the potential security issue is an actual security issue .... the confidence score could be calculated using statistical approaches for calculating confidence intervals"); and
broadcasting (Xuan, [0011]: "FIG. 1 depicts a network environment 100 according to various implementations. The network environment 100 includes a computing environment 103, one or more security devices 106, and one or more client devices 109, which are in data communication with each other via a network 113. ... These networks can include wired or wireless components ... Wireless networks can include ... 802.11 wireless networks ... as well as other networks relying on radio broadcasts"), responsive to determining that the second confidence value is greater than the CI threshold value, classification data associated with the second classification (Xuan, [0046]: "If the confidence score meets or exceeds the confidence score threshold 133, then the management service 116 can determine that a respective remedial action should be performed" and [0048]: "For example, if the remedial action 136 specified blocking a client device 109 from accessing the network 113, then the management service 116 might send a message to the monitoring application 139 of a security device 106").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the Kasaragod/Arngren combination regarding unsupervised training on edge devices using an acceptability threshold with those of Xuan regarding using a confidence interval rather than a confidence rating.
The motivation to do so would be to accommodate taking action dynamically based on varying signals and policies (Xuan, [0020]: "The confidence score threshold 133 can represent a confidence rating, interval or similar measure of certainty in a prediction generated by the management service 116 that a predicted security incident is actually occurring based on an analysis of one or more security signals 126").
Regarding Claim 11, the rejection of Claim 10 is incorporated. The Kasaragod/Arngren/Xuan combination teaches:
wherein the broadcasting comprises broadcasting the classification data to an edge server on an edge network with the first edge computing device (Kasaragod, Fig. 1, where Kasaragod's Hub Device 100 corresponds to the instant edge server).
Regarding Claim 12, the rejection of Claim 10 is incorporated. The Kasaragod/Arngren/Xuan combination teaches:
wherein the broadcasting comprises broadcasting the classification data to the second edge computing device (Kasaragod, Fig. 4, where Kasaragod's Provider Network 102 corresponds to the instant second edge computing device).
Regarding Claim 13, the rejection of Claim 10 is incorporated. The Kasaragod/Arngren/Xuan combination teaches:
wherein the classification data comprises the second classification and the second confidence value (Kasaragod, [0119]: "the model trainer and/or model training service may be larger, may access a larger training data set, and may generate local model updates and/or local models configured to provide a higher level of accuracy and confidence for results").
Regarding Claim 14, the rejection of Claim 1 is incorporated. The Kasaragod/Arngren combination teaches:
generating, by the first edge computing device, a second dataset based on sensor data from the sensor; determining, by the analytics engine, a second classification dataset comprising a second classification for the second dataset and a second confidence value associated with the second classification; comparing the second confidence value to a confidence ... threshold value; and ... determining that the second confidence value is greater than the ... threshold value (Kasaragod, [0094]: "one or more of the edge devices 600 may include a model trainer 612 that generates a local model update 614 and applies the local model update 614 to update the local model 602. For example, an edge device 600 may receive data from the data collector 122 and analyze the data. The edge device 600 may then generate an update 614 to the local model 602 based on the analysis of the data", where model updates are understood to improve confidence, as in [0119]: "the model trainer and/or model training service may be larger, may access a larger training data set, and may generate local model updates and/or local models configured to provide a higher level of accuracy and confidence for results").
The Kasaragod/Arngren combination does not explicitly teach comparing the confidence value to a confidence interval (CI) threshold value and broadcasting responsive to determining that the second confidence value is greater than the CI threshold value, classification data associated with the second classification.
However, Xuan teaches:
comparing the ... confidence value to a confidence interval (CI) threshold value (Xuan, [0046]: "the management service 116 can determine whether a remedial action 136 specified in a compliance policy 129 should be performed. For example, the management service 116 can compare the confidence score calculated previously at step 206 with a confidence score threshold 133 in an applicable compliance policy 129" where [0044]: "the management service 116 can calculate a confidence score representing the certainty or likelihood that the potential security issue is an actual security issue .... the confidence score could be calculated using statistical approaches for calculating confidence intervals"); and
broadcasting (Xuan, [0011]: "FIG. 1 depicts a network environment 100 according to various implementations. The network environment 100 includes a computing environment 103, one or more security devices 106, and one or more client devices 109, which are in data communication with each other via a network 113. ... These networks can include wired or wireless components ... Wireless networks can include ... 802.11 wireless networks ... as well as other networks relying on radio broadcasts"), responsive to determining that the second confidence value is greater than the CI threshold value, classification data associated with the second classification (Xuan, [0046]: "If the confidence score meets or exceeds the confidence score threshold 133, then the management service 116 can determine that a respective remedial action should be performed" and [0048]: "For example, if the remedial action 136 specified blocking a client device 109 from accessing the network 113, then the management service 116 might send a message to the monitoring application 139 of a security device 106").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the Kasaragod/Arngren combination regarding unsupervised training on edge devices using an acceptability threshold with those of Xuan regarding using a confidence interval rather than a confidence rating.
The motivation to do so would be to accommodate taking action dynamically based on varying signals and policies (Xuan, [0020]: "The confidence score threshold 133 can represent a confidence rating, interval or similar measure of certainty in a prediction generated by the management service 116 that a predicted security incident is actually occurring based on an analysis of one or more security signals 126").
Claims 18 and 20 incorporate substantively the limitations of Claim 14 in computer program product and computer system forms, respectively, and are rejected under the same rationale.
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Kasaragod, et al. (US 2019/0037040 A1, hereinafter "Kasaragod") in view of Arngren, et al. (US 2019/0370613 A1, hereinafter "Arngren") and further in view of Ramachandran, et al. (US 2003/0084145 A1, hereinafter "Ramachandran").
Regarding Claim 17, the rejection of Claim 15 is incorporated. The Kasaragod/Arngren combination teaches:
wherein the stored program instructions are stored in a computer readable storage device in a server data processing system, and wherein the stored program instructions are downloaded in response to a request over a network (Kasaragod, [0189]: "In the illustrated embodiment, computer system 1900 includes one or more processors 1910 coupled to a system memory 1920 via an input/output (I/O) interface 1930. Computer system 1900 further includes a network interface 1940 coupled to I/O interface 1930. In some embodiments, computer system 1900 may be illustrative of servers implementing enterprise logic or downloadable application, while in other embodiments servers may include more, fewer, or different elements than computer system 1900") to a remote data processing system for use in a computer readable storage device associated with the remote data processing system (Kasaragod, [0056]: "the hub device and one or more of its components (e.g., processor and memory) may be relatively lightweight and smaller compared to components (e.g., processor and memory) used by the provider network 102 to implement the data processing service 116 and/or the model 118").
The Kasaragod/Arngren combination does not explicitly teach program instructions to meter use of the program instructions associated with the request and program instructions to generate an invoice based on the metered use.
However, Ramachandran teaches:
program instructions to meter use of the program instructions associated with the request (Ramachandran, Fig. 6B, 231; [0019]: "FIG. 6B is a flowchart of the process to programmably map usage data to metric data"); and program instructions to generate an invoice based on the metered use (Ramachandran, [0142]: "Step 238 also represents the process of using the server and metric data to prepare automated reports and/or invoices that represent the amount of usage of the licensed resources by licensees").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the Kasaragod/Arngren combination regarding stored program instructions for use by a metered remote data processing system with those of Ramachandran regarding the use of metric data for automated invoicing. The motivation to do so would be to provide a licensing model where clients are billed accurately on a resource-usage basis (Ramachandran, [0001]: "there is a need for an efficient way for vendors to license their programs or other resources to clients on a usage basis").
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Zhang, et al., "Intrusion Detection Techniques for Mobile Wireless Networks," teach a method of cooperative intrusion detection by nodes in a mobile ad-hoc network using a local confidence threshold to signal the strength of evidence, in which neighboring nodes are asked to confirm a detection when the local evidence is weaker.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT N DAY whose telephone number is (703) 756-1519. The examiner can normally be reached M-F 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/R.N.D./Examiner, Art Unit 2122
/KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122