DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 08/11/2023 and 11/04/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Objections
Claim 10 is objected to because of the following informalities: Line 3 recites “the first output that is produced by the model” which Examiner suggests amending to “the first output that is produced by the cloud-based model”. Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 7 and 8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chen et al. (CN 111625361, see translated version).
With regards to claim 7, Chen et al. discloses a method comprising:
obtaining, by an edge device, a model from a server system that has been trained to identify events of interest when applied to data that is generated by the edge device (Para. 0049 lines 6-7, 0060 line 1, 0077 lines 1-3, 0083 lines 1-3, "deployed to IoT devices" "prediction results"); and
tuning, by the edge device, parameters of the model so as to create a local version of the model that is adapted for an environment in which the edge device is deployed (Para. 0062 lines 6-8, 0078 lines 1-2, "iteratively train" "personalized neural network model for IoT devices").
With regards to claim 8, Chen et al. discloses the method of claim 7, further comprising: monitoring, by the edge device, the data that is generated over time so as to discover a shift in content that is not temporary in nature (Para. 0081 lines 1-5 and 10-16, "changes are reflected" "adapt" "better performance").
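For illustration only, the monitoring limitation of claim 8 (discovering a shift in content that is "not temporary in nature") can be sketched as a windowed statistic compared to a training-time baseline, flagged only after the deviation persists. The class name, statistic, and thresholds below are hypothetical and are not drawn from Chen et al.:

```python
from collections import deque

class DriftMonitor:
    """Flags a distribution shift only after it persists, so a transient
    fluctuation (e.g., a passing shadow) is not treated as drift.
    Illustrative sketch of the claim-8 concept; names are hypothetical."""

    def __init__(self, baseline_mean, threshold=0.2, persistence=5, window=100):
        self.baseline = baseline_mean       # statistic of the training data
        self.threshold = threshold          # allowed deviation from baseline
        self.persistence = persistence      # consecutive windows before flagging
        self.window = deque(maxlen=window)  # recent per-sample statistics
        self.consecutive = 0

    def observe(self, sample_stat):
        """Feed one per-sample statistic; return True once the shift has
        been observed in `persistence` consecutive full windows."""
        self.window.append(sample_stat)
        if len(self.window) < self.window.maxlen:
            return False  # not enough data yet
        current = sum(self.window) / len(self.window)
        if abs(current - self.baseline) > self.threshold:
            self.consecutive += 1
        else:
            self.consecutive = 0  # shift was temporary; reset
        return self.consecutive >= self.persistence
```

The persistence counter is what distinguishes a lasting shift from a temporary one: a brief excursion resets the counter before the flag is raised.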
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 4-6, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (CN 111625361, see translated version) in view of Shinohara et al. (US 2018/0349709).
With regards to claim 1, Chen et al. discloses a surveillance system comprising:
a server system (Para. 0046 lines 1-2, 0048 lines 1-2, "cloud server") that is configured to –
obtain data that is labelled (Para. 0048 lines 1-2, 0051 lines 1-4, "publicly available dataset" "training label"),
train a model by providing the data to the model as training data (Para. 0048 lines 1-2, 0049 line 6, "trained" "BranchyNet model"), and
cause transmission of the trained model to an edge device to be deployed in an environment to be surveilled (Para. 0049 lines 6-7, 0060 line 1, "deployed to IoT devices"); and
the edge device (Para. 0062 lines 1-2, "IoT device") that is configured to –
generate a first data of the environment (Para. 0062 lines 1-2, 0078 lines 1-2, "locally generated private data"), and
tune parameters of the trained model based on an analysis of the first data so as to create a local version of the trained model that is adapted for the environment (Para. 0062 lines 6-8, 0078 lines 1-2, "iteratively train").
Chen et al. does not explicitly teach a camera as the edge device and does not explicitly teach the data to be images that are labelled to indicate that a given object is contained, the trained model to detect instances of the given object, and the first data to be a first series of images.
However, Shinohara et al. discloses a similar concept of a system comprising edge devices and a server, where the edge devices are cameras that tune parameters of a model received from the server, the model detects instances of a given object based on an analysis of image data of the environment generated by a respective camera so as to create a local version of the model, and the teacher data used in training are labelled to indicate that a given object is contained (Para. 0034 lines 1-6, 0056 lines 1-5, 0057 lines 3-11, 0066 lines 15-23, 0096 lines 1-6, 0119 lines 1-8, 0120 lines 1-4, "plurality of cameras" "learning" "teacher data" "including an object"). While Chen et al. discloses the edge device as an IoT (Internet of Things) device and generally discloses training a model using labelled data and obtaining data by the IoT device to use in tuning parameters of the trained model, Shinohara et al. teaches the same concept more specifically in the field of object detection: the edge device is a camera, the model is trained using labelled image data indicating that a given object is contained therein, the model detects instances of the given object, and the camera obtains images to use in tuning parameters of the trained model. In both cases, data is captured by an edge device and used to tune parameters of a model to create a local version of the model on the edge device. Thus, the system of Chen et al. would be modified to be applied in the field of object detection, where the edge device is a camera, the labelled training data are images labelled to indicate that a given object is contained therein, the trained model detects instances of the given object, and the data generated by the edge device are images.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chen et al. to replace the general system comprising the server system and the edge device with the more specific object detection surveillance system comprising a server system and a camera as taught by Shinohara et al., since one of ordinary skill in the art would have been able to carry out such a substitution and the results of the substitution would have been predictable: the data captured by the edge device would be used to tune parameters of the trained model received from the server system to create a local version of the trained model.
With regards to claim 4, the combination of Chen et al. and Shinohara et al. discloses the surveillance system of claim 1, wherein the camera is further configured to - transmit information regarding the tuned parameters to the server system (Chen et al.: Para. 0079 lines 1-2, 0080 lines 1-3, "sent to the cloud server", see also claim 1 rejection above on the combination to disclose the camera).
With regards to claim 5, the combination of Chen et al. and Shinohara et al. discloses the surveillance system of claim 1, wherein the camera is one of multiple cameras to which the server system causes transmission of the trained model, and wherein each camera independently creates a different local version of the trained model (Chen et al.: Para. 0062 lines 6-8, 0077 lines 1-3, 0078 lines 1-2, "personalized neural network model for IoT devices", see also claim 1 rejection above on the combination to disclose the camera).
With regards to claim 6, the combination of Chen et al. and Shinohara et al. discloses the surveillance system of claim 5, wherein the multiple cameras are deployed in the environment to be surveilled (Chen et al.: Para. 0062 lines 6-8, 0077 lines 1-3, 0078 lines 1-2, 0089 lines 18-21, "environments", Shinohara et al.: Para. 0034 lines 2-6, "surveillance area", see also claim 1 rejection above on the combination to disclose the camera).
With regards to claim 11, Chen et al. discloses the method of claim 7, wherein the edge device is an IoT device (Para. 0062 lines 1-2, "IoT device").
Chen et al. does not explicitly teach wherein the edge device is a camera and wherein the data includes images of the environment.
However, Shinohara et al. discloses a similar concept of a system comprising edge devices and a server system, where the edge devices are cameras that generate images of an environment and tune parameters of a model received from a server based on an analysis of the images of the environment so as to create a local version of the model (Para. 0119 lines 1-8, 0120 lines 1-4, "plurality of cameras" "learning"). While Chen et al. discloses the edge device as an IoT device which captures data used to tune parameters of a received model to create a local version of the model, Shinohara et al. teaches the edge device as a camera which captures images used to tune parameters of a received model to create a local version of the model. In both cases, data is captured by an edge device and used to tune parameters of a model to create a local version of the model.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chen et al. to replace the IoT device as the edge device with a camera as the edge device, as taught by Shinohara et al., since one of ordinary skill in the art would have been able to carry out such a substitution and the results of the substitution would have been predictable: the data captured by the edge device would be used to tune parameters of the trained model to create a local version of the trained model.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (CN 111625361, see translated version) in view of Shinohara et al. (US 2018/0349709) and further in view of Brower (US 2021/0027103).
With regards to claim 2, the combination of Chen et al. and Shinohara et al. discloses the surveillance system of claim 1.
The combination of Chen et al. and Shinohara et al. does not explicitly teach wherein the camera is further configured to - generate a second series of images of the environment, apply the local version of the trained model to each image included in the second series of images to produce a series of outputs, compute a metric that is indicative of confidence in the series of outputs produced by the local version of the trained model, compare the metric to a threshold, and retune the parameters responsive to a determination that the metric falls beneath the threshold.
However, Brower discloses the concept of tuning parameters of a trained model based on an analysis of a first series of images, generating a second series of images of the environment, applying the local version of the trained model to each image in the second series of images, computing a metric that is indicative of confidence in the output, comparing the metric to a threshold, and retuning the parameters responsive to a determination that the metric falls beneath the threshold, in order to more effectively train the model (Para. 0045 lines 1-22, 0046 lines 1-6 and 12-22, 0047 lines 3-19, "accuracy" "falls below a threshold" "retrained" "new ground truth data" "seamless").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the concept of tuning parameters of a trained model based on an analysis of a first series of images, generating a second series of images of the environment, applying the local version of the trained model to each image in the second series of images, computing a metric that is indicative of confidence in the output, comparing the metric to a threshold, and retuning the parameters responsive to a determination that the metric falls beneath the threshold, as taught by Brower, into the surveillance system of the combination of Chen et al. and Shinohara et al. The motivation would have been to more effectively train the model.
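For illustration only, the claim-2 loop the combination is relied on to teach (apply the local model to a second series of images, aggregate a confidence score, compare to a threshold, and retune when the metric falls beneath it) can be sketched as follows. The function names and the averaging choice are hypothetical, not drawn from Brower:

```python
def monitor_and_retune(confidence_of, second_series, threshold=0.8):
    """Sketch of the claim-2 cycle: score each image in the second series
    with the local model's per-image confidence, average the scores into a
    single metric, and signal a retune when the metric falls beneath the
    threshold. Illustrative only; `confidence_of` stands in for the local
    model's confidence output."""
    outputs = [confidence_of(img) for img in second_series]  # one score per image
    metric = sum(outputs) / len(outputs)                     # aggregate confidence
    needs_retune = metric < threshold                        # compare to threshold
    return metric, needs_retune
```

When `needs_retune` is true, the camera would repeat the tuning step on newly captured images; when false, the current local version continues to be applied.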
Allowable Subject Matter
Claims 3, 9, and 10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
With regards to claim 3, Chen et al. (CN 111625361) discloses where the edge device tunes the trained model, however, there is no mention of said tuning being responsive to a determination that a predetermined number of images of the environment have been captured since the local version of the trained model was last tuned. Brower (US 2021/0027103) discloses the concept of an edge device tuning a trained model responsive to a predetermined number of images that are determined to be undetected object frames, however, that is not the same as tuning the trained model responsive to a determination that a predetermined number of images of the environment have been captured since the local version of the trained model was last tuned. Thus, while different prior art references disclose parts of the claim, none of the references discloses, or provides a reasonable motivation to combine to disclose, all of the limitations of the claim as a whole.
With regards to claim 9, Chen et al. (CN 111625361) discloses the concept of applying the local version of the model to a portion of data to produce a second output, computing a metric related to the second output, causing a cloud-based model to be applied to the portion of the data to produce a first output, the cloud-based model being more robust than the local version of the model, and altering the parameters of the cloud-based model and the local version of the model based on the metric. However, there is no mention of computing a metric indicative of similarity between the first and second outputs, or of altering the parameters of the local version of the model based on that similarity metric. Thus, while different prior art references disclose parts of the claim, none of the references discloses, or provides a reasonable motivation to combine to disclose, all of the limitations of the claim as a whole.
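For illustration only, the claim-9 feature the art was not shown to teach (a metric indicative of similarity between the cloud model's first output and the local model's second output, used to decide whether to alter the local model's parameters) can be sketched as a cosine similarity over score vectors. The choice of cosine similarity is hypothetical and is not drawn from any cited reference:

```python
def output_similarity(first_output, second_output):
    """Sketch of the claim-9 metric: cosine similarity between the
    cloud-based model's output and the local model's output, each
    represented as a vector of scores. A low value would trigger
    altering the local model's parameters. Illustrative only."""
    dot = sum(a * b for a, b in zip(first_output, second_output))
    norm_first = sum(a * a for a in first_output) ** 0.5
    norm_second = sum(b * b for b in second_output) ** 0.5
    return dot / (norm_first * norm_second)  # 1.0 = identical direction
```

Identical score vectors yield a similarity of 1.0, while disagreeing outputs drive the metric toward 0, signaling that the local version has diverged from the more robust cloud-based model.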
With regards to claim 10, it depends from claim 9 and contains allowable subject matter for at least the reasons stated above with respect to claim 9.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Applicants are directed to consider additional pertinent prior art included on the Notice of References Cited (PTOL 892) attached herewith.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAROL W CHAN whose telephone number is (571)272-5766. The examiner can normally be reached 9:30-3:30 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CAROL W CHAN/Primary Examiner, Art Unit 2672