DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 5, 6, 26, and 27 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Athreya et al. (US 20230048206).
As to claim 1, Athreya discloses a method (FIGS. 1-2) comprising:
determining, by a device, a power saving criteria associated with an amount of power consumed by the device for performing analytics (see [0025], determining 102 the environmental condition … indicate an illumination condition; see [0026], The apparatus may control 104 a machine learning model structure based on the environmental condition to control (or regulate, for example) apparatus power consumption associated with a processing load of the machine learning model structure … The apparatus may control 104 the machine learning model structure based on the environmental condition by controlling the number of machine learning models (e.g., neural networks) and/or the number of machine learning model components [of] the machine learning model structure; FIG. 2, step 202; see [0030], L1, L2, … L5);
selecting, by the device, from among two or more neural networks configured for performing the analytics under different power saving criteria, a neural network that is configured for performing the analytics under the power saving criteria (see [0031], controlling 104 the machine learning model may include selecting a machine learning model or models from a machine learning model ensemble … The machine learning model ensemble may include multiple machine learning models (e.g., pre-trained deep neural networks (DNNs)), from which machine learning model or models are selected to reduce apparatus power consumption during inferencing; see [0049]; FIG. 2, steps 204-206); and
using the neural network to perform the analytics by the device (see [0051]).
As to claim 2, Athreya further discloses wherein the device comprises a camera (see [0071]), wherein the analytics comprises image or video analytics (see [0051]).
As to claim 5, Athreya further discloses wherein the power saving criteria is associated with an amount of data captured by the camera under different environmental or scene complexity conditions (see [0026]).
As to claim 6, Athreya further discloses wherein determining the power saving criteria comprises receiving a selection of a power saving mode via a user interface on the camera (see [0025]).
As to claim 26, Athreya discloses a computing device (FIG. 3) comprising:
a memory storing instructions (FIG. 3, memory 306; see [0055]); and
a processor communicatively coupled with the memory and configured to execute the instructions (FIG. 3, processor 304; see [0054]) to:
determine, by a device, a power saving criteria associated with an amount of power consumed by the device for performing analytics (see [0025], determining 102 the environmental condition … indicate an illumination condition; see [0026], The apparatus may control 104 a machine learning model structure based on the environmental condition to control (or regulate, for example) apparatus power consumption associated with a processing load of the machine learning model structure … The apparatus may control 104 the machine learning model structure based on the environmental condition by controlling the number of machine learning models (e.g., neural networks) and/or the number of machine learning model components [of] the machine learning model structure; FIG. 2, step 202; see [0030], L1, L2, … L5);
select, by the device, from among two or more neural networks configured for performing the analytics under different power saving criteria, a neural network that is configured for performing the analytics under the power saving criteria (see [0031], controlling 104 the machine learning model may include selecting a machine learning model or models from a machine learning model ensemble … The machine learning model ensemble may include multiple machine learning models (e.g., pre-trained deep neural networks (DNNs)), from which machine learning model or models are selected to reduce apparatus power consumption during inferencing; see [0049]; FIG. 2, steps 204-206); and
use the neural network to perform the analytics by the device (see [0051]).
As to claim 27, Athreya discloses a non-transitory computer-readable medium storing instructions executable by a processor of a computing device (see [0062]-[0063]), wherein the instructions, when executed, cause the processor to:
determine, by a device, a power saving criteria associated with an amount of power consumed by the device for performing analytics (see [0025], determining 102 the environmental condition … indicate an illumination condition; see [0026], The apparatus may control 104 a machine learning model structure based on the environmental condition to control (or regulate, for example) apparatus power consumption associated with a processing load of the machine learning model structure … The apparatus may control 104 the machine learning model structure based on the environmental condition by controlling the number of machine learning models (e.g., neural networks) and/or the number of machine learning model components [of] the machine learning model structure; FIG. 2, step 202; see [0030], L1, L2, … L5);
select, by the device, from among two or more neural networks configured for performing the analytics under different power saving criteria, a neural network that is configured for performing the analytics under the power saving criteria (see [0031], controlling 104 the machine learning model may include selecting a machine learning model or models from a machine learning model ensemble … The machine learning model ensemble may include multiple machine learning models (e.g., pre-trained deep neural networks (DNNs)), from which machine learning model or models are selected to reduce apparatus power consumption during inferencing; see [0049]; FIG. 2, steps 204-206); and
use the neural network to perform the analytics by the device (see [0051]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Athreya et al. (US 20230048206) in view of Yang et al. (US 20220094847).
As to claim 3, Athreya fails to explicitly disclose wherein determining the power saving criteria comprises selecting between a day-time operation and a night-time operation, wherein the two or more neural networks comprise:
a first neural network configured for performing the image or video analytics during the day-time operation; and
a second neural network configured for performing the image or video analytics during the night-time operation.
However, Yang teaches wherein determining the power saving criteria comprises selecting between a day-time operation and a night-time operation (FIG. 6, day mode at 202 and night mode at 210), wherein the two or more neural networks comprise:
a first neural network configured for performing the image or video analytics during the day-time operation (FIG. 6, step 206; see [0071]); and
a second neural network configured for performing the image or video analytics during the night-time operation (FIG. 6, step 214; see [0072]).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Athreya using Yang’s teachings to include wherein determining the power saving criteria comprises selecting between a day-time operation and a night-time operation, wherein the two or more neural networks comprise: a first neural network configured for performing the image or video analytics during the day-time operation; and a second neural network configured for performing the image or video analytics during the night-time operation, in order to reduce power consumption and to provide a high-performance image processing and computer vision pipeline in minimal area and with minimal power consumption (Yang; [0050], [0054]).
As to claim 4, the combination of Athreya and Yang further discloses wherein the camera is configured to capture colored image or video during the day-time operation (Yang; see [0071]), wherein the camera is configured to capture black and white image or video during the night-time operation (Yang; see [0072]), wherein the first neural network has a greater number of nodes or edges as compared to the second neural network (Athreya: see [0035]-[0036]; Yang: see [0072]).
Claims 7, 8, and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Athreya et al. (US 20230048206) in view of Gigot (US 9906722).
As to claim 7, Athreya fails to explicitly disclose further comprising:
determining, by the camera, whether the image or video analytics performed at the camera has returned a detection result within a threshold period of time; and
placing, by the camera, the image or video analytics in a sleep mode responsive to an absence of any detection results returned by the image or video analytics.
However, Gigot teaches determining, by the camera, whether the image or video analytics performed at the camera has returned a detection result within a threshold period of time (FIG. 7, step 330); and
placing, by the camera, the image or video analytics in a sleep mode responsive to an absence of any detection results returned by the image or video analytics (FIG. 7, step 304).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Athreya using Gigot’s teachings to include determining, by the camera, whether the image or video analytics performed at the camera has returned a detection result within a threshold period of time; and placing, by the camera, the image or video analytics in a sleep mode responsive to an absence of any detection results returned by the image or video analytics, in order to conserve power (Gigot; col. 14, lines 55-64).
As to claim 8, the combination of Athreya and Gigot further discloses further comprising:
determining, by the camera, subsequent to placing the image or video analytics in the sleep mode, whether a motion is detected in a vicinity of the camera (Gigot; FIG. 7, step 308); and
resuming, by the camera, the image or video analytics responsive to detection of the motion in the vicinity of the camera (Gigot; FIG. 7, step 320).
As to claim 15, Athreya fails to explicitly disclose further comprising:
determining, by the camera, whether a peripheral connection of the camera is connected to any peripheral devices; and
placing, by the camera, the peripheral connection in a low power state responsive to determining that the peripheral connection is not connected to any peripheral devices.
However, Gigot teaches determining, by the camera, whether a peripheral connection of the camera is connected to any peripheral devices (col. 7, lines 53-59: motion sensors 102a-102b, the ambient light sensor 104); and
placing, by the camera, the peripheral connection in a low power state responsive to determining that the peripheral connection is not connected to any peripheral devices (col. 7, lines 53-67).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Athreya using Gigot’s teachings to include determining, by the camera, whether a peripheral connection of the camera is connected to any peripheral devices; and placing, by the camera, the peripheral connection in a low power state responsive to determining that the peripheral connection is not connected to any peripheral devices, in order to reduce power consumption (Gigot; col. 7, lines 53-59).
As to claim 16, the combination of Athreya and Gigot further discloses wherein placing the peripheral connection in the low power state comprises turning off the peripheral connection (Gigot; col. 7, lines 53-67: disabling devices).
As to claim 17, the combination of Athreya and Gigot further discloses wherein placing the peripheral connection in the low power state comprises reducing a polling rate of the peripheral connection for data (Gigot; col. 7, lines 53-67: disabling devices).
Claims 9-12 are rejected under 35 U.S.C. 103 as being unpatentable over Athreya et al. (US 20230048206) in view of Lim et al. (US 9838641).
As to claim 9, Athreya further discloses further comprising:
receiving, by the camera, via a user interface on the camera, a power saving criteria associated with an amount of power consumed by the camera for performing the image or video analytics (see [0025]-[0026]).
Athreya fails to explicitly disclose performing, by the camera, the image or video analytics on image or video data having a frames per second “FPS” value or an image resolution value configured to meet the power saving criteria.
However, Lim teaches performing, by the camera, the image or video analytics on image or video data having a frames per second “FPS” value or an image resolution value configured to meet the power saving criteria (col. 6, lines 24-51).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Athreya using Lim’s teachings to include performing, by the camera, the image or video analytics on image or video data having a frames per second “FPS” value or an image resolution value configured to meet the power saving criteria, in order to reduce power consumption (Lim; col. 6, lines 17-21).
As to claim 10, the combination of Athreya and Lim further discloses wherein receiving the power saving criteria comprises receiving the FPS value or the image resolution value via the user interface on the camera (Lim; col. 6, lines 24-51).
As to claim 11, the combination of Athreya and Lim further discloses wherein receiving the power saving criteria comprises receiving, via the user interface, a confidence level associated with detection results of the image or video analytics, wherein performing the image or video analytics comprises performing the image or video analytics on the image or video data having the image resolution value configured for reaching the confidence level (Athreya; see [0015], [0033]).
As to claim 12, the combination of Athreya and Lim further discloses wherein receiving the power saving criteria comprises receiving, via the user interface, a responsiveness level for the camera, wherein performing the image or video analytics comprises performing the image or video analytics on the image or video data having the FPS value configured for providing the responsiveness level (Athreya; see [0076]-[0077]; Lim; col. 6, lines 6-51).
Claims 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Athreya et al. (US 20230048206) in view of Abramson et al. (US 20190042483).
As to claim 13, Athreya fails to explicitly disclose further comprising:
determining, by the camera, whether all streams in a video pipeline of the camera are being used to stream video data output by the camera; and
closing, by the camera, one or more streams and one or more associated buffers responsive to determining that the one or more streams are not being used.
However, Abramson teaches determining, by the camera, whether all streams in a video pipeline of the camera are being used to stream video data output by the camera (see [0049], [0084]); and
closing, by the camera, one or more streams and one or more associated buffers responsive to determining that the one or more streams are not being used (see [0049], [0084]).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Athreya using Abramson’s teachings to include determining, by the camera, whether all streams in a video pipeline of the camera are being used to stream video data output by the camera; and closing, by the camera, one or more streams and one or more associated buffers responsive to determining that the one or more streams are not being used, in order to reduce power consumption (Abramson; [0049]).
As to claim 14, the combination of Athreya and Abramson further discloses further comprising: closing the video pipeline responsive to determining that no streams in the video pipeline are being used (Abramson; [0049], [0084]).
Claims 18-21 are rejected under 35 U.S.C. 103 as being unpatentable over Athreya et al. (US 20230048206) in view of Liu et al. (US 20200307455).
As to claim 18, Athreya fails to explicitly disclose further comprising:
determining, by the camera, whether a defogging of a lens of the camera is required; and
controlling, by the camera, a heater configured to defog the lens of the camera responsive to determining that the defogging of the lens of the camera is required.
However, Liu teaches determining, by the camera, whether a defogging of a lens of the camera is required (see [0012]); and
controlling, by the camera, a heater configured to defog the lens of the camera responsive to determining that the defogging of the lens of the camera is required (see [0012]).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Athreya using Liu’s teachings to include determining, by the camera, whether a defogging of a lens of the camera is required; and controlling, by the camera, a heater configured to defog the lens of the camera responsive to determining that the defogging of the lens of the camera is required, in order to perform a defogging process of the lens and increase image quality (Liu; [0011]-[0012]).
As to claim 19, the combination of Athreya and Liu further discloses wherein determining whether the defogging is required comprises analyzing a blurriness or a sharpness of images captured by the camera (Liu; [0012]).
As to claim 20, the combination of Athreya and Liu further discloses wherein controlling the heater comprises:
analyzing a blurriness or a sharpness of images captured by the camera (Liu; [0012]); and
controlling a power supplied to the heater based on the blurriness or the sharpness of the images captured by the camera (Liu; [0012]).
As to claim 21, the combination of Athreya and Liu further discloses wherein controlling the heater comprises controlling a power supplied to the heater according to a stored power curve or table stored on the camera (Liu; [0017]).
Claims 22 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Athreya et al. (US 20230048206) in view of Kim et al. (US 10120428).
As to claim 22, Athreya fails to explicitly disclose further comprising:
displaying, by the camera, a power management dashboard on a user interface of the camera;
wherein the power management dashboard includes one or more power management indicators and one or more user input receivers;
wherein the one or more power management indicators include a central processing unit “CPU” usage indicator and a power consumption indicator configured, respectively, to display real-time measurements of a CPU usage and a power consumption of the camera; and
wherein the one or more user input receivers are configured for receiving user input for selecting a power saving mode for the camera.
However, Kim teaches displaying, by the camera, a power management dashboard on a user interface of the camera (FIG. 18; see FIG. 4, resource manager 411, power manager 412);
wherein the power management dashboard includes one or more power management indicators and one or more user input receivers (FIG. 15, FIG. 18);
wherein the one or more power management indicators include a central processing unit “CPU” usage indicator and a power consumption indicator configured, respectively, to display real-time measurements of a CPU usage and a power consumption of the camera (FIG. 5, CPU monitor 512 and CPU utilization, battery monitor 517 and battery state; FIG. 7, reference consumption current update unit 720, resource manager 730); and
wherein the one or more user input receivers are configured for receiving user input for selecting a power saving mode for the camera (FIG. 15).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Athreya using Kim’s teachings to include displaying, by the camera, a power management dashboard on a user interface of the camera; wherein the power management dashboard includes one or more power management indicators and one or more user input receivers; wherein the one or more power management indicators include a central processing unit “CPU” usage indicator and a power consumption indicator configured, respectively, to display real-time measurements of a CPU usage and a power consumption of the camera; and wherein the one or more user input receivers are configured for receiving user input for selecting a power saving mode for the camera, in order to improve power management in an electronic device (Kim; col. 1, lines 40-41).
As to claim 23, the combination of Athreya and Kim further discloses further comprising streaming, to a building management device, power management information and metadata associated with the one or more power management indicators and the one or more user input receivers (Athreya; [0052], [0062]; Kim; FIGS. 1-2).
Claims 24 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Athreya et al. (US 20230048206) in view of Andrei et al. (US 20200349681).
As to claim 24, Athreya fails to explicitly disclose wherein the two or more neural networks comprise a generative adversarial network “GAN” that comprises a generator neural network and a discriminator neural network.
However, Andrei teaches wherein the two or more neural networks comprise a generative adversarial network “GAN” that comprises a generator neural network and a discriminator neural network (see [0056], [0094]).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Athreya using Andrei’s teachings to include wherein the two or more neural networks comprise a generative adversarial network “GAN” that comprises a generator neural network and a discriminator neural network, in order to perform image enhancement while conserving available power (Andrei; [0032]).
As to claim 25, the combination of Athreya and Andrei further discloses wherein the different power saving criteria comprises a first power saving criteria and a second power saving criteria, wherein the second power saving criteria allows for consuming more power than the first power saving criteria (Athreya; see [0030] and [0035]), wherein selecting the neural network comprises:
selecting, responsive to the power saving criteria being the first power saving criteria, only the generator neural network, only the discriminator neural network, or a different neural network different than the generator neural network and the discriminator neural network, for performing the analytics (Athreya; see [0040]; Andrei; see [0074]); and
selecting, responsive to the power saving criteria being the second power saving criteria, the GAN for performing the analytics (Athreya; see [0030]; Andrei; see [0057]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BOUBACAR ABDOU TCHOUSSOU whose telephone number is (571)272-7625. The examiner can normally be reached M-F 8am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Kelley, can be reached at 571-272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BOUBACAR ABDOU TCHOUSSOU/Primary Examiner, Art Unit 2482