The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This is in response to the application filed on 12/23/24, in which Claims 1-17 are presented for examination, of which Claims 1, 9, and 17 are in independent form.
Specification
Please include “alarm” in the title.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1, 9 and 17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to mental processes and organizing human activity without adding significantly more. The independent claims recite:
receive information indicating activity within a controlled environment (mental process: notice an individual entering the environment);
determine an event based on the information (mental process: individual looks fatigued);
receive more information (mental process: see a dog chasing the individual);
determine context information (mental process: determine the dog looks dangerous);
modify the event by the context information to generate an alarm (mental process: determine to get help); and
transmit a notification identifying the alarm to a monitoring device (human activity: call for help).
This judicial exception is not integrated into a practical application because there is no particular machine, no particular transformation, and no meaningful limitations that would amount to significantly more. The claims do not include additional elements that impose any meaningful limits on practicing the abstract idea.
MPEP 2106.05(e) states that the claim should add meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment to transform the judicial exception into patent-eligible subject matter. MPEP 2106.05(h) states that limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application.
The receiving and determining steps recite an abstract idea.
The modifying and transmitting steps recite the additional elements. The modifying and transmitting limitations are mere insignificant extra-solution activity that does not integrate the abstract idea into a practical application. The recited sensors, video capture devices, and monitoring device (Claims 1, 9 and 17), and the memory and processor (Claim 9), are generic computer components to which the abstract idea is applied, which is merely applying an abstract idea “with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea”.
The combination of the recited modifying, transmitting, and computer limitations would not integrate the abstract idea into a practical application, as it merely uses generic computer components to which the idea is applied.
The final analysis, whether the claim recites additional elements that amount to significantly more than the judicial exception, is similar to the preceding analysis except that well-understood, routine, and conventional devices are considered as part of the analysis. The modifying and transmitting limitations are merely insignificant extra-solution activity, so they do not amount to significantly more. The recited sensors, video capture devices, monitoring device, memory, and processor are generic computer components to which the abstract idea is applied, and again the mere applying of an abstract idea “with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea” does not amount to significantly more. The combination of the recited receiving and computer limitations does not amount to significantly more, as it is an application of the idea to generic computer components.
Dependent Claims 2-8 and 10-16 recite further mental processes and human activity, and hence add no particular machine, particular transformation, or meaningful limitation that would amount to significantly more; therefore, they are rejected as well.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 4-6, 8-14 and 16-17 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Stolbikov et al. (Stolbikov; US 20190108742 A1).
Regarding Claim 1, Stolbikov discloses a method (Abstract) comprising:
receiving sensor information captured by one or more sensors, the sensor information indicating activity within a controlled environment ([0040] first set of data sources include sensors that are constantly providing data to an electronic device. Examples of data sources in the first set include…location sensors (e.g., using satellite positioning system and/or inertial measurement), motion sensors (e.g., accelerometers and/or gyroscopes), and biometric sensors (e.g., body temperature sensor, heart rate sensor, etc.), [0100], 605 of Fig 6);
determining an event based on the sensor information ([0101], 610 of Fig 6);
receiving one or more video frames from one or more video capture devices ([0102], [0104], 615 of Fig 6);
determining context information based on the one or more video frames ([0103]-[0104], 620 of Fig 6);
modifying the event based on the context information to generate an alarm ([0105], 625 of Fig 6); and
transmitting a notification identifying the alarm to a monitoring device ([0105], 625 of Fig 6).
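For orientation, the claimed sequence mapped above can be sketched as a minimal pipeline. This is an illustrative sketch only: every function name, data shape, and numeric value below is an assumption chosen for exposition, not taken from the claims or from Stolbikov.

```python
# Hypothetical sketch of the claimed flow: sensor info -> event -> video
# context -> modified event -> alarm notification. Names/values are illustrative.

def determine_event(sensor_info):
    # Determine an event (with a base risk value) from sensor information.
    return {"type": "activity", "risk": sensor_info["motion_level"]}

def determine_context(video_frames):
    # Determine context information from the video frames (placeholder heuristic).
    return {"dynamic": len(video_frames)}

def modify_event(event, context, threshold=5):
    # Modify the event by the context information to decide whether to alarm.
    threat = event["risk"] + context["dynamic"]
    return {"alarm": threat > threshold, "threat": threat}

def notify(result):
    # Build the notification identifying the alarm for a monitoring device.
    return f"ALARM threat={result['threat']}" if result["alarm"] else "no alarm"

event = determine_event({"motion_level": 4})                 # receive + determine
context = determine_context(["frame1", "frame2", "frame3"])  # receive + determine
result = modify_event(event, context)                        # modify
print(notify(result))                                        # transmit -> "ALARM threat=7"
```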
Regarding Claims 2 and 10, Stolbikov discloses the event is associated with a risk value (first data set) and the context information is a dynamic value (second data set), and modifying the event comprises: generating a threat value by adding the dynamic value to the risk value ([0043] first and second sets of data sources may be aggregated after the user risk probability exceeds the predetermined threshold in order to determine whether the user is actually in danger, [0063] processor 205 may aggregate various types of data, including the first data and the second data, and makes the determination considering the aggregated data) or subtracting the dynamic value from the risk value; determining that the threat value is greater than a predefined threshold ([0063] second data may be used to confirm or refute the initial determination of the user being at risk (represented by the first probability exceeding the first threshold). Moreover, the processor 205 may initiate a first response when the first probability exceeds the first threshold and may then escalate or deescalate the first response based on the second data); and generating the alarm based on the threat value being greater than the predefined threshold ([0105], 625 of Fig 6).
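The add/subtract-then-compare logic mapped in this paragraph can be sketched as follows. This is a hypothetical illustration of the claimed arithmetic; the function name and numbers are assumptions, not drawn from Stolbikov.

```python
def generate_alarm(risk, dynamic, threshold, subtract=False):
    # Generate a threat value by adding (or subtracting) the dynamic value
    # to/from the risk value, then alarm only when it exceeds the threshold.
    threat = risk - dynamic if subtract else risk + dynamic
    return {"alarm": threat > threshold, "threat": threat}

print(generate_alarm(6, 4, 8))                 # 6 + 4 = 10 > 8 -> alarm
print(generate_alarm(6, 4, 8, subtract=True))  # 6 - 4 = 2, no alarm
```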
Regarding Claims 4 and 12, Stolbikov discloses determining the event based on the sensor information comprises determining, based on a machine learning model and the sensor information, the event based on the sensor information ([0061] In some embodiments, the processor 205 uses one or more computational models to calculate the first probability that the user is at risk. For example, the first data is input into a computation model and the probability that the user is at risk (e.g., the first probability) is updated as the first data is input. Here, the computational model is continually updated with new data; [0066] processor 205 uses the same computational model when calculating the second probability as used to calculate the first probability…second data are input into the computational model (and the first data updated as applicable) in order to determine the second probability).
Regarding Claims 5 and 13, Stolbikov discloses determining context information comprises determining, based on a machine learning model and the one or more video frames, the context information ([0061], [0066]).
Regarding Claims 6 and 14, Stolbikov discloses determining context information comprises at least one of: identifying one or more persons within the one or more video frames; identifying one or more attributes of one or more person within the one or more video frames; identifying an activity being performed within the one or more video frames ([0065] analyzing the image data for an indication of a conflict, an indication of an injury, and/or an indication of damage); identifying an object within the one or more video frames; identifying a number of objects within the one or more video frames; or identifying an environmental condition of a location within the one or more video frames.
Regarding Claims 8 and 16, Stolbikov discloses the one or more sensors include occupancy sensors, environmental sensors, door sensors, entry sensors, exit sensors, people counting sensors, temperature sensors ([0040]), liquid sensors, motion sensors, light sensors ([0040]), carbon monoxide sensors, smoke sensors, gas sensors, location sensors ([0040]), and/or pulse sensors ([0040]).
Regarding Claim 9, Stolbikov discloses a system (Abstract) comprising:
one or more video capture devices ([0014] second data includes…video data; [0041] data sources in the second set include, but are not limited to, microphones and other audio sensors, cameras and other image sensors, Fig 2);
one or more sensors ([0040], Fig 2); and
a monitoring platform comprising:
a memory ([0004], Fig 2); and
at least one processor ([0004]) coupled to the memory and configured to:
receive sensor information from the one or more sensors, the sensor information indicating activity within a controlled environment ([0040] first set of data sources include sensors that are constantly providing data to an electronic device. Examples of data sources in the first set include…location sensors (e.g., using satellite positioning system and/or inertial measurement), motion sensors (e.g., accelerometers and/or gyroscopes), and biometric sensors (e.g., body temperature sensor, heart rate sensor, etc.), [0100], 605 of Fig 6);
determine an event based on the sensor information ([0101], 610 of Fig 6);
receive one or more video frames from the one or more video capture devices ([0102], [0104], 615 of Fig 6);
determine context information based on the one or more video frames ([0103]-[0104], 620 of Fig 6);
modify the event by the context information to generate an alarm ([0105], 625 of Fig 6); and
transmit a notification identifying the alarm to a monitoring device ([0105], 625).
Regarding Claim 11, Stolbikov discloses the event is a risk value (first data set), the context information is a dynamic value (second data set), and to modify the event, the at least one processor is configured to: generate a threat value by adding the dynamic value to the risk value or subtracting the dynamic value from the risk value ([0043] first and second sets of data sources may be aggregated after the user risk probability exceeds the predetermined threshold in order to determine whether the user is actually in danger, [0063] processor 205 may aggregate various types of data, including the first data and the second data, and makes the determination considering the aggregated data); determine that the threat value is less than a predefined threshold ([0063] second data may be used to confirm or refute the initial determination of the user being at risk…the processor 205 may…deescalate the first response based on the second data); and clear the event based on the threat value being less than the predefined threshold ([0063] refute the initial determination).
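The de-escalation branch mapped for Claim 11 (clearing the event when the threat value falls below the threshold) can be sketched the same way, with illustrative names and numbers only:

```python
def evaluate_event(risk, dynamic, threshold):
    # The context (second) data may confirm or refute the initial determination:
    # below the threshold, the event is cleared rather than escalated to an alarm.
    threat = risk + dynamic  # the dynamic value may be negative (subtraction)
    return "cleared" if threat < threshold else "alarm"

print(evaluate_event(4, -2, 5))  # threat 2 < 5 -> "cleared"
print(evaluate_event(4, 3, 5))   # threat 7 -> "alarm"
```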
Regarding Claim 17, Stolbikov discloses a non-transitory computer-readable storage medium storing instructions that cause a processor to perform a method ([0013]) comprising:
receiving sensor information captured by one or more sensors, the sensor information indicating activity within a controlled environment ([0040] first set of data sources include sensors that are constantly providing data to an electronic device. Examples of data sources in the first set include…location sensors (e.g., using satellite positioning system and/or inertial measurement), motion sensors (e.g., accelerometers and/or gyroscopes), and biometric sensors (e.g., body temperature sensor, heart rate sensor, etc.), [0100], 605 of Fig 6);
determining an event based on the sensor information ([0101], 610 of Fig 6);
receiving one or more video frames from one or more video capture devices ([0102], [0104], 615 of Fig 6);
determining context information based on the one or more video frames ([0103]-[0104], 620 of Fig 6);
modifying the event based on the context information to generate an alarm ([0105], 625 of Fig 6); and
transmitting a notification identifying the alarm to a monitoring device ([0105], 625 of Fig 6).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Stolbikov.
Regarding Claim 3, Stolbikov discloses the event is associated with a risk value (first data set) and the context information is a dynamic value (second data set), and modifying the event comprises: generating a threat value by adding the dynamic value to the risk value ([0043], [0063]) or subtracting the dynamic value from the risk value; determining that the threat value is greater than a predefined threshold ([0063]); and generating the alarm ([0105], 625 of Fig 6), but does not teach the threat value being less than the threshold.
There were a finite number of known ways to determine whether the data represents a risk: either it exceeds a threshold, equals a threshold, or falls below a threshold (for instance, every bit of concerning data subtracting from a safe score until it falls below a threshold).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to determine that the threat value is less than a predefined threshold, choosing from a finite number of identified, predictable solutions of how to effectively determine risk from gathered sensor data, with a reasonable expectation of success.
Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Stolbikov in view of Nathan et al. (Nathan; US 20200137354 A1).
Regarding Claims 7 and 15, Stolbikov doesn’t specify determining the context information comprises determining an operational status of the one or more video capture devices.
In the same field of endeavor, Nathan discloses a network of connected cameras and scalable cloud computing. The network may segregate a retrieval server from a storage server, and by doing so, minimize the load on any one server.
Nathan discloses determining an operational status of the one or more video capture devices ([0172] camera (100) sends “heartbeat” camera status messages to the system).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Stolbikov with Nathan, using camera status, in order to provide scalable solutions for high-throughput camera provisioning and event recognition and to improve network efficiency, as suggested by Nathan ([0008]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
a. Trundle et al. (US 20210004910 A1) discloses a method for monitoring a property by obtaining sensor data from sensors at a property; determining, for a peril, a risk that the peril will occur at the property based on risk factors determined from the sensor data; selecting a particular risk factor from the risk factors based on the risk that the peril will occur at the property; and providing an indication of the particular risk.
b. Piccolo, III (US 20170084160 A1) discloses a method for minimizing or preventing false alarms. Override panels are used such as locally near or in the protected space or remotely at a security desk, to deactivate or block the generation of a fire alarm signal in the case where the occupants or a management personnel recognizes that the fire alarm signal should not be generated. In this way, an alarm verification step is included. In another aspect, additional, contextual information is used to characterize or adjust when fire alarm signals are generated. This contextual information can be generated from sources that are not typically used in the generation of the fire alarm signal but instead are based on other sources of the information concerning the protected space.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK S RUSHING whose telephone number is (571) 270-5876. The examiner can normally be reached 10am-6pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Davetta Goins can be reached at 571-272-2957. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARK S RUSHING/Primary Examiner, Art Unit 2689