DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 3, 2025 has been entered.
Response to Arguments
Applicant’s arguments with respect to the rejection under 35 U.S.C. 103 of claims 1-30 have been fully considered but they are not persuasive.
Applicant argues that the cited references fail to teach, disclose, or suggest all limitations of claims 1, 8, 15 and 23. In particular, Applicant asserts that the cited references fail to disclose "determine that the person is performing a suspicious activity based on a change in the attribute information of the second object in the video, wherein the change in the attribute information of the second object is based on a determination that the second object approaches the first object";
"output a first alert via the at least one image sensor based on determining that the person is performing the suspicious activity"; and
"output a notification information which is different from the first alert based on determining the first object is separated from the second object after the first object was held by the second object."
In response, the Examiner respectfully disagrees.
Aggarwal discloses a system for detecting suspicious activities based on changes in object/person attribute information such as direction, displacement, speed, path length, and region entry. The system generates an alert when a computed confidence weight exceeds a predefined threshold (Aggarwal, ¶¶ [0014], [0031]-[0038]).
Fischer discloses determining an alarm condition based on an object/person approaching a monitored target. Fischer discloses outputting alerts, including audible and visual alerts, as the object/person enters a predefined proximity zone (Fischer, ¶¶ [0010]-[0017] and [0031]-[0038]).
Okatani discloses a system for detecting when an object that was associated with a person becomes separated from the person, and issuing an event-specific notification, including an alarm screen or messages, to inform the user of the existence of the left object. Okatani further discloses that arbitrary notification methods may be implemented without limitation, such as operating an alarm device (for example, a lamp or speaker), using a projection device for projection, using a printing device for printing, or calling a prescribed phone (Okatani, Abstract, ¶¶ [0006], [0076], [0123] and FIG. 5).
Accordingly, the combination of Aggarwal, Fischer and Okatani teaches determining suspicious activity based on approach-related attribute changes, outputting a first alert with audible and visual modalities, and outputting notification information different from the first alert upon separation after association.
Applicant argues that Fischer teaches away from the claimed combination because Fischer seeks to avoid false alarms from inanimate objects, whereas Okatani detects left objects.
In response, the Examiner respectfully disagrees. Teaching away requires that a reference criticize, discredit, or otherwise discourage the solution claimed. Fischer’s discussion of thresholding and filtering is directed to reducing nuisance alarms caused by irrelevant environmental motion (for example, when inanimate objects such as plants, paper and the like are blown or moved into the active zone by, say, a strong wind, or when an animal stays within the active zone for a relatively long time), not to discouraging the monitoring of inanimate objects (Fischer, ¶¶ [0010]-[0017]). Okatani’s left-object detection focuses on monitoring relevant assets after association. These teachings are complementary and yield the predictable result of monitoring meaningful objects while suppressing noise.
Applicant acknowledges that Okatani discloses an alarm screen but argues this does not constitute notification information different from the first alert.
In response, the Examiner respectfully disagrees. Okatani discloses issuing information specific to a left-object event via alarm screens or messages after separation is detected (Okatani, ¶¶ [0050], [0092] and [0123]). Fischer’s approach-triggered alert and Okatani’s separation-triggered notification are different user-facing outputs corresponding to different detected events.
Applicant also argues that the Examiner impermissibly switches the identity of the "first object" between references and that the combination creates a paradox where the system would alarm when an owner returns to the object.
In response, the Examiner respectfully disagrees.
Regarding object identity, Aggarwal discloses detecting and tracking generic objects using low-level feature sets and association graphs (Aggarwal, ¶¶ [0016]-[0020]). Fischer and Okatani are relied upon for their functional teachings, approach-based alerting and separation-based notification, not for limiting the object to a specific physical embodiment. Under the broadest reasonable interpretation, the “first object” may be any tracked object.
Regarding the alleged paradox, Claim 1 does not require distinguishing an owner from an intruder, nor does it require suppressing alerts for an owner. Additionally, Claim 1 requires determining that the first object is not associated with the second object prior to determining suspicious activity. Thus, a scenario in which an associated owner returns to the object is not commensurate with the claim as drafted.
Applicant argues that the dependent claims are allowable based on their dependency from the allegedly allowable independent claims. For the reasons discussed above, this argument is not persuasive.
For the above reasons, Applicant’s arguments do not overcome the rejections and the claims remain unpatentable under 35 U.S.C. § 103.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 4-9, 11-16, 18-21, 23-24 and 26-29 are rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal et al. (US20050281435A1), hereinafter referred to as Aggarwal, in view of Fischer et al. (US20060250230A1), hereinafter referred to as Fischer, and Okatani et al. (US20220148322A1), hereinafter referred to as Okatani.
Regarding claim 1, Aggarwal discloses a monitoring system comprising:
at least one memory storing instructions (See Aggarwal, FIG. 3 and ¶¶ [0040] and [0041]);
at least one image sensor acquiring a video (See Aggarwal, FIG. 3 and ¶ [0011]); and
at least one processor configured to execute the instructions to (See Aggarwal, FIG. 3 and ¶¶ [0040] and [0041]):
detect a first object and attribute information of the first object in the video acquired by the at least one image sensor (See Aggarwal, ¶¶ [0012], [0013], [0016], [0023]-[0028] and [0042], disclosing low-level feature sets (representing people, objects, etc.) and attributes such as direction, displacement and speed);
monitor the first object in the video (See Aggarwal, ¶¶[0012]- [0013]);
detect a second object in the video (See Aggarwal, ¶¶[0012], [0013], [0016] and [0023]-[0028]);
detect attribute information of the second object in the video (See Aggarwal, ¶¶[0012], [0013], [0016] and [0023]-[0028]);
determine that the second object is a person based on the attribute information of the second object (See Aggarwal, ¶¶ [0016] and [0042]);
determine that the first object is not associated with the second object (See Aggarwal, ¶¶ [0017]-[0020], disclosing representation of the objects as graph nodes with arcs indicating association confidence, wherein the absence of an arc or failure to satisfy the association criteria indicates the objects are not associated);
determine that the person is performing a suspicious activity based on a change in the attribute information of the second object in the video (See Aggarwal, ¶¶ [0014], [0031]-[0038], disclosing that the system evaluates changing attributes (e.g., direction, displacement and speed) against predefined criteria; a confidence weight C is computed, and if it exceeds a threshold, the person is flagged as suspicious);
output a first alert via the at least one image sensor based on determining that the person is performing the suspicious activity (See Aggarwal, Fig. 1 and ¶¶ [0015] and [0016]).
Aggarwal does not explicitly disclose wherein the change in the attribute information of the second object is based on a determination that the second object approaches the first object; wherein the first alert includes an audible modality and a visual modality and output a notification information which is different from the first alert based on determining the first object is separated from the second object after the first object was held by the second object.
However, Fischer from the same or similar endeavor of security systems discloses wherein the change in the attribute information of the second object is based on a determination that the second object approaches the first object (See Fischer, ¶¶ [0010]-[0016] and [0031]-[0038]) and
wherein the first alert includes an audible modality and a visual modality (See Fischer ¶¶ [0014] and [0016]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings disclosed by Aggarwal to add the teachings of Fischer as above, in order to immediately inform a user about the escalation of a situation and avoid unnecessary false alarms by setting threshold values for the signal intensity measured by the sensor apparatus, with activation of the reaction apparatus being prevented when the signal intensity drops below the threshold values (See Fischer, ¶¶ [0016] and [0017]).
Furthermore, Okatani from the same or similar endeavor of monitoring system discloses output a notification information which is different from the first alert based on determining the first object is separated from the second object after the first object was held by the second object (Okatani, Abstract and ¶¶ [0006], [0076], [0123]).
It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings disclosed by Aggarwal and Fischer to add the teachings of Okatani as above, in order to reduce the burden on the monitoring staff and ensure accurate detection of the left objects (See Okatani ¶¶ [0002] and [0005]).
Regarding claim 2, Aggarwal, Fischer and Okatani disclose all the limitations of claim 1, and is analyzed as previously discussed with respect to that claim.
Furthermore, the combination discloses the monitoring system according to claim 1, wherein determining that a person is performing a suspicious activity includes determining a degree of suspicion (See Fischer, ¶¶ [0025] and [0032]-[0034]), and wherein, based on determining that the degree of suspicion is above a threshold,
the first alert is output (See Aggarwal, Fig. 1 and ¶¶ [0015] and [0016]).
Aggarwal does not explicitly disclose a second alert is output to a mobile device.
However, Fischer from the same or similar endeavor of security systems discloses a second alert is output to a mobile device (See Fischer, ¶¶ [0016] and [0017], disclosing audible, visual and/or haptic information, in particular via the vehicle key).
The motivation for combining Aggarwal, Fischer and Okatani has been discussed in connection with claim 1, above.
Regarding claim 4, Aggarwal, Fischer and Okatani disclose all the limitations of claim 1, and is analyzed as previously discussed with respect to that claim.
Aggarwal does not explicitly disclose the monitoring system according to claim 1, wherein the second object is determined to approach the first object based on a positional relationship.
However, Aggarwal discloses tracking attributes of detected objects such as direction, displacement and speed (See Aggarwal, ¶¶ [0014]-[0016] and [0031]-[0038]), which renders obvious determining that the second object approaches the first object based on a positional relationship.
The motivation for combining Aggarwal, Fischer and Okatani has been discussed in connection with claim 1, above.
Regarding claim 5, Aggarwal, Fischer and Okatani disclose all the limitations of claim 1, and is analyzed as previously discussed with respect to that claim.
Furthermore, Aggarwal discloses the monitoring system according to claim 1, wherein the at least one processor is configured to execute the instructions to: associate a plurality of objects in a plurality of images included in the video as a same object (See Aggarwal, ¶¶ [0016]-[0017]).
Regarding claim 6, Aggarwal, Fischer and Okatani disclose all the limitations of claim 1, and is analyzed as previously discussed with respect to that claim.
Furthermore, Aggarwal discloses the monitoring system according to claim 1, wherein the at least one processor is configured to execute the instructions to: associate a first label with the first object; and associate a second label with the second object (See Aggarwal, ¶¶ [0017] and [0018]).
Regarding claim 7, Aggarwal, Fischer and Okatani disclose all the limitations of claim 1, and is analyzed as previously discussed with respect to that claim.
Furthermore, Aggarwal discloses the monitoring system according to claim 1,
wherein the detection of the first object comprises detecting a position of the first object, and
wherein the detection of the second object comprises detecting a position of the second object, and
wherein the change in the attribute information of the second object is determined based on a change in the position of the second object that is closer to the position of the first object (See Aggarwal, ¶¶ [0027] and [0030]-[0034]).
Regarding claims 8, 9 and 11-14, these claims are rejected based on the same art and evidentiary limitations applied to the system of claims 1, 2 and 4-7, since they claim analogous subject matter in the form of a method for performing the same or equivalent functionality.
Regarding claim 15, Aggarwal discloses a monitoring system comprising:
at least one memory storing instructions (See Aggarwal, FIG. 3 and ¶¶ [0040] and [0041]);
at least one image sensor acquiring a video (See Aggarwal, FIG. 3 and ¶ [0011]); and
at least one processor configured to execute the instructions to (See Aggarwal, FIG. 3 and ¶¶ [0040] and [0041]):
detect a person and an object in the video (See Aggarwal, ¶¶[0012], [0013], [0016], [0023]-[0028] and [0042] disclosing low-level feature sets (representing people, objects, etc.));
output an alert based on the determination of the suspicious person (See Aggarwal, Fig. 1 and ¶¶ [0015] and [0016]).
Examiner notes that Aggarwal also discloses determining that the person is the suspicious person based on the displacements, path length and region entry (See Aggarwal, FIG. 3 and ¶¶ [0031]-[0034]).
Aggarwal does not explicitly disclose determine that the detected person is a suspicious person based on determining that the detected person approaches the detected object; and output a notification information which is different from the alert based on determining that the detected object is separated from the detected person after the detected object was held by the detected person.
However, Fischer from the same or similar endeavor of security systems discloses determine that the detected person is a suspicious person based on determining that the detected person approaches the detected object (See Fischer, ¶¶ [0010]-[0017]); and
wherein the first alert includes an audible modality and a visual modality (See Fischer ¶¶ [0014] and [0016]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings disclosed by Aggarwal to add the teachings of Fischer as above, in order to immediately inform a user about the escalation of a situation and avoid unnecessary false alarms by setting threshold values for the signal intensity measured by the sensor apparatus, with activation of the reaction apparatus being prevented when the signal intensity drops below the threshold values (See Fischer, ¶¶ [0016] and [0017]).
Furthermore, Okatani from the same or similar endeavor of monitoring system discloses output a notification information which is different from the alert based on determining that the detected object is separated from the detected person after the detected object was held by the detected person (Okatani, Abstract and ¶¶ [0006], [0076], [0123]).
It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings disclosed by Aggarwal and Fischer to add the teachings of Okatani as above, in order to reduce the burden on the monitoring staff and ensure accurate detection of the left objects (See Okatani ¶¶ [0002] and [0005]).
Okatani and Fischer further disclose recording whether the detected person approaches the detected object (See Okatani, ¶¶ [0024], [0032] and [0034]; and Fischer, ¶¶ [0010]-[0017]).
Regarding claim 16, this claim is rejected based on the same art and evidentiary limitations applied to claims 1 and 2, since it claims analogous subject matter for performing the same or equivalent functionality.
Regarding claim 18, Aggarwal, Fischer and Okatani disclose all the limitations of claim 15, and is analyzed as previously discussed with respect to that claim.
Aggarwal does not explicitly disclose the monitoring system according to claim 15, wherein the detected person is determined to approach the detected object based on a positional relationship.
However, Aggarwal discloses tracking attributes of detected persons and objects such as direction, displacement and speed (See Aggarwal, ¶¶ [0014]-[0016] and [0031]-[0038]), which renders obvious determining that the detected person approaches the detected object based on a positional relationship.
The motivation for combining Aggarwal, Fischer and Okatani has been discussed in connection with claim 15, above.
Regarding claim 19, Aggarwal, Fischer and Okatani disclose all the limitations of claim 15, and is analyzed as previously discussed with respect to that claim.
Furthermore, Aggarwal discloses the monitoring system according to claim 15, wherein the at least one processor is configured to execute the instructions to: associate a plurality of images included in the video as a same person (See Aggarwal, ¶¶ [0016]-[0017]).
Regarding claim 20, Aggarwal, Fischer and Okatani disclose all the limitations of claim 15, and is analyzed as previously discussed with respect to that claim.
Furthermore, Aggarwal discloses the monitoring system according to claim 15, wherein the at least one processor is configured to execute the instructions to: associate a first label with the detected person; associate a second label with the detected object; and associate the second label with the first label (See Aggarwal, ¶¶ [0017]-[0020]).
Regarding claim 21, Aggarwal, Fischer and Okatani disclose all the limitations of claim 15, and is analyzed as previously discussed with respect to that claim.
Furthermore, Aggarwal discloses the monitoring system according to claim 15, wherein the detection of the person comprises detecting a position of the person, and wherein the detection of the object comprises detecting a position of the object (See Aggarwal, ¶¶ [0031]-[0034]).
Regarding claims 23, 24 and 26-29, these claims are rejected based on the same art and evidentiary limitations applied to the system of claims 15, 16 and 18-21, since they claim analogous subject matter in the form of a method for performing the same or equivalent functionality.
Claims 3, 10, 17 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal, in view of Fischer and Okatani, and Official Notice of routine practice.
Regarding claim 3, Aggarwal, Fischer and Okatani disclose all the limitations of claim 1, and is analyzed as previously discussed with respect to that claim.
Furthermore, Aggarwal discloses a terminal, the terminal being different from the at least one image sensor (See Aggarwal, ¶ [0040]).
Aggarwal does not explicitly disclose the monitoring system according to claim 1, wherein the at least one processor is configured to execute the instructions to: output an image of the second object in a case where the second alert is output, wherein the image is displayed on a terminal, and wherein the second alert is different from the first alert.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to output an image of the second object in a case where the second alert is output, and to display the image on the display disclosed by Aggarwal, because it is well known in the art that, when a suspicious activity is detected or an alert is generated in a surveillance system, the corresponding image is displayed to an operator, as evidenced by US 20170193279 A1, ¶ [0042]; US 20160342846 A1, ¶ [0041]; US 20180150683 A1, ¶ [0063]; US 20170109891 A1, ¶ [0038]; and US 20140347486 A1, ¶ [0037]. Therefore, displaying/outputting such imagery is merely a predictable use of an old and well-known expedient in the art.
Furthermore, Fischer from the same or similar endeavor of security systems discloses wherein the second alert is different from the first alert (See Fischer, ¶¶ [0016] and [0017]).
The motivation for combining Aggarwal, Fischer and Okatani has been discussed in connection with claim 1, above.
Examiner notes that Okatani also discloses a system that generates an alarm screen for displaying information about a left object, for example, but not limited to, an image of the owner of the left object (Okatani, ¶¶ [0050], [0092], [0116] and [0120]).
The motivation for combining Aggarwal, Fischer and Okatani has been discussed in connection with claim 1, above.
Regarding claim 10, this claim is rejected based on the same art and evidentiary limitations applied to the system of claim 3, since it claims analogous subject matter in the form of a method for performing the same or equivalent functionality.
Regarding claim 17, this claim is rejected based on the same art and evidentiary limitations applied to the system of claim 3, since it claims analogous subject matter for performing the same or equivalent functionality.
Furthermore, Okatani from the same or similar endeavor of monitoring system discloses output an image of the person based on determining that the detected object is separated from the detected person after the detected object was held by the detected person (Okatani, ¶¶ [0050], [0092], [0116] and [0120]).
The motivation for combining Aggarwal, Fischer and Okatani has been discussed in connection with claim 1, above.
Regarding claim 25, this claim is rejected based on the same art and evidentiary limitations applied to the system of claim 3, since it claims analogous subject matter for performing the same or equivalent functionality.
Claims 22 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal, in view of Fischer and Okatani, and further in view of Sriram (US 20200151489 A1), hereinafter referred to as Sriram.
Regarding claim 22, Aggarwal, Fischer and Okatani disclose all the limitations of claim 15, and is analyzed as previously discussed with respect to that claim.
Aggarwal does not explicitly disclose the monitoring system according to claim 15, wherein the object is determined to be a package based on an attribute of the object.
However, Sriram from the same or similar endeavor of imaging system discloses the monitoring system according to claim 15, wherein the object is determined to be a package based on an attribute of the object (See Sriram ¶ [0031]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings disclosed by Aggarwal, Fischer and Okatani to add the teachings of Sriram as above, in order to provide an object detector that uses a computer vision algorithm, an object detection algorithm, and/or a machine learning model(s) to detect objects and/or persons represented by the sensor data (e.g., depicted in images represented by the sensor data). For example, the object detector may be used, and correspondingly trained or programmed, to generate bounding shapes corresponding to objects (e.g., bags, packages, backpacks, luggage, items, etc.) and persons (e.g., people, adults, kids, animals, etc.) (Sriram, ¶ [0031]).
Regarding claim 30, this claim is rejected based on the same art and evidentiary limitations applied to the system of claim 22, since it claims analogous subject matter for performing the same or equivalent functionality.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FABIO LIMA whose telephone number is (571)270-0625. The examiner can normally be reached on Monday through Friday, 8:30 AM - 5:00 PM (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jamie Atala can be reached on (571) 272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FABIO S LIMA/Examiner, Art Unit 2486