DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
A Preliminary Amendment entered March 22, 2024 amends the abstract, specification, drawings, and claims, amending claims 3-4, 6-8, and 12-13 and adding claims 14-17, such that claims 1-17 are pending.
Election/Restrictions
Applicant’s election without traverse of Claims 1-8 and 12-13 (Group 1) in the reply filed on March 2, 2026 is acknowledged.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on March 21 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.
Claim Objections
Claim 12 is objected to because of the following informalities: Claim 12 is missing a colon after the word “comprising” in the preamble, which is needed to separate the preamble from the body of the claim. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 13 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because data per se and/or computer programs do not fall into one of the four categories of statutory invention (machine, process, manufacture, composition). More specifically, claims are eligible for patent protection under § 101 if they are in one of the four statutory categories and not directed to a judicial exception to patentability (i.e., laws of nature, natural phenomena, and abstract ideas). Alice Corp. v. CLS Bank Int'l, 573 U.S. 208 (2014).
Regarding Claim 13, the claim is drawn towards a “computer program”. As described in MPEP § 2106, data per se and computer programs do not fall into one of the four statutory categories. Therefore, since claim 13 is drawn towards a computer program, the claim is not eligible for patent protection.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 6-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 6, 7, and 8 each recite the term “substantially,” which is a relative term that renders the claims indefinite. The term “substantially” is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The applicant's specification does not provide a standard for measuring "substantially" (see pg 5 ln 4-15; pg 10 ln 18-22). Thus, the applicant has failed to define the limitation with sufficient metes and bounds to establish definiteness. See MPEP § 2173.05(b). Therefore, claims 6, 7, and 8 are each rejected as indefinite. The examiner suggests amending each claim to remove the relative term “substantially” from the claim language.
Claim 8 recites the limitation “said third camera” in the limitation “further comprising positioning said third camera to capture a further substantially,” but “third camera” is first introduced in claim 4, while claim 8 depends from claim 1. There is insufficient antecedent basis for this limitation in the claim. Thus, Applicant has failed to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. The examiner suggests amending the claim's dependency to provide proper antecedent basis for the limitation “said third camera.”
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5, 12, 13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hallett et al (US 2020/0143561).
Regarding Claim 1, Hallett et al teach a method of operating a space monitoring apparatus (process 200 of monitoring an environment 101 using an entity-tracking system 100; Fig 1, 2 and ¶ [0019], [0025]) in networked communication with a set of cameras (system 100 includes first camera 102 and second camera 103 connected with server 180; Fig 1 and ¶ [0019]-[0020]) including at least a first camera positioned to capture position data and image data suitable for individuation of a subject (first camera 102 with field of view for monitored environment 101 to identify and track individual entities (subjects), such as first entity 110; Fig 1 and ¶ [0019]) and a second camera positioned to capture position data of a subject (second camera 103 has an associated field of view for monitored environment 101 to identify and track individual entities (subjects), such as first entity 110; Fig 1 and ¶ [0019]), said first and said second cameras sharing at least a portion of a field of view (the first camera 102 and second camera 103 acquire images for an associated field of view for the monitored environment 101; Fig 1 and ¶ [0019]), comprising:
extracting first individuating data and first position data of a first subject from an image captured by said first camera (the first camera 102 acquires a set of images of the monitored environment 101, including of entity 110, with analysis including use of a spatial coordinate system to determine positional data of the entity; Fig 1, 2 and ¶ [0019]-[0020], [0026], [0033]);
extracting second position data of a second subject from an image captured by said second camera (the second camera 103 acquires a set of images of the monitored environment 101, including of entity 110, with analysis including use of a spatial coordinate system to determine positional data of the entity; Fig 1, 2 and ¶ [0019]-[0020], [0026], [0033]);
matching said first and said second position data (images of the entity 110, acquired by the cameras 102, 103 with same positional data from spatial coordinate system, are matched for single object tracking, with matching of the multiple images (SIFT matches coordinates of objects between two images); Fig 1, 2 and ¶ [0019]-[0020], [0026]-[0028], [0034], [0081]);
responsive to a positive match from said position matcher indicating that said first and said second subject are an identical entity, creating a tracking reference tagged with said first individuating data (the image data from the multiple fields of view matched is used for entity tracking by tracking the motion of the entity (such as via tracking bounding box motion with bounding box tagged to detected entity); Fig 1, 2 and ¶ [0030], [0034]-[0035]);
storing said tracking reference tagged with said first individuating data in a data store for reuse (the images and associated measurement data (positioning, tracking) are stored on the server 180 in a database and may be retrieved at a later time; Fig 1, 2 and ¶ [0023], [0026]); and
signalling said tracking reference to a tracking logic component to coordinate continuity of tracking of said identical entity with said set of cameras (the entity-tracking system may include a single shot multibox detection (SSD) neural network, which SSD neural network uses the visual feature positions identified in the multiple camera 102, 103 images to perform tracking of the entity 110 concurrently during imaging, thereby providing real-time tracking of the entity; Fig 1, 2 and ¶ [0024]-[0027], [0030]-[0031]).
Regarding Claim 2, Hallett et al teach the method of claim 1 (as described above), said extracting first individuating data further comprising operating a machine-learning model to determine characteristics of a subject in an image (features of the first entity 110 are extracted from the first camera 102 images and classified using a neural network system; Fig 1, 2 and ¶ [0024], [0031], [0036]-[0037]).
Regarding Claim 3, Hallett et al teach the method of claim 1 (as described above), further comprising: extracting second individuating data from a further image (features of the first entity 110 are extracted from the second camera 103 images; Fig 1, 2 and ¶ [0019], [0031], [0036]-[0037]); querying said data store for stored said first individuating data matching said second individuating data (accessing the database data for the first camera 102 images and associated measurement data (positioning, tracking) stored on the server 180 and using the data for entity 110 feature matching to the second camera 103 images using SIFT techniques; Fig 1, 2, 3 and ¶ [0023]-[0028], [0031], [0036]-[0037], [0075]); and in response to a positive match from said querying (matching performed between image data from cameras 102, 103; ¶ [0075]), indicating that said first and said second subject are an identical entity, reusing said tracking reference (the matching data may be stored and used again to match an entity detected from the current visit to a previously-detected entity visit; ¶ [0062], [0075]).
Regarding Claim 4, Hallett et al teach the method of claim 1 (as described above), said signalling comprising signalling said tracking reference to a modelling logic component associated with a third camera in said set of cameras to coordinate tracking of said identical entity in images captured by said third camera (a third camera can be used to capture additional image data (“PEO3” ¶ [0089]) to additionally track the entity 110, including third location and positional data for matching and entity tracking via SIFT (¶ [0027]), with the entity attributes and positional location for tracking based on third measurements and confidence values; Fig 1, 2, 3 and ¶ [0043], [0087]-[0090]).
Regarding Claim 5, Hallett et al teach the method of claim 4 (as described above), said coordinating tracking comprising replacing a new tracking reference generated for an image captured by said third camera with said tracking reference signalled to said modelling logic component (a reference position is used for spatial coordinates for the monitored environment (¶ [0020]) and a reference model associated with the given entity will update the given numerical attributes from the first camera to the second camera (or given third camera by matching PEO1 to PEO3 for same entity (Entity 1 ¶ [0095]) and the record (position) is updated for the entity-tracking system; Fig 1, 2, 3 and ¶ [0082], [0087]).
Regarding Claim 12, Hallett et al teach a space monitoring apparatus (entity-tracking system 100; Fig 1, 2 and ¶ [0019], [0025]) operable in networked communication with a set of cameras (system 100 includes first camera 102 and second camera 103 connected with server 180; Fig 1 and ¶ [0019]-[0020]) including at least a first camera positioned to capture position data and image data suitable for individuation of a subject (first camera 102 with field of view for monitored environment 101 to identify and track individual entities (subjects), such as first entity 110; Fig 1 and ¶ [0019]) and a second camera positioned to capture position data of a subject (second camera 103 has an associated field of view for monitored environment 101 to identify and track individual entities (subjects), such as first entity 110; Fig 1 and ¶ [0019]), said first and said second cameras sharing at least a portion of a field of view (the first camera 102 and second camera 103 acquire images for an associated field of view for the monitored environment 101; Fig 1 and ¶ [0019]), comprising:
electronic logic circuitry (server 180 performs the operations for the entity-tracking system 100; Fig 1, 2 and ¶ [0020], [0023]-[0026]) operable to perform the method according to claim 1 (as described above).
Regarding Claim 13, Hallett et al teach a computer program comprising computer program code (the entity-tracking system 100 includes instructions of computer code; Fig 1, 2 and ¶ [0025]) to, when loaded into a computer system and executed thereon (the computer code instructions may be tangibly embodied on a machine-readable medium and executed by a processor; Fig 1, 2 and ¶ [0025]), cause the computer system to perform all the steps of the method according to claim 1 (as described above).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Hallett et al (US 2020/0143561) in view of Sriram et al (US 2019/0294889).
Regarding Claim 6, Hallett et al teach the method of claim 1 (as described above).
Hallett et al do not teach positioning said first camera to capture a substantially horizontal view field.
Sriram et al is analogous art pertinent to the technological problem addressed in the current application and teaches positioning said first camera to capture a substantially horizontal view field (an image sensor camera 240 is used to monitor the area of interest 252A of area 200 (entrance/exit) from a first perspective 164B; Fig 2, 3B and ¶ [0061]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Hallett et al with Sriram et al, including positioning said first camera to capture a substantially horizontal view field. By capturing a field of view with multiple cameras from different perspectives and analyzing the environment using machine learning techniques, detailed information and behavior are detected and tracked, thereby enhancing security and convenience, as recognized by Sriram et al (¶ [0006]-[0008]).
Regarding Claim 7, Hallett et al teach the method of claim 1 (as described above).
Hallett et al do not teach positioning said second camera to capture a substantially horizontal view field.
Sriram et al is analogous art pertinent to the technological problem addressed in the current application and teaches positioning said second camera to capture a substantially horizontal view field (an image sensor camera 246 is used to monitor the area of interest 252B of area 200 (entrance/exit) from a second perspective 164C; Fig 2, 3B and ¶ [0061]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Hallett et al with Sriram et al, including positioning said second camera to capture a substantially horizontal view field. By capturing a field of view with multiple cameras from different perspectives and analyzing the environment using machine learning techniques, detailed information and behavior are detected and tracked, thereby enhancing security and convenience, as recognized by Sriram et al (¶ [0006]-[0008]).
Regarding Claim 8, Hallett et al teach the method of claim 1 (as described above).
Hallett et al do not teach positioning said third camera to capture a substantially vertical view field.
Sriram et al is analogous art pertinent to the technological problem addressed in the current application and teaches positioning said third camera to capture a substantially vertical view field (a fisheye lens camera 236 may be installed to monitor from above to capture a 360 degree field of view (vertical), corresponding to the same corresponding field of view of the first camera 240 and second camera 246; Fig 2, 3B and ¶ [0061]-[0063]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Hallett et al with Sriram et al, including positioning said third camera to capture a substantially vertical view field. By capturing a field of view with multiple cameras from different perspectives and analyzing the environment using machine learning techniques, detailed information and behavior are detected and tracked, thereby enhancing security and convenience, as recognized by Sriram et al (¶ [0006]-[0008]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
El-Khamy et al (US 2018/0089505) teach a method and apparatus for detecting an object in image data including use of multiple object detectors for generating the image data and use of multiple neural network models for detecting, classifying and tracking of an object.
Fisher et al (US 2019/0156274) teach a system and method for machine learning subject tracking including use of a plurality of cameras with overlapping fields of view to image an environment and used to identify and track a subject.
Ma et al (US 2016/0217417) teach a system and method for detecting an object, including motion of the object and the direction of motion, which may be used to track the object over time.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHLEEN M BROUGHTON whose telephone number is (571)270-7380. The examiner can normally be reached Monday-Friday 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Villecco can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KATHLEEN M BROUGHTON/Primary Examiner, Art Unit 2661