Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This communication is a non-final Office action on the merits. Claims 1-20, as originally filed, are presently pending, have been examined, and are considered below.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 3/21/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claims 1 and 12 are objected to because of the following informalities: claim 1 recites “estimate attributes of persons detected and tracked by the person detection and tracking system by a rich human analysis system to identify attributes in accordance with set criteria using a set of filters of deeper layers of convolutional layers of a feature extractor where the set of filters are divided into N groups trained on N corresponding tasks corresponding to task-specific heads such that one task is assigned to each group of the N groups and that each task loss updates only one subset of filters; and identify one or more people that satisfy the attributes and the set criteria,” in which it is suggested that “N” be defined as an integer having a value greater than one.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0237144 A1, Chen et al. (hereinafter Chen) in view of US 2023/0053121 A1, Stevens et al. (hereinafter Stevens) and further in view of US 2022/0248148 A1, Verhulst et al. (hereinafter Verhulst).
As to claim 1, Chen discloses a system for rich human analysis, the system comprising: a memory; and one or more processors in communication with the memory (Fig 1) configured to:
extract images from a camera in a surveillance system (Figs 20, 32, 40C, 64; pars 0175, 0351, 0354, extract images or image features from a camera);
feed the images to a person detection and tracking system that deciphers one or more human activity tasks (Figs 14, 43, 66, 68, 82; pars 0003, 0087, 0632, 0713);
estimate attributes of persons detected and tracked by the person detection and tracking system by a rich human analysis system to identify attributes in accordance with set criteria using a set of filters of deeper layers of convolutional layers of a feature extractor (Figs 20-21, 40C, 53; pars 0157, 0160, 0390, 0397).
Chen does not expressly disclose the set of filters are divided into N groups trained on N corresponding tasks corresponding to task-specific heads such that one task is assigned to each group of the N groups and that each task loss updates only one subset of filters and identify one or more people that satisfy the attributes and the set criteria.
Stevens, in the same or similar field of endeavor, further teaches estimate attributes of persons detected and tracked by the person detection and tracking system by a rich human analysis system to identify attributes in accordance with set criteria using a set of filters of deeper layers of convolutional layers of a feature extractor (Figs 3-4, a group of filters detecting and tracking customers to identify happy customers (e.g., the set criteria); Figs 38A, 75, filters corresponding to different attributes; pars 0117-0118, 0124, 0138, 0205, 0424, 0599-0600, 0644) where the set of filters are divided into N groups trained on N corresponding tasks corresponding to task-specific heads such that one task is assigned to each group of the N groups and that each task loss updates only one subset of filters (Figs 24, 33A-33C; pars 0255, 0285, 0327, a subset of filters being used for a specific task/model based on relevance criteria); and identify one or more people that satisfy the attributes and the set criteria (Figs 23-24, 33B, 44A, 44C; pars 0008, 0106, 0112, 0117-0118, 0242-0243, identifying happy customers as the set criteria).
In addition, Verhulst teaches the set of filters are divided into N groups trained on N corresponding tasks corresponding to task-specific heads such that one task is assigned to each group of the N groups and that each task loss updates only one subset of filters (Fig 10; pars 0021, 0031, 0035, 0047, 0118, N different filters for specific tasks).
Therefore, considering Chen, Stevens, and Verhulst’s teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Stevens’s and Verhulst’s teachings into Chen’s system to provide a parallel modeling system utilizing groups of filters to identify classes of persons or customers based on given criteria.
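For illustration only, the following is a minimal, hypothetical sketch (in PyTorch; it is not drawn from the instant application or from the cited references, and all names and parameter values are assumptions) of the claimed arrangement addressed above: a feature extractor whose deeper convolutional filters are divided into N groups, each group feeding its own task-specific head, such that each task's loss updates only that task's subset of filters.

```python
import torch
import torch.nn as nn

N_TASKS = 3                  # "N" assumed to be an integer greater than one
FILTERS_PER_GROUP = 16       # assumed size of each filter group

class GroupedMultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shallow shared convolutional layers of the feature extractor.
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        # Deeper convolutional filters divided into N groups, one per task.
        self.filter_groups = nn.ModuleList(
            nn.Conv2d(32, FILTERS_PER_GROUP, 3, padding=1) for _ in range(N_TASKS)
        )
        # One task-specific head per group (e.g., attribute, pose, re-id logits).
        self.heads = nn.ModuleList(
            nn.Linear(FILTERS_PER_GROUP, 8) for _ in range(N_TASKS)
        )

    def forward(self, x, task_idx):
        feat = self.shared(x)                                  # shared feature map
        grp = torch.relu(self.filter_groups[task_idx](feat))   # task's filter group
        return self.heads[task_idx](grp.mean(dim=(2, 3)))      # task prediction

model = GroupedMultiTaskNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.randn(4, 3, 64, 64)     # stand-in for extracted camera images
labels = torch.randint(0, 8, (4,))     # stand-in per-task labels

for task_idx in range(N_TASKS):
    optimizer.zero_grad()
    loss = criterion(model(images, task_idx), labels)
    # Gradients flow only into this task's filter group, its head, and the
    # shared trunk; the other N-1 filter groups receive no gradient this step.
    loss.backward()
    optimizer.step()
```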
As to claim 2, Chen as modified discloses the system of claim 1, wherein the feature extractor generates a feature map from the images and task-specific heads output task predictions based on the feature map (Chen: Figs 40C, 53; pars 0175, 0230, 0235, feature vectors being extracted and output).
As to claim 3, Chen as modified discloses the system of claim 1, wherein, during training, each group of the N groups is only updated by its corresponding task gradients (Verhulst: par 0132, conjugate gradient or stochastic gradient updates being utilized).
As to claim 4, Chen as modified discloses the system of claim 1, wherein, during training, each task learns its features without interference from other tasks (Stevens: Figs 21, 23, 33B; pars 0007, 0018-0019, 0032, 0034, task and model being processed in parallel).
As to claim 5, Chen as modified discloses the system of claim 1, wherein the set of filters are divided, during training, by backpropagation (Verhulst: pars 0013, 0031, 0055, 0132).
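Similarly for illustration only, the short hypothetical sketch below (again PyTorch, not taken from the application or the cited art, with assumed names and values) shows one way the division of filters could be realized during training through backpropagation alone, as addressed for claims 3-5: a gradient hook masks updates to every filter outside the active task's group, so each task loss updates only one subset of filters.

```python
import torch
import torch.nn as nn

N_TASKS, PER_GROUP = 3, 8    # assumed values
# One deeper convolutional layer whose output filters are logically split
# into N contiguous groups of PER_GROUP filters each.
deep_conv = nn.Conv2d(32, N_TASKS * PER_GROUP, kernel_size=3, padding=1)
active_task = {"idx": 0}     # set before each training step

def group_mask(task_idx):
    m = torch.zeros(N_TASKS * PER_GROUP)
    m[task_idx * PER_GROUP:(task_idx + 1) * PER_GROUP] = 1.0
    return m

# Backpropagation hooks zero the gradients of every filter outside the active
# task's group, so each task's loss updates only its own subset of filters.
deep_conv.weight.register_hook(
    lambda g: g * group_mask(active_task["idx"]).view(-1, 1, 1, 1))
deep_conv.bias.register_hook(
    lambda g: g * group_mask(active_task["idx"]))

x = torch.randn(2, 32, 16, 16)
active_task["idx"] = 1                  # train on task 1 this step
deep_conv(x).mean().backward()          # only filters 8..15 accumulate gradient
assert deep_conv.weight.grad[:PER_GROUP].abs().sum().item() == 0.0
```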
As to claim 6, Chen as modified discloses the system of claim 1, wherein the one or more human activity tasks include re-identification of a person having a trajectory that passes two or more cameras and attribute identification of the attributes of that person (Chen: Figs 40A-40C; pars 0390-0395), such that the re-identification of the person and the attribute identification are concurrently performed (Chen: Figs 11, 13, 16, 53, 61, 63, 68, face recognition, pose recognition, and object recognition concurrently performed with cameras 1-3; pars 0176, 0198, 0383, 0503).
As to claim 7, Chen as modified discloses the system of claim 1, wherein the one or more human activity tasks include pose estimation and attribute identification, and the pose estimation and the attribute identification are concurrently performed (Chen: Figs 13, 16, 53, 61, 63; pars 0176, 0198, 0383, 0503).
As to claim 8, Chen as modified discloses the system of claim 7, further comprising an action device responsive to the rich human analysis system wherein the action device adjusts a duration of a stop light in accordance with a pedestrian (Chen: Fig 4; pars 0109, 0111-0112, 0526, 0530, 0552, 0557, traffic light control).
As to claim 9, Chen as modified discloses the system of claim 7, further comprising an action device responsive to the rich human analysis system wherein the action device alerts first responders in accordance with a pedestrian in need of assistance (Chen: Figs 1, 24A; pars 0070, 0078, 0109, 0286, 0501, traffic monitoring to issue amber alerts).
As to claim 10, Chen as modified discloses the system of claim 1, wherein the one or more human activity tasks include body segmentation and attribute identification (Stevens: Figs 49, 50A-50B; pars 0942, 0948, 0992), wherein the body segmentation and the attribute identification are concurrently performed (Stevens: Figs 49, 50A-50B; pars 0942, 0948, 0972, 0977, 0991-0992).
As to claim 11, Chen as modified discloses the system of claim 10, further comprising a customized service system responsive to the rich human analysis system, wherein the customized service system recommends products based upon the body segmentation and the attribute identification (Stevens: pars 0007, 0265, 0645, 01214, 01221).
As to claim 12, it recites a non-transitory computer readable medium storing a program executed to perform the functions and features of claim 1. The rejection of claim 1 is therefore incorporated herein.
As to claim 13, it is rejected for the same reasons as set forth for claim 2.
As to claim 14, it is rejected for the same reasons as set forth for claims 3-4.
As to claims 15-20, they are rejected for the same reasons as set forth for claims 5-10, respectively.
Examiner’s Note
The examiner has cited particular columns, line numbers, paragraphs, and/or figures in the references as applied to the claims for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. In preparing responses, the Applicant is respectfully requested to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the cited passages as taught by the prior art or as discussed by the Examiner.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUN SHEN whose telephone number is (571)270-7927. The examiner can normally be reached on Mon-Fri 8:30-5:50 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached on 571-272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/QUN SHEN/
Primary Examiner, Art Unit 2662