Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Summary
This Office action for U.S. Patent Application No. 18/795,666 is responsive to communications filed on August 6, 2024. Claims 1-16 are currently pending and presented for examination.
Claim Objections
Claims 1-16 are objected to because their language is identical to that of claims 1-16 of Application No. 17/306,148. Applicant is advised to double-check this matter.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
Claims 1-16 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-16 of copending Application No. 17/306,148, which has issued as U.S. Patent No. 11,580,648.
This is a nonprovisional obviousness-type double patenting rejection because the conflicting claims have, in fact, been patented. Below is a comparison of the limitations that perform the same function.
Conflicting Co-pending Application No. 17/306,148:
1. A visual tracking system for tracking and identifying a plurality of persons within a customer-oriented monitored location, comprising: one or more customer-oriented devices adapted to carry out one or more action options; a first camera adapted to capture a sequence of video frames comprising a current video frame and a prior video frame, each video frame depicting a plurality of detections, each detection corresponding to a portion of the video frame depicting one of the persons, each detection further having visual features, and motion data describing relative movement of the person within the video frame; a person featurizer adapted to generate a person feature vector for each detection within the current video frame describing the visual features of the detection, the person featurizer having a plurality of convolutional layers, with each convolutional layer adapted to detect one of the visual features, the person featurizer further having a plurality of sentiment hidden layers each adapted to detect one or more emotional states to produce sentiment data; a tracking module, the tracking module is adapted to define one or more incumbent tracks, each incumbent track is a track identity associated with one of the persons depicted in the prior video frame, and has an incumbent track person feature vector describing the visual features of the person, the incumbent track further having incumbent track motion data, the tracking module is further adapted to establish a predictive pairing between each detection and each incumbent track and calculate a likelihood value for each predictive pairing, the likelihood value represents a probability that the person associated with the detection corresponds to the person associated with the incumbent track, the likelihood value for each predictive pairing is obtained by combining a motion prediction probability comparing the motion data of the detection with the incumbent track motion data, and a featurization similarity probability comparing the person feature vector of the detection with the incumbent track person feature vector, the tracking module is further adapted to maintain the track identity of each person in the current frame by utilizing a combinatorial optimization to select one of the predictive pairings for each detection such that the likelihood values are maximized across all the selected predictive pairings; and a recommendation module adapted to extract context data for each person by analyzing the feature vector of the person, and identify a customer need for the person using recommendation input, the recommendation input comprising the context data along with the sentiment data obtained by analyzing the feature vector of the person using the person featurizer, the recommendation module is further adapted to generate an action recommendation based on the customer need and the action options using one or more recommendation hidden layers, and cause one of the customer-oriented devices to perform a customer oriented action in accordance with the action recommendation.
Claims 2-16

Instant Application No. 18/795,666 (Note: bold indicates a difference in the instant application):
1. A visual tracking system for tracking and identifying a plurality of persons within a customer-oriented monitored location, comprising: one or more customer-oriented devices adapted to carry out one or more action options; a first camera adapted to capture a sequence of video frames comprising a current video frame and a prior video frame, each video frame depicting a plurality of detections, each detection corresponding to a portion of the video frame depicting one of the persons, each detection further having visual features, and motion data describing relative movement of the person within the video frame; a person featurizer adapted to generate a person feature vector for each detection within the current video frame describing the visual features of the detection, the person featurizer having a plurality of convolutional layers, with each convolutional layer adapted to detect one of the visual features, the person featurizer further having a plurality of sentiment hidden layers each adapted to detect one or more emotional states to produce sentiment data; a tracking module, the tracking module is adapted to define one or more incumbent tracks, each incumbent track is a track identity associated with one of the persons depicted in the prior video frame, and has an incumbent track person feature vector describing the visual features of the person, the incumbent track further having incumbent track motion data, the tracking module is further adapted to establish a predictive pairing between each detection and each incumbent track and calculate a likelihood value for each predictive pairing, the likelihood value represents a probability that the person associated with the detection corresponds to the person associated with the incumbent track, the likelihood value for each predictive pairing is obtained by combining a motion prediction probability comparing the motion data of the detection with the incumbent track motion data, and a featurization similarity probability comparing the person feature vector of the detection with the incumbent track person feature vector, the tracking module is further adapted to maintain the track identity of each person in the current frame by utilizing a combinatorial optimization to select one of the predictive pairings for each detection such that the likelihood values are maximized across all the selected predictive pairings; and a recommendation module adapted to extract context data for each person by analyzing the feature vector of the person, and identify a customer need for the person using recommendation input, the recommendation input comprising the context data along with the sentiment data obtained by analyzing the feature vector of the person using the person featurizer, the recommendation module is further adapted to generate an action recommendation based on the customer need and the action options using one or more recommendation hidden layers, and cause one of the customer-oriented devices to perform a customer oriented action in accordance with the action recommendation.
Claims 2-16
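For illustration only, and not as a characterization of the applicant's or patentee's actual implementation: the tracking limitation common to both claim sets reduces to a linear assignment problem. A likelihood value is formed for each detection/track predictive pairing by combining a motion-prediction probability and a featurization-similarity probability, and combinatorial optimization then selects one pairing per detection so that the likelihood values are maximized across all selected pairings. The minimal Python sketch below assumes hypothetical probability matrices and uses SciPy's Hungarian-algorithm solver as one possible combinatorial optimizer.

```python
# Illustrative sketch only; not the applicant's or patentee's implementation.
# Models the claimed tracking step: combine a motion-prediction probability
# and a featurization-similarity probability into a likelihood value for each
# predictive pairing, then use combinatorial optimization to select one
# pairing per detection so that total likelihood is maximized.
import numpy as np
from scipy.optimize import linear_sum_assignment


def select_predictive_pairings(motion_probs, feature_probs):
    """Both inputs are hypothetical (n_detections, n_tracks) arrays of the
    per-pairing probabilities recited in claim 1."""
    # The claims do not specify the combining function; a product is one
    # common choice for independent probabilities.
    likelihood = np.asarray(motion_probs) * np.asarray(feature_probs)
    # linear_sum_assignment minimizes total cost, so negate the likelihoods
    # to maximize their sum across the selected pairings.
    det_idx, trk_idx = linear_sum_assignment(-likelihood)
    return [(int(d), int(t)) for d, t in zip(det_idx, trk_idx)]


# Example: two detections against two incumbent tracks.
motion = np.array([[0.9, 0.2], [0.1, 0.8]])
features = np.array([[0.7, 0.3], [0.4, 0.9]])
print(select_predictive_pairings(motion, features))  # [(0, 0), (1, 1)]
```

Under this reading, maintaining track identities from frame to frame is the standard data-association step used in multi-object tracking.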
Claims 1-16 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-16, respectively, of copending Application No. 17/306,148 (U.S. Patent No. 11,580,648).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nam D Pham, whose telephone number is (571) 270-7352. The examiner can normally be reached Monday through Thursday.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Czekaj, can be reached at (571) 272-7327.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NAM D PHAM/ Primary Examiner, Art Unit 2487