Prosecution Insights
Last updated: April 19, 2026
Application No. 18/795,666

SYSTEM AND METHOD FOR VISUALLY TRACKING PERSONS AND IMPUTING DEMOGRAPHIC AND SENTIMENT DATA

Non-Final OA §DP
Filed
Aug 06, 2024
Examiner
PHAM, NAM D
Art Unit
2487
Tech Center
2400 — Computer Networks
Assignee
Radiusai Inc.
OA Round
1 (Non-Final)
Grant Probability: 91% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 1m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 91% (481 granted / 530 resolved; +32.8% vs TC avg, above average)
Interview Lift: +1.2% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 1m (fast prosecutor; 34 currently pending)
Total Applications: 564 (career history, across all art units)
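The headline figures above are simple derivations from the raw counts. The sketch below reproduces them, assuming the page rounds to whole percentages; the +1.2% interview lift is taken as reported, not recomputed.

```python
# Minimal sketch reproducing the examiner stats from the raw counts shown
# above. The interview-lift figure (+1.2%) is the page's own value.
granted, resolved = 481, 530
allow_rate = granted / resolved              # ~0.908, displayed as 91%
interview_lift = 0.012                       # reported +1.2% lift
with_interview = allow_rate + interview_lift # ~0.920, displayed as 92%
print(f"{allow_rate:.1%} -> {with_interview:.1%}")
```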

Statute-Specific Performance

§101: 7.0% (-33.0% vs TC avg)
§103: 32.3% (-7.7% vs TC avg)
§102: 28.4% (-11.6% vs TC avg)
§112: 4.4% (-35.6% vs TC avg)
Tech Center averages are estimates, based on career data from 530 resolved cases.
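The deltas above imply Tech Center baselines. The following sketch back-calculates them, assuming each delta is simply the examiner's rate minus the TC average; that assumption, and the derived averages, are ours rather than the page's.

```python
# Hypothetical back-calculation of the TC averages implied by the chart:
# examiner rejection rate minus the reported delta. Rates are the page's
# values; the reconstruction method is an assumption.
examiner_rate = {"101": 0.070, "103": 0.323, "102": 0.284, "112": 0.044}
delta_vs_tc = {"101": -0.330, "103": -0.077, "102": -0.116, "112": -0.356}
tc_average = {s: examiner_rate[s] - delta_vs_tc[s] for s in examiner_rate}
for statute, avg in tc_average.items():
    print(f"§{statute}: TC avg ~ {avg:.1%}")
```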

Office Action

§DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Summary

This office action for US Patent application 18/795666 is responsive to communications filed on August 6th, 2024. Currently, claims 1-16 are pending and presented for examination.

Claim Objections

Claims 1-16 are objected to because their language is the same as that of claims 1-16 of application 17/306148. Applicant is advised to double-check this matter.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground, provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).

Claims 1-16 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-16 of copending Application No. 17/306148, now U.S. Patent No. 11,580,648. This is a non-provisional obviousness-type double patenting rejection because the conflicting claims have in fact been patented. Below is a list of limitations that perform the same function.

Conflicting Co-pending Application 17/306148 | Instant Application 18/795666 (Note: bold means difference in the instant application)

1.
A visual tracking system for tracking and identifying a plurality of persons within a customer-oriented monitored location, comprising: one or more customer-oriented devices adapted to carry out one or more action options; a first camera adapted to capture a sequence of video frames comprising a current video frame and a prior video frame, each video frame depicting a plurality of detections, each detection corresponding to a portion of the video frame depicting one of the persons, each detection further having visual features, and motion data describing relative movement of the person within the video frame; a person featurizer adapted to generate a person feature vector for each detection within the current video frame describing the visual features of the detection, the person featurizer having a plurality of convolutional layers, with each convolutional layer adapted to detect one of the visual features, the person featurizer further having a plurality of sentiment hidden layers each adapted to detect one or more emotional states to produce sentiment data; a tracking module, the tracking module is adapted to define one or more incumbent tracks, each incumbent track is a track identity associated with one of the persons depicted in the prior video frame, and has an incumbent track person feature vector describing the visual features of the person, the incumbent track further having incumbent track motion data, the tracking module is further adapted to establish a predictive pairing between each detection and each incumbent track and calculate a likelihood value for each predictive pairing, the likelihood value represents a probability that the person associated with the detection corresponds to the person associated with the incumbent track, the likelihood value for each predictive pairing is obtained by combining a motion prediction probability comparing the motion data of the detection with the incumbent track motion data, and a featurization similarity 
probability comparing the person feature vector of the detection with the incumbent track person feature vector, the tracking module is further adapted to maintain the track identity of each person in the current frame by utilizing a combinatorial optimization to select one of the predictive pairings for each detection such that the likelihood values are maximized across all the selected predictive pairings; and a recommendation module adapted to extract context data for each person by analyzing the feature vector of the person, and identify a customer need for the person using recommendation input, the recommendation input comprising the context data along with the sentiment data obtained by analyzing the feature vector of the person using the person featurizer, the recommendation module is further adapted to generate an action recommendation based on the customer need and the action options using one or more recommendation hidden layers, and cause one of the customer-oriented devices to perform a customer-oriented action in accordance with the action recommendation.

Claims 2-16

1.
A visual tracking system for tracking and identifying a plurality of persons within a customer-oriented monitored location, comprising: one or more customer-oriented devices adapted to carry out one or more action options; a first camera adapted to capture a sequence of video frames comprising a current video frame and a prior video frame, each video frame depicting a plurality of detections, each detection corresponding to a portion of the video frame depicting one of the persons, each detection further having visual features, and motion data describing relative movement of the person within the video frame; a person featurizer adapted to generate a person feature vector for each detection within the current video frame describing the visual features of the detection, the person featurizer having a plurality of convolutional layers, with each convolutional layer adapted to detect one of the visual features, the person featurizer further having a plurality of sentiment hidden layers each adapted to detect one or more emotional states to produce sentiment data; a tracking module, the tracking module is adapted to define one or more incumbent tracks, each incumbent track is a track identity associated with one of the persons depicted in the prior video frame, and has an incumbent track person feature vector describing the visual features of the person, the incumbent track further having incumbent track motion data, the tracking module is further adapted to establish a predictive pairing between each detection and each incumbent track and calculate a likelihood value for each predictive pairing, the likelihood value represents a probability that the person associated with the detection corresponds to the person associated with the incumbent track, the likelihood value for each predictive pairing is obtained by combining a motion prediction probability comparing the motion data of the detection with the incumbent track motion data, and a featurization similarity 
probability comparing the person feature vector of the detection with the incumbent track person feature vector, the tracking module is further adapted to maintain the track identity of each person in the current frame by utilizing a combinatorial optimization to select one of the predictive pairings for each detection such that the likelihood values are maximized across all the selected predictive pairings; and a recommendation module adapted to extract context data for each person by analyzing the feature vector of the person, and identify a customer need for the person using recommendation input, the recommendation input comprising the context data along with the sentiment data obtained by analyzing the feature vector of the person using the person featurizer, the recommendation module is further adapted to generate an action recommendation based on the customer need and the action options using one or more recommendation hidden layers, and cause one of the customer-oriented devices to perform a customer-oriented action in accordance with the action recommendation.

Claims 2-16

Claims 1-16 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-16, respectively, of copending Application 17/306148.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nam Pham, who can be reached by telephone at (571)270-7352. The examiner can normally be reached Monday through Thursday. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Czekaj, can be reached at (571)272-7327. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only.
For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/NAM D PHAM/
Primary Examiner, Art Unit 2487
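The claims at issue describe a concrete tracking step: each detection/track pairing gets a likelihood combining a motion-prediction term and a feature-similarity term, and a combinatorial optimization selects the pairing set that maximizes total likelihood. The sketch below illustrates that step only; it is not the applicant's implementation, and the helper names, weighting, and toy data are hypothetical.

```python
# Illustrative sketch of the claimed tracking step: combine a motion term
# and a feature-similarity term per detection/track pair, then pick the
# assignment maximizing total likelihood. Brute-force over permutations is
# used for clarity; a real system would use e.g. the Hungarian algorithm.
from itertools import permutations

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def pair_likelihood(det, track, w=0.5):
    # Motion-prediction probability (closer to predicted position -> higher)
    # combined with feature-vector similarity, as the claim describes.
    motion_p = 1.0 / (1.0 + abs(det["x"] - track["pred_x"]))
    feat_p = cosine(det["feat"], track["feat"])
    return w * motion_p + (1 - w) * feat_p

def assign(detections, tracks):
    best, best_score = None, float("-inf")
    for perm in permutations(range(len(tracks))):
        score = sum(pair_likelihood(d, tracks[j])
                    for d, j in zip(detections, perm))
        if score > best_score:
            best, best_score = perm, score
    return best  # detection i keeps the identity of track best[i]

dets = [{"x": 1.0, "feat": [0.9, 0.1]}, {"x": 5.0, "feat": [0.1, 0.9]}]
trks = [{"pred_x": 5.1, "feat": [0.0, 1.0]}, {"pred_x": 0.9, "feat": [1.0, 0.0]}]
print(assign(dets, trks))  # -> (1, 0): detection 0 matches track 1
```

Both motion and appearance agree here, so the optimizer swaps the identities rather than matching by index order.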

Prosecution Timeline

Aug 06, 2024
Application Filed
Oct 24, 2025
Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604031
SYSTEM AND METHOD FOR COMBINED INTER AND INTRA PREDICTION
2y 5m to grant Granted Apr 14, 2026
Patent 12598289
METHOD AND APPARATUS FOR ENCODING/DECODING IMAGE
2y 5m to grant Granted Apr 07, 2026
Patent 12587644
Transforms on Non-dyadic Blocks
2y 5m to grant Granted Mar 24, 2026
Patent 12581058
GEOMETRIC PARTITION MODE WITH MOTION VECTOR REFINEMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12581056
METHOD AND APPARATUS FOR ENCODING/DECODING IMAGE
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 91%
With Interview: 92% (+1.2%)
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 530 resolved cases by this examiner. Grant probability derived from career allow rate.
