Prosecution Insights
Last updated: April 19, 2026
Application No. 17/524,751

METHOD AND SYSTEM FOR VISUAL ANALYSIS AND ASSESSMENT OF CUSTOMER INTERACTION AT A SCENE

Status: Final Rejection (§103)
Filed: Nov 12, 2021
Examiner: ANDERSON II, JAMES M
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: Briefcam Ltd.
OA Round: 10 (Final)

Grant Probability: 75% (Favorable)
Expected OA Rounds: 11-12
Time to Grant: 2y 11m
Grant Probability with Interview: 85%

Examiner Intelligence

Career Allow Rate: 75% (513 granted / 684 resolved; +17.0% vs TC avg, above average)
Interview Lift: +10.4% for resolved cases with interview (moderate lift)
Avg Prosecution: 2y 11m (typical timeline)
Career History: 715 total applications across all art units, 31 currently pending
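The headline figures follow from simple arithmetic on the career record above; a minimal sketch in Python (the variable names are illustrative, and the interview lift is assumed to be an additive percentage-point difference, which matches the rounded 85% figure):

```python
# Career allow rate: granted applications over all resolved cases.
granted, resolved = 513, 684
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 75.0%

# Interview lift is reported as +10.4 percentage points for resolved
# cases with an interview: 75% + 10.4pp ≈ 85.4%, shown rounded as 85%.
with_interview = allow_rate + 0.104
print(f"With interview: {with_interview:.1%}")  # 85.4%
```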

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§103: 49.8% (+9.8% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 17.0% (-23.0% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 684 resolved cases.
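The per-statute deltas are internally consistent: subtracting each delta from the examiner's rate recovers the same Tech Center baseline for all four statutes. A quick check (the dictionary layout is illustrative, not from any real API):

```python
# Examiner allowance rate after each rejection type, with the reported
# delta versus the Tech Center average (percentage points).
performance = {
    "§101": (7.8, -32.2),
    "§103": (49.8, +9.8),
    "§102": (15.5, -24.5),
    "§112": (17.0, -23.0),
}

for statute, (rate, delta) in performance.items():
    tc_avg = rate - delta  # delta = examiner rate minus TC average
    print(f"{statute}: examiner {rate}% vs TC average {tc_avg:.1f}%")
# Every row recovers a 40.0% TC average, so the dashboard appears to
# plot all four statutes against a single ~40% baseline estimate.
```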

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

2. Claims 1-6, 9-20 and 22-30 are currently pending, with independent claims 1, 15 and 20 being amended. Claim 8 has been canceled without prejudice.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 4-5, 10-16, 18-20, 22-23, 25-26 and 28-29 are rejected under 35 U.S.C. 103 as being unpatentable over Fairbanks (US 20150363735 A1) in view of Subramanian et al. (US 20220083767 A1), further in view of Kovach et al. (US 20190080274 A1) and Buban (US 20110302293 A1).

Concerning claim 1, Fairbanks teaches a method for visual analysis of customer interaction at a scene, the method comprising: receiving a plurality of video sequences each comprising a sequence of frames, captured by one or more stationary cameras at known locations covering at least a portion of the scene, said plurality of video sequences including at least one staff person and at least one customer (¶¶0070-0072; ¶0104); detecting, using at least one computer processor, persons in the at least one video sequence (¶¶0070-0072); classifying, using the at least one computer processor, the persons to customers, and to staff persons (¶¶0070-0072); calculating a signature for at least one of the staff persons, enabling a recognition of said at least one of the staff persons appearing in other frames of the plurality of video sequences (¶0073); carrying out a visual analysis, using the at least one computer processor and based only on at least one video sequence of the plurality of video sequences, in which both at least one of the staff persons and at least one of the customers appear, to compute respective locations of the persons in the scene, and detect a plurality of interactions between at least one of the staff persons and at least one of the customers which are visible at the at least one video sequence, using the respective locations of the persons and the visual analysis (¶¶0069-0070; ¶¶0100-0101); recording interaction data descriptive of each of the detected interactions (¶¶0101-0102, 0105; fig. 9: steps 905-915 – interaction between a customer and personnel associated with the business such as a simple greeting and/or an exchange may correspond to the claimed sequence of gestures, wherein said personnel aiding the customer in finding a particular product and/or being unhelpful or unkind to the customer may correspond to the claimed sequence of postures); and analyzing, using the at least one computer processor, the interaction data, thereby providing statistics related to the interaction data associated with the staff person (figure 10, step 1025 or 1030).

Fairbanks fails to explicitly teach carrying out a visual analysis based on applying computer vision algorithms. Subramanian et al. (hereinafter Subramanian) teaches a method to provide real time interior analytics, wherein machine learning models or computer vision techniques are used to determine real-time inferences based on captured video frames (¶0030). The real-time inferences may be determining if an employee is interacting with a customer (fig. 2B; ¶0050; ¶0052; ¶0057). Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the computer vision algorithms of Subramanian into Fairbanks in order to carry out the visual analysis to determine real-time inferences (Subramanian, ¶0030).

Fairbanks and Subramanian fail to explicitly teach wherein the signature is a sequence of numbers calculated from an image or a video, wherein the signature is calculated such that similar objects yield similar signatures, and wherein the signature is computed using a neural network which is pre-trained on a plurality of images and videos of persons. However, Kovach et al. (hereinafter Kovach) teaches tracking and/or analyzing facility-related activities that identifies a particular worker in an image, wherein the signature is a sequence of numbers calculated from an image or a video, and wherein the signature is calculated such that similar objects yield similar signatures (¶0070: an employee (i.e., staff person) identification number is considered a sequence of numbers that is calculated from the image; employees of the same facility (similar objects) are considered to yield similar signatures (specific employee identification of another employee)), wherein the signature is computed using a neural network which is pre-trained on a plurality of images and videos of persons (¶0064; ¶0067; ¶0079: facility analytics platform 205 may have been trained on a training set of data (e.g., using machine learning, artificial intelligence, and/or the like)). Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teachings of Kovach into Fairbanks and Subramanian in order to uniquely identify each individual in a facility (Kovach, ¶0070).

Not explicitly taught by Fairbanks, Subramanian, and Kovach is the method wherein at least one visual analysis visible interaction between at least one staff person present at the scene and the at least one customer is derived, based on said respective locations of the persons, from a sequence of at least one of postures and gestures of a skeleton representation of the staff person and a skeleton representation of the customer, wherein each skeleton representation comprises a simplified model of a human body, represented by straight lines connected by joints to represent major body parts, and wherein the skeleton representations correspond to whole-body skeleton representations of the staff person and the customer.

Buban teaches a recognition system for sharing information, wherein at least one visual analysis visible interaction between at least one person present at a scene and at least one other person present at the scene is derived, based on said respective locations of the persons, from a sequence of at least one of postures and gestures of a skeleton representation of the at least one person and a skeleton representation of the at least one other person, wherein each skeleton representation comprises a simplified model of a human body, represented by straight lines connected by joints to represent major body parts, and wherein the skeleton representations correspond to whole-body skeleton representations of the at least one person and the at least one other person (figs. 8-9: whole-body skeletal models 620A & 630A; ¶¶0098-0101 – interactions between users within the field of view of a camera may comprise physical interactions between two skeletal models that are in close proximity; any number of other types of interactions may be recognized as defining a gesture that is recognized as an interaction). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add the features of Buban into the customer movement and/or interaction modules of Fairbanks in order to detect if a particular posture and/or gesture has taken place between the staff person and the customer (Buban, ¶¶0098-0101). Such a modification is merely a simple substitution of one known element for another to obtain predictable results.
Concerning claim 2, Fairbanks further teaches the method of claim 1, further comprising obtaining customer data relating to the at least one customer, said customer data comprising at least one of: data of the at least one customer extracted from data sources other than the at least one video sequence, or data of the at least one customer extracted from the at least one video sequence, wherein the visual analysis is further based on said customer data (¶0073: voice and/or facial recognition).

Concerning claim 4, Fairbanks further teaches the method of claim 1, wherein at least one of the one or more cameras are cameras pre-installed in fixed locations (¶0104).

Concerning claim 5, Fairbanks further teaches the method of claim 1, wherein said customer interaction of the at least one customer comprises movement pattern of the at least one customer at said scene (¶0072: tracking customer interactions (e.g., a simple greeting or more involved actions) via video data).

Concerning claim 10, Fairbanks further teaches the method of claim 1, wherein the customer interaction of the customer is derived based on visual analysis carried out based on the recognition of said at least one customer in said one or more video sequence (¶0073: voice and/or facial recognition).

Concerning claim 11, Fairbanks teaches the method of claim 1, further comprising classifying, using the at least one computer processor, the persons to at least one staff person (¶0073).

Concerning claim 12, Fairbanks further teaches the method of claim 11, wherein the at least one visible interaction between at least one staff person present at the scene and the at least one customer, is based on at least one video sequence in which both the staff person and the customer appear (¶¶0073, 0101).

Concerning claim 13, Fairbanks teaches the method of claim 1, further comprising generating a report, based on the indication of the interaction between said staff person and the at least one customer, and providing said report in a format usable for observing performance of the at least one staff person (¶¶0077, 0079-0080).

Concerning claim 14, Fairbanks teaches the method of claim 1, further comprising generating a report, based on the indication of the interaction between said staff person and the at least one customer, and providing said report in a format usable for the at least one staff person to improve the interaction with the customer (¶0108: using a poor rating to rectify the situation with the customer).

Claim 15 is the corresponding system to the method of claim 1 and is rejected under the same rationale. Claim 16 is the corresponding system to the method of claim 2 and is rejected under the same rationale. Claim 18 is the corresponding system to the method of claim 4 and is rejected under the same rationale. Claim 19 is the corresponding system to the method of claim 5 and is rejected under the same rationale. Claim 20 is the corresponding non-transitory computer readable medium to the method of claim 1 and is rejected under the same rationale.

Concerning claim 22, Fairbanks in view of Subramanian, further in view of Kovach teaches the method according to claim 1. Subramanian further teaches a method to provide real time interior analytics, wherein the interaction data comprises an appearance time of the customer (fig. 5: 504-508; ¶¶0057-0061), and a duration from the appearance time of customer until a start time of the interaction (¶0033; ¶0051; fig. 3A).

Concerning claim 23, Fairbanks in view of Subramanian, further in view of Kovach teaches the method according to claim 1.
Subramanian further teaches a method to provide real time interior analytics, wherein the interaction data comprises a duration of the interaction (¶0052: tracking the amount of time the customer 114 is interacting with the employee).

Claim 25 is the corresponding system to the method of claim 22 and is rejected under the same rationale. Claim 26 is the corresponding system to the method of claim 23 and is rejected under the same rationale. Claim 28 is the corresponding system to the method of claim 22 and is rejected under the same rationale. Claim 29 is the corresponding non-transitory computer readable medium to the method of claim 23 and is rejected under the same rationale.

Claims 3 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Fairbanks (US 20150363735 A1) in view of Subramanian et al. (US 20220083767 A1), further in view of Kovach et al. (US 20190080274 A1) and Buban (US 20110302293 A1) and Lewis (US 20180285802 A1).

Concerning claim 3, Fairbanks in view of Subramanian, Kovach and Buban teaches the method of claim 1. Not explicitly taught is the method, wherein at least one of the one or more cameras is mounted on the staff person. Lewis et al. (hereinafter Lewis), in the same field of endeavor, teaches tracking associate interactions with customers, wherein at least one of the one or more cameras is mounted on the staff person (¶0055). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add the features of Lewis into the teachings of Fairbanks in order to detect if a particular posture and/or gesture has taken place (Lewis, ¶0055). Claim 17 is the corresponding system to the method of claim 3 and is rejected under the same rationale.

Claims 6, 9, 24, 27 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Fairbanks (US 20150363735 A1) in view of Subramanian et al. (US 20220083767 A1), further in view of Kovach et al. (US 20190080274 A1) and Buban (US 20110302293 A1) and Bondareva et al. (US 20210287226 A1).

Concerning claim 6, Fairbanks in view of Subramanian, further in view of Kovach and Buban teaches the method of claim 1. Not explicitly taught is the method, wherein said customer interaction of the at least one customer comprises an interaction of at least one customer with goods displayed for sale at said scene. Bondareva et al. (hereinafter Bondareva) teaches a method of managing transactions in physical retail stores, wherein said customer interaction of the at least one customer comprises an interaction of at least one customer with goods displayed for sale at said scene (¶0041; ¶0054; ¶0065). Taking the teachings of Fairbanks, Subramanian, Kovach, Buban and Bondareva together as a whole, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teachings of Bondareva in order to keep track of items the customer interacted with (Bondareva, ¶0041).

Concerning claim 9, Fairbanks in view of Subramanian, further in view of Kovach and Buban teaches the method of claim 1. Not explicitly taught is the method, wherein the interaction between said staff person and the at least one customer corresponds with no interaction. Bondareva teaches a method of managing transactions in physical retail stores, wherein the interaction between said staff person and the at least one customer corresponds with no interaction (fig. 2B: 230-235; ¶¶0065-0066: determining the customer is attempting to leave without picking up the transaction item (e.g., not returning to pick up a prescription from the pharmacy or meat from the butcher)). Taking the teachings of Fairbanks, Subramanian, Kovach, Buban and Bondareva together as a whole, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teachings of Bondareva in order to record the relevant movements and activities (i.e., interactions) of the staff person and customers.

Concerning claim 24, Fairbanks in view of Subramanian, further in view of Kovach and Buban teaches the method according to claim 1. Fairbanks further teaches tracking customer interactions throughout a business (¶0004). Not explicitly taught is the method, wherein the interaction data comprises detection of the customer leaving the scene without an interaction with the staff person. Bondareva describes a method of managing intangible shopping transactions, wherein a customer may electronically make a request for a product (e.g., sending an electronic prescription to a pharmacy) and the method comprises means for detecting a customer leaving the store without the item they requested (¶¶0011-0013; ¶0043; ¶0066). That is to say, Bondareva's method comprises detection of the customer leaving the scene (e.g., leaving the pharmacy area or the store) without an interaction with the staff person (e.g., picking up the requested item from the pharmacist). Taking the teachings of Fairbanks, Subramanian, Kovach, Buban and Bondareva together as a whole, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teachings of Bondareva in order to record the relevant movements and activities (i.e., interactions) of the staff person and customers.

Claim 27 is the corresponding system to the method of claim 24 and is rejected under the same rationale. Claim 30 is the corresponding non-transitory computer readable medium to the method of claim 24 and is rejected under the same rationale.
Response to Arguments

Applicant's arguments, see page 9 of the remarks, filed 02/12/2026, with respect to the cancellation of claim 8 have been fully considered and are persuasive. The rejection under 35 U.S.C. § 112 has been withdrawn.

Applicant's arguments, see pages 9-12 of the remarks, filed 02/12/2026, with respect to the rejections of claims 1-6, 8-20 and 22-30 under 35 U.S.C. § 103 have been fully considered, but are moot in view of the new grounds of rejection.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES M ANDERSON II whose telephone number is (571)270-1444. The examiner can normally be reached Monday - Friday 10AM-6PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, BRIAN PENDLETON, can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/James M Anderson II/
Primary Examiner, Art Unit 2425

Prosecution Timeline

Nov 12, 2021: Application Filed
Feb 26, 2022: Non-Final Rejection — §103
Jun 30, 2022: Response Filed
Aug 05, 2022: Final Rejection — §103
Nov 09, 2022: Request for Continued Examination
Nov 14, 2022: Response after Non-Final Action
Nov 19, 2022: Non-Final Rejection — §103
Feb 27, 2023: Response Filed
Mar 25, 2023: Final Rejection — §103
Apr 13, 2023: Request for Continued Examination
Apr 17, 2023: Response after Non-Final Action
Jun 18, 2023: Non-Final Rejection — §103
Sep 21, 2023: Response Filed
Oct 24, 2023: Final Rejection — §103
Apr 30, 2024: Request for Continued Examination
May 08, 2024: Response after Non-Final Action
Jun 01, 2024: Non-Final Rejection — §103
Dec 06, 2024: Response Filed
Jan 25, 2025: Final Rejection — §103
Jul 23, 2025: Request for Continued Examination
Jul 29, 2025: Response after Non-Final Action
Aug 09, 2025: Non-Final Rejection — §103
Feb 12, 2026: Response Filed
Mar 04, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561976: COMMENT GENERATION DEVICE AND COMMENT GENERATION METHOD (granted Feb 24, 2026; 2y 5m to grant)
Patent 12548437: SYSTEMS AND METHODS FOR POLICY CENTRIC DATA RETENTION IN TRAFFIC MONITORING (granted Feb 10, 2026; 2y 5m to grant)
Patent 12537949: METHODS AND APPARATUS FOR KERNEL TENSOR AND TREE PARTITION BASED NEURAL NETWORK COMPRESSION FRAMEWORK (granted Jan 27, 2026; 2y 5m to grant)
Patent 12534313: CAMERA-ENABLED LOADER SYSTEM AND METHOD (granted Jan 27, 2026; 2y 5m to grant)
Patent 12525019: INTELLIGENT AI SYSTEM FOR RAPID WEAPON THREAT ASSESSMENT IN VIDEO STREAMS (granted Jan 13, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 11-12
Grant Probability: 75%
With Interview: 85% (+10.4%)
Median Time to Grant: 2y 11m
PTA Risk: High

Based on 684 resolved cases by this examiner. Grant probability derived from career allow rate.
