Prosecution Insights
Last updated: April 19, 2026
Application No. 18/673,458

COMPUTER-READABLE RECORDING MEDIUM STORING INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING DEVICE

Status: Non-Final OA, §103
Filed: May 24, 2024
Examiner: HUANG, FRANK F
Art Unit: 2485
Tech Center: 2400 — Computer Networks
Assignee: Fujitsu Limited
OA Round: 1 (Non-Final)

Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 75% (519 granted / 691 resolved; +17.1% vs TC avg, above average)
Interview Lift: +17.3% (resolved cases with interview; a strong lift)
Typical Timeline: 2y 7m average prosecution; 33 currently pending
Career History: 724 total applications across all art units

Statute-Specific Performance

§101: 5.0% (-35.0% vs TC avg)
§103: 72.0% (+32.0% vs TC avg)
§102: 3.6% (-36.4% vs TC avg)
§112: 9.3% (-30.7% vs TC avg)

Deltas are measured against the Tech Center average estimate. Based on career data from 691 resolved cases.
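The four deltas are internally consistent with a single Tech Center baseline rather than per-statute averages; a quick sketch (illustrative Python, values copied from the table above, variable names ours):

```python
# Consistency check on the statute table: back-compute the Tech Center
# baseline implied by each examiner rate and its stated delta.
rates  = {"101": 5.0, "103": 72.0, "102": 3.6, "112": 9.3}        # examiner, %
deltas = {"101": -35.0, "103": 32.0, "102": -36.4, "112": -30.7}  # vs TC avg, %

implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc_avg)  # every statute implies the same 40.0% baseline
```

Every statute points back to the same 40.0% figure, which matches a single Tech Center average line rather than statute-specific baselines.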

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7 are rejected under 35 U.S.C. 103 as being unpatentable over Boydston et al., US 12322187 B1 ("Boydston"), in view of Crisfalusi et al., EP 4053812 ("Crisfalusi") (IDS).

Regarding claim 1, BOYDSTON discloses that the localization component 624 can use SLAM (simultaneous localization and mapping) or CLAMS (calibration, localization and mapping, simultaneously) to receive time-of-flight data, image data, lidar data, radar data, sonar data, IMU data, GPS data, wheel encoder data, or any combination thereof, and the like to accurately determine a location of the autonomous vehicle. In some instances, the localization component 624 can provide data to various components of the vehicle 602 to determine an initial position of an autonomous vehicle for generating a trajectory. However, BOYDSTON does not disclose causing the electronic device positioned in the first attention region to execute processing regarding environment setting associated with a predetermined behavior, in a case where a first behavior of the specified person is the predetermined behavior, as claimed.
However, CRISFALUSI discloses a non-transitory computer-readable recording medium storing an information processing program for causing a computer to execute processing comprising:

acquiring a video imaged in a facility (CRISFALUSI, ¶ 68: Considering first the functionality of the Trajectory Computation Module (TCM) 122, this module is adapted to receive: (i) video footage (Vid(t)) from the surveillance sensors 102, wherein the video footage comprises all video frames in which a Tracked Person appears; and (ii) the Pathi data set from the HTM 118. The Pathi details the times and locations in the store at which the Tracked Personi was detected by the surveillance sensors 102. As outlined above, these locations are established from the co-ordinates of corners of bounding boxes in the video frames in which the Tracked Personi appears and the identity of the surveillance sensor(s) that captured each such video frame);

tracking a trajectory of a person (as cited below, TCM) in the facility, by analyzing the acquired video (¶: The Trajectory Computation Module (TCM) 122 considers the positions of all the Tracked Persons in a scene over the time interval [1..Tobs] and predicts their positions over the time interval [Tobs+1..Tpred]. It will be appreciated that the input to the sequence corresponds to the observed positions of a Tracked Person in a scene. The output is a sequence predicting the Tracked Person's future positions at different moments in time. This allows the Trajectory Computation Module (TCM) 122 to compute, over a predefined time interval Tconf (e.g. 3 minutes), the trajectory of a tracked customer. The Trajectory Computation Module (TCM) 122, in addition to predicting a trajectory, is further configured to classify the predicted trajectory as suspect or normal. The output of the Trajectory Computation Module (TCM) 122 therefore comprises a predicted trajectory Ĥi for each Tracked Person, together with the suspect/normal label of this trajectory (L(Ĥi)));

generating a heat map regarding the trajectory of the person in the facility, based on the tracked trajectory of the person (¶ 89: Determining over-patrolling is based on the predicted trajectory data Ĥi received from the Trajectory Computation Module (TCM) 122. A calculation is made of the number of times, Loopα,βi, Tracked Personi performs a loop between a first pre-defined location Aα and a second pre-defined location Aβ in the store. Similarly, by examining all the previous Tracked Persons' trajectories (also received from the Trajectory Computation Module (TCM) 122) that included the locations Aα and Aβ, it is possible to calculate the number of loops performed by a Tracked Person between these locations in any given Tracked Person trajectory. From this, it is possible to establish a histogram of the frequency of individual numbers of loops performed between Aα and Aβ by all the previous Tracked Persons in the store. This histogram will be referred to henceforth as the Aα and Aβ loop histogram. To ensure its currency, the Aα and Aβ loop histogram is updated for each Tracked Person entering the store. The computed variable Loopα,βi is then compared with the Aα and Aβ loop histogram. If Loopα,βi exceeds a certain percentile (e.g. 80%) of the Aα and Aβ loop histogram, it suggests that Tracked Personi may be over-patrolling the region between Aα and Aβ);

generating information regarding environment setting in the facility (¶ 128: As described above, the activation of the Suspect Activity Modules (SAMs) and the corresponding weighting provides a behaviour or penalty point score; ¶ 129: As further indicated in Figure 1, an accumulation unit or a Score Update Unit 110 is provided. The Score Update Unit 110 is adapted to receive a Tracked Personi's Suspect Behaviour Score (SBSi) from the Client Tracking Unit (CTU) 104 and the Tracked Personi's Penalty Points (PPi) from the Suspect Activity Detection Unit (SADU) 108. The Score Update Unit 110 is adapted to add the Tracked Personi's Penalty Points (PPi) to the Tracked Personi's Suspect Behaviour Score (SBSi) so that the Tracked Personi's Suspect Behaviour Score (SBSi) effectively provides a running tally of the Tracked Personi's suspect activities/behaviour during their stay in the store.), based on the generated heat map and position information of an electronic device disposed in the facility (¶ 130: When this updated score exceeds a target, a message or alert is triggered. This message is issued by a Message Issuing Unit (MIU) 112. The Message Issuing Unit (MIU) 112 is adapted to issue notifications or alarms when the updated score meets or exceeds a threshold. Severity levels may be set for the alarms. The threshold may be pre-configured and/or adapted to requirements. For example, the Message Issuing Unit (MIU) 112 may be adapted to issue one of three message types, namely "Notification", "Alarm" and "Severe Alarm". A Notification message signals a detected suspicious behaviour; an "Alarm" message signals the detection of a shoplifting behaviour; and a "Severe Alarm" message signals a presumptive shoplifting event detection. The message type triggered is dependent on the number of Penalty Points by which a Tracked Personi's Suspect Behaviour Score (SBSi) exceeds the above-mentioned threshold. An example configuration is outlined in Figure 6. Defining Th as the threshold and P as a pre-configurable message trigger, then referring to Figure 6: (i) if 2P ≤ SBSi - Th < 3P the issued message is a "Notification"; (ii) if 3P ≤ SBSi - Th < 4P the issued message is an "Alarm"; (iii) if SBSi - Th ≥ 4P the issued message is a "Severe Alarm"); and

causing the electronic device to execute processing regarding the environment setting, based on the information regarding the environment setting (as cited above, ¶ 130: The Message Issuing Unit (MIU) 112 is adapted to issue notifications or alarms when the updated score meets or exceeds a threshold.).

Both BOYDSTON and CRISFALUSI teach systems that track a person, and those systems are comparable to that of the instant application. Because the two cited references are analogous to the instant application, it would have been obvious before the effective filing date of the claimed invention, to a person having ordinary skill in the art to which the claimed invention pertains, to include in the BOYDSTON disclosure tracking the person utilizing heat map and AI technologies, as taught by CRISFALUSI. Such inclusion would have increased the usefulness of the system by automating the process of detecting behaviours which may be characterised as shoplifting behaviours, and would have been consistent with the rationale of combining prior art elements according to known methods to yield predictable results to show a prima facie case of obviousness (MPEP 2143(I)(A)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007).
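The tier boundaries quoted from CRISFALUSI ¶ 130 / Figure 6 reduce to a simple comparison chain; a minimal sketch, where the function and parameter names (message_type, sbs, th, p) are ours, not the reference's:

```python
# Minimal sketch of CRISFALUSI's tiered messaging (¶ 130 / Figure 6), as
# quoted in the rejection above. Th is the threshold and P the
# pre-configurable message trigger.
def message_type(sbs, th, p):
    """Map a Suspect Behaviour Score (SBSi) to a message tier, or None."""
    excess = sbs - th
    if excess >= 4 * p:
        return "Severe Alarm"   # presumptive shoplifting event detection
    if excess >= 3 * p:
        return "Alarm"          # detected shoplifting behaviour
    if excess >= 2 * p:
        return "Notification"   # detected suspicious behaviour
    return None                 # score is below the messaging band

print(message_type(sbs=25, th=10, p=5))  # excess of 15 = 3P, prints "Alarm"
```

Note that the quoted scheme leaves any excess below 2P unmessaged; the sketch returns None for that band.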
Regarding claim 2, BOYDSTON and CRISFALUSI, for the same motivation of combination, further disclose the non-transitory computer-readable recording medium according to claim 1, wherein the information regarding the environment setting is any one of a type of content to be displayed on the electronic device, an illuminance of an illumination device disposed in the facility, a type of music played in the facility by the electronic device, and a type of a perfume to be sprayed in the facility by the electronic device (CRISFALUSI, ¶ 130: The Message Issuing Unit (MIU) 112 is adapted to issue notifications or alarms when the updated score meets or exceeds a threshold.).

Regarding claim 3, BOYDSTON and CRISFALUSI, for the same motivation of combination, further disclose the non-transitory computer-readable recording medium according to claim 1, wherein the processing of analyzing specifies a first movement trajectory of the person in a first period, included in the acquired video, predicts a second movement trajectory of the person in a second period after the first period based on the specified first movement trajectory, and specifies a third movement trajectory that indicates an actual movement trajectory of the person in the second period by analyzing the acquired video, and the processing of generating the heat map generates the heat map that indicates an error between the second movement trajectory and the third movement trajectory for each region (see CRISFALUSI, ¶ 89: From this, it is possible to establish a histogram of the frequency of individual numbers of loops performed between Aα and Aβ by all the previous Tracked Persons in the store. This histogram will be referred to henceforth as the Aα and Aβ loop histogram. To ensure its currency, the Aα and Aβ loop histogram is updated for each Tracked Person entering the store. The computed variable Loopα,βi is then compared with the Aα and Aβ loop histogram. If Loopα,βi exceeds a certain percentile (e.g. 80%) of the Aα and Aβ loop histogram, it suggests that Tracked Personi may be over-patrolling the region between Aα and Aβ).

Regarding claim 4, BOYDSTON and CRISFALUSI, for the same motivation of combination, further disclose the non-transitory computer-readable recording medium according to claim 3, for causing the computer to further execute processing of: extracting a first attention region in which an error satisfies a predetermined condition and a person in the first attention region, based on the heat map; specifying a first behavior of the person, based on skeleton information of the person in the extracted first attention region; and causing the electronic device positioned in the first attention region to execute processing regarding environment setting associated with a predetermined behavior (see CRISFALUSI, ¶ 105: Repeated Tracked Person looping trajectories around or between adjacent and/or facing shelves combined with repeated bending and squatting activities proximal to the shelves is highly indicative of shop-lifting behaviour), in a case where a first behavior of the specified person is the predetermined behavior (¶ 99: Similarly, by combining the action labels Actioni(t) received from the Human Pose Estimation Module (HPEM) 120 and predicted trajectory data Ĥi received from the Trajectory Computation Module (TCM) 122, it is possible to count the number of squatting and/or bending actions performed by other shoppers in the same region of the store as Tracked Personi over a pre-defined observation period (e.g. one week). From this it is possible to establish the mean (µsq,bnd) and standard deviation (σsq,bnd) of the number of squatting and/or bending actions performed by Tracked Persons in this region of the store, and a histogram of the same observable variable. For brevity, this histogram will be referred to henceforth as the squat-bend histogram. To ensure its currency, the squat-bend histogram is updated periodically in accordance with store management requirements. From µsq,bnd and σsq,bnd it is possible to establish a maximum threshold (ThNsq,bnd) for the number of squatting and/or bending actions performed by a Tracked Person in that region of the store. Alternatively, the maximum threshold (ThNsq,bnd) could be established with reference to a certain percentile of the squat-bend histogram.).

Regarding claim 5, BOYDSTON and CRISFALUSI, for the same motivation of combination, further disclose the non-transitory computer-readable recording medium according to claim 4, for causing the computer to further execute processing of: generating the information regarding the environment setting in the facility, by inputting the first behavior of the person in the first attention region into a machine learning model trained based on a plurality of pieces of training data in which a behavior of a person and information regarding environment setting are set as a pair; specifying a second behavior of the person, by analyzing a video that includes the person in the first attention region, after the processing regarding the environment setting has been executed by the electronic device; and retraining the machine learning model based on remaining training data obtained by excluding training data that corresponds to the pair of the first behavior and the information regarding the environment setting, from the plurality of pieces of training data, in a case where the specified second behavior of the person is not a predetermined behavior (see CRISFALUSI, ¶ 71: It will be appreciated that there are many socially plausible ways that people could move in the store in the future. To model these socially plausible motion paths, the Trajectory Computation Module (TCM) 122 in one arrangement may use a Social GAN architecture such as that disclosed in Gupta, A., Johnson, J., Fei-Fei, L., Savarese, S., & Alahi, A. (2018), Social GAN: Socially acceptable trajectories with generative adversarial networks, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2255-2264). This Social GAN architecture observes motion histories and predicts future behaviour. By training adversarially against a recurrent discriminator, this model can predict socially plausible future movements. However, in the present embodiment, the Social GAN algorithm is modified to include a trajectory classification (normal/suspect) as output. To train this ML algorithm the system counts on some pre-existing manually labelled data on trajectories classified as normal or suspect.).
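Both the loop histogram (¶ 89) and the squat-bend histogram (¶ 99) cited against claims 3 and 4 reduce to the same test: does an observed count exceed a chosen percentile of the historical distribution? A minimal sketch with hypothetical data and names (nothing here comes from the reference itself):

```python
# Percentile test shared by CRISFALUSI's loop histogram and squat-bend
# histogram. History is the per-person counts observed for past shoppers.
def exceeds_percentile(history, value, pct=80):
    """True if value lies strictly above the pct-th percentile of history."""
    ranked = sorted(history)
    cutoff = ranked[min(len(ranked) - 1, int(len(ranked) * pct / 100))]
    return value > cutoff

# Hypothetical loop counts between two store locations for past shoppers:
loops_by_past_shoppers = [0, 0, 1, 1, 1, 2, 2, 3, 4, 6]
print(exceeds_percentile(loops_by_past_shoppers, 5))  # prints True
```

With the 80th-percentile cutoff at 4 loops, a shopper who performs 5 loops would be flagged as possibly over-patrolling, mirroring the "exceeds a certain percentile (e.g. 80%)" language quoted above.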
Regarding claim 6, BOYDSTON and CRISFALUSI, for the same motivation of combination, disclose an information processing method implemented by a computer, the information processing method comprising: acquiring a video imaged in a facility (see rejection of claim 1); tracking a trajectory of a person in the facility, by analyzing the acquired video (see rejection of claim 1); generating a heat map regarding the trajectory of the person in the facility, based on the tracked trajectory of the person (see rejection of claim 1); generating information regarding environment setting in the facility, based on the generated heat map and position information of an electronic device disposed in the facility (see rejection of claim 1); and causing the electronic device to execute processing regarding the environment setting, based on the information regarding the environment setting (see rejection of claim 1).

Regarding claim 7, BOYDSTON and CRISFALUSI, for the same motivation of combination, disclose an information processing apparatus comprising: a memory; and a processor coupled to the memory, the processor being configured to perform processing comprising: acquiring a video imaged in a facility; tracking a trajectory of a person in the facility, by analyzing the acquired video (see rejection of claim 1); generating a heat map regarding the trajectory of the person in the facility, based on the tracked trajectory of the person (see rejection of claim 1); generating information regarding environment setting in the facility, based on the generated heat map and position information of an electronic device disposed in the facility (see rejection of claim 1); and causing the electronic device to execute processing regarding the environment setting, based on the information regarding the environment setting (see rejection of claim 1).
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US 20120154582 A1, SYSTEM AND METHOD FOR PROTOCOL ADHERENCE
US 20100010672 A1, Docking system for a tele-presence robot
US 20090125147 A1, REMOTE CONTROLLED ROBOT SYSTEM THAT PROVIDES MEDICAL IMAGES
US 20070291128 A1, Mobile teleconferencing system that projects an image provided by a mobile robot
US 20070228755 A1, Vehicle observation apparatus

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANK F HUANG whose telephone number is (571) 272-0701. The examiner can normally be reached Monday-Friday, 8:30 am - 6:00 pm (Eastern Time), Federal Alternative First Friday Off.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jay Patel, can be reached at (571) 272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FRANK F HUANG/Primary Examiner, Art Unit 2485

Prosecution Timeline

May 24, 2024
Application Filed
Sep 19, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593052
LOCAL ILLUMINATION COMPENSATION FOR VIDEO ENCODING AND DECODING USING STORED PARAMETERS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12587725
IMAGE CAPTURING DEVICE AND IMAGE CAPTURING METHOD THEREOF
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579815
VIDEO SURVEILLANCE SYSTEM
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12574625
SYSTEM WITH LIGHTING CONTROL INCLUDING GROUPED CHANNELS
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12568248
METHOD AND APPARATUS FOR DECODING A VIDEO SIGNAL
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
Grant Probability with Interview: 92% (+17.3%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 691 resolved cases by this examiner. Grant probability derived from career allow rate.
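The dashboard does not document its formula, but the headline numbers reconcile with the examiner's career counts shown above; a sketch under that assumption (the derivation itself is our guess):

```python
# How the projections appear to be derived from the career data above.
granted, resolved = 519, 691   # examiner's granted / resolved cases
pending, total = 33, 724       # currently pending / total applications

assert resolved + pending == total          # career counts reconcile

base_rate = round(100 * granted / resolved)  # 519/691 -> 75% allow rate
interview = round(base_rate + 17.3)          # applying the stated +17.3% lift
print(base_rate, interview)                  # prints: 75 92
```

The 75% grant probability matches the rounded career allow rate, and adding the stated +17.3% interview lift reproduces the 92% with-interview figure.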
