Prosecution Insights
Last updated: April 19, 2026
Application No. 18/645,459

NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING APPARATUS

Status: Non-Final OA (§102)
Filed: Apr 25, 2024
Examiner: SHAH, UTPAL D
Art Unit: 2668
Tech Center: 2600 — Communications
Assignee: Fujitsu Limited
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (652 granted / 743 resolved), +25.8% vs TC avg (above average)
Interview Lift: +11.4% (moderate) across resolved cases with interview
Avg Prosecution: 2y 6m typical timeline; 16 applications currently pending
Career History: 759 total applications across all art units
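The headline figures above follow from the raw counts with simple arithmetic. A minimal sketch, assuming the interview lift is applied as additive percentage points (the dashboard does not state its model, but this assumption matches the displayed 88% and 99%):

```python
# Reproduce the examiner's headline statistics from the raw counts.
granted = 652
resolved = 743

# Career allow rate: granted / resolved, rounded to the nearest percent.
allow_rate = 100 * granted / resolved
print(round(allow_rate))  # 88

# Interview-adjusted grant probability, assuming the +11.4% lift is
# additive in percentage points (an assumption, not a stated formula).
with_interview = allow_rate + 11.4
print(round(with_interview))  # 99
```

Under this reading, 652/743 ≈ 87.8% rounds to the displayed 88%, and 87.8 + 11.4 ≈ 99.2 rounds to the displayed 99%.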

Statute-Specific Performance

§101: 12.1% (-27.9% vs TC avg)
§103: 30.2% (-9.8% vs TC avg)
§102: 30.0% (-10.0% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 743 resolved cases
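The per-statute deltas are internally consistent: assuming each delta is the examiner's rate minus the Tech Center average in percentage points, every statute implies the same baseline. A minimal check under that assumption:

```python
# Recover the implied Tech Center average from each statute's rate and delta.
# Assumption: delta = examiner_rate - tc_average, in percentage points.
stats = {
    "101": (12.1, -27.9),
    "103": (30.2, -9.8),
    "102": (30.0, -10.0),
    "112": (14.4, -25.6),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"§{statute}: implied TC average = {tc_avg:.0f}%")  # 40% for each
```

All four statutes back out a 40% Tech Center average, which would be the level of the black baseline in the chart.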

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1, 2, 3, 5, 7, 9 and 12-14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by JP2018201176A by Nakamura et al. (hereinafter 'Nakamura'). A translation of the Japanese application used for this rejection is included with the office action.

In regards to claim 1, Nakamura teaches a non-transitory computer-readable recording medium having stored therein an information processing program that causes a computer to execute a process, the process comprising: acquiring a video that is captured by one or more camera apparatuses; (See Nakamura page 2, Nakamura teaches capturing videos using cameras.)
identifying a relationship for identifying a behavior between an object and a person included in the video by analyzing the acquired video; (See Nakamura page 4, Nakamura teaches determining relationships between persons identified in the videos.) determining whether the person has performed an abnormal behavior on a product on an outside of an imaging range of the camera apparatus based on the identified relationship; and giving an alert based on a determination result on whether the person has performed the abnormal behavior on the product on the outside of the imaging range. (See Nakamura page 5, Nakamura teaches determining abnormal behavior such as separation from parent and outputting an alert.)

In regards to claim 2, Nakamura teaches the process further including: acquiring a plurality of videos that are captured by a plurality of camera apparatuses installed in a store and that include different areas captured by the plurality of camera apparatuses; identifying a first relationship for identifying a correlation between an object and the person included in a video in which a first area is captured by analyzing the video in which the first area is captured among the plurality of acquired videos; identifying a second relationship for identifying a correlation between an object and the person included in a video in which a second area is captured by analyzing the video in which the second area is captured among the plurality of acquired videos; (See Nakamura page 4, Nakamura teaches determining a second relationship using second cameras.) determining whether the person has performed an abnormal behavior on a product in an area that is located between the first area and the second area and that is located on an outside of imaging ranges of the plurality of camera apparatuses based on the first relationship and the second relationship; and giving an alert if it is determined that the person has performed the abnormal behavior. (See Nakamura page 5, Nakamura teaches determining abnormal behavior such as separation from parent and outputting an alert.)

In regards to claim 3, Nakamura teaches wherein a time of the video in which the second area is captured is later than a time of the video in which the first area is captured. (See Nakamura page 3).

In regards to claim 5, Nakamura teaches the process further including: identifying a first person for whom the identified relationship temporally changes from a first relationship to a second relationship based on the acquired video, wherein the determining whether the person has performed an abnormal behavior on the product includes determining whether the first person has performed an abnormal behavior on the product on the outside of the imaging range based on the identified relationship. (See Nakamura page 8).

In regards to claim 7, Nakamura teaches the process further including: identifying an area that is an area in which the person has performed an abnormal behavior on the product, that is located between the first area and the second area, and that is located on an outside of imaging ranges of the plurality of camera apparatuses, based on the plurality of the camera apparatuses that have performed image capturing, wherein the giving the alert includes giving the alert indicating occurrence of abnormality on the product in association with the identified area that is located on the outside of the imaging ranges of the plurality of camera apparatuses. (See Nakamura page 5).

In regards to claim 9, Nakamura teaches wherein the identifying the first person includes generating a scene graph that identifies the relationship for each of the persons included in the video by inputting the acquired video to a machine learning model; and identifying the first person by analyzing the scene graph. (See Nakamura page 4).
In regards to claim 12, Nakamura teaches the process further including: identifying a position of the person included in each of the videos that are captured by the respective camera apparatuses 100 by a first index that is different for each of the camera apparatuses; identifying the positions of the persons identified by the first indices by using a second index that is common among the plurality of camera apparatuses; and determining whether the persons included in the respective videos are an identical person based on the positions of the persons identified by using the second index. (See Nakamura page 4).

Claims 13 and 14 recite limitations that are similar to those of claim 1. Therefore, claims 13 and 14 are rejected similarly to claim 1.

Allowable Subject Matter

Claims 4, 6, 8 and 10-11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter. In regards to claims 4, 6, 8 and 10-11, the applied art does not teach or suggest the claimed limitations:

In regards to claim 4, Nakamura does not teach wherein the first relationship indicates that the person holds the product and a predetermined object that is used for shoplifting of the product, and the second relationship indicates that the person holds the predetermined object that is used for shoplifting of the product, the process further including: determining, when the predetermined object that is held by the person in the first relationship is also held in the second relationship and when the product that is held in the person in the first relationship is not held in the second relationship, that the person has performed an abnormal behavior on the product in an area that is located between the first area and the second area and that is located on an outside of imaging ranges of the plurality of camera apparatuses.
In regards to claim 6, Nakamura does not teach wherein the identifying the relationship includes identifying, from the video, a first area including the object, a second area including the person, and a first relationship for identifying a correlation between the object included in the first area and the person included in the second area by inputting the acquired video to a machine learning model; and identifying, from the video, a third area including the object, a fourth area including the person, and a second relationship for identifying a correlation between the object included in the third area and the person included in the fourth area by inputting the acquired video to a machine learning model, and the determining whether the person has performed an abnormal behavior on the product includes, when the person included in the second area and the person included in the fourth area are identical, determining whether the person has performed an abnormal behavior on the product by comparing the identified first relationship, the identified second relationship, and a rule that is set in advance.

In regards to claim 8, Nakamura does not teach wherein the determining whether the person has performed an abnormal behavior on the product includes determining whether the person has performed an abnormal behavior including one of shoplifting and a behavior that leads to shoplifting on the product on an outside of an imaging range of the camera apparatus based on the identified first relationship and the identified second relationship.
In regards to claim 10, Nakamura does not teach wherein the identifying the relationship includes extracting a first feature value that corresponds to one of the object and the person from the video; detecting the object and the person included in the video from the extracted first feature value; generating a second feature value that is a combination of the plurality of detected objects, the plurality of detected persons, and the first feature value of one of the object and the person in at least a single pair of the object and the person; generating a first map that indicates the plurality of objects, the plurality of persons, and the relationship for identifying at least a single correlation between the object and the person based on the first feature value and the second feature value; extracting a fourth feature value based on a third feature value that is obtained by converting the first feature value and based on the first map; and identifying the relationship from the fourth feature value.

In regards to claim 11, Nakamura does not teach wherein the identifying the relationship includes generating skeleton information on the person by analyzing the acquired video; identifying the first relationship based on the generated skeleton information; and identifying the second relationship based on the generated skeleton information.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to UTPAL D SHAH whose telephone number is (571) 272-5729. The examiner can normally be reached M-F: 7:30-5:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at (571) 272-7332.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/UTPAL D SHAH/
Primary Examiner, Art Unit 2668

Prosecution Timeline

Apr 25, 2024: Application Filed
Feb 20, 2026: Non-Final Rejection, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602948: Generating Computer Augmented Maps from Physical Maps
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12602914: PROVIDING USER GUIDANCE TO USE AND TRAIN A GENERATIVE ADVERSARIAL NETWORK
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12597242: DETERMINING EMITTER IDENTIFICATION INFORMATION TO A DESIRED ACCURACY
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12597088: QUALITY FACTOR USING RECONSTRUCTED IMAGES
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12597151: SYSTEMS AND METHODS TO DETERMINE VEGETATION ENCROACHMENT ALONG A RIGHT-OF-WAY
Granted Apr 07, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview (+11.4%): 99%
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 743 resolved cases by this examiner. Grant probability derived from career allow rate.
