Prosecution Insights
Last updated: April 19, 2026
Application No. 18/574,739

ABNORMALITY JUDGMENT DEVICE, ABNORMALITY JUDGMENT METHOD, AND ABNORMALITY JUDGMENT PROGRAM

Non-Final OA (§102, §103)
Filed
Dec 27, 2023
Examiner
OMETZ, RACHEL ANNE
Art Unit
2668
Tech Center
2600 — Communications
Assignee
Nippon Telegraph and Telephone Corporation
OA Round
1 (Non-Final)
69% Grant Probability (Favorable)
1-2 OA Rounds
2y 11m To Grant
99% With Interview

Examiner Intelligence

Career Allow Rate: 69%, above average (18 granted / 26 resolved; +7.2% vs TC avg)
Interview Lift: +30.1% (allow rate in resolved cases with vs. without an interview)
Typical Timeline: 2y 11m avg prosecution; 24 applications currently pending
Career History: 50 total applications across all art units

Statute-Specific Performance

§101: 3.1% (-36.9% vs TC avg)
§103: 62.1% (+22.1% vs TC avg)
§102: 18.8% (-21.2% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 26 resolved cases.

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-8, 10-14, and 16-19 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tanaka (JP-6854959-B1).

Regarding claim 1, Tanaka teaches an abnormality determination device comprising a processor ("CPU," Para [0041]) configured to execute operations comprising:

- detecting an appearance feature of an object ("subject") near a person ("The type of subject is represented by information that can identify the subject based on at least one of its shape and color, such as a canned drink, knife, or shopping basket," Para [0029]) and an appearance of the person ("The person detection unit 132 detects the attributes of a person by determining whether or not the person is an employee based on, for example, the pattern or color of the work clothes or uniform," Para [0052]), person region information of a region representing the person ("The person detection unit 132 detects the position of a person included in a captured image, and further detects the movement of the person whose position has been detected," Para [0051]), and object region information of a region representing the object ("The subject detection unit 133 detects the position of a subject included in a captured image," Para [0053]) from video data representing a motion of the person ("The image acquisition unit may acquire multiple captured image data created in chronological order," Para [0010]);
- extracting a motion ("movement") feature of a motion of the person based on the video data and the person region information ("a person detection unit that detects the movement of a person included in the captured image," Para [0006]);
- extracting a relational feature ("relative relationship") indicating a relationship between the object and the person based on the object region information and the person region information ("When person M picks up object P2 and then puts object P2 into bag B, the behavior estimation device 10 determines the relative relationship as 'person M's hand in contact with object P2 is close to bag B' based on the distance between person M's hand and the subject," Para [0030]); and
- determining whether the motion of the person is abnormal (the person is "shoplifting") based on the appearance feature ("subject types"), the motion feature, and the relational feature ("based on the identified relative relationship and the combination of the subject types of canned drink and handbag, and thereby estimates that person M's behavior is shoplifting," Para [0030]).

Regarding claim 2, the rejection of claim 1 is incorporated herein. Tanaka teaches the device of claim 1, and wherein the appearance feature includes a feature of appearance of each of the objects ("The subject detection unit detects a first subject and a second subject included in the captured image," Para [0013]) and a feature of appearance of the person ("The person detection unit 132 detects the attributes of a person by determining whether or not the person is an employee based on, for example, the pattern or color of the work clothes or uniform," Para [0052]), which are obtained when an object type is determined ("The type of subject is represented by information that can identify the subject based on at least one of its shape and color, such as a canned drink, knife, or shopping basket," Para [0029]).

Regarding claim 3, the rejection of claim 1 is incorporated herein. Tanaka teaches the device of claim 1, and wherein the motion feature is a feature extracted by a motion recognition model ("person detection unit") for recognizing a motion represented by video data ("The person detection unit 132 detects the position of a person included in a captured image, and further detects the movement of the person whose position has been detected," Para [0051]).

Regarding claim 4, the rejection of claim 1 is incorporated herein. Tanaka teaches the device of claim 1, and wherein the relational feature includes a distance between the person and each of the objects ("the relationship determination unit 135 determines the distance between the person and the shelf, which is an example of a first relative relationship, and the distance between the person and the product, which is an example of a second relative relationship," Para [0083]).

Claim 7 is rejected for the same reasoning as claim 3 due to the claims reciting the same subject matter. Claim 8 is rejected for the same reasoning as claim 4 due to the claims reciting the same subject matter.

Claims 5-6, 10-14, and 16-19 are method and computer-readable non-transitory recording medium claims that correspond to device claims 1-4 and 7-8. Therefore, these claims are rejected for the same reasons as claims 1-4 and 7-8.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 9, 15, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tanaka (JP-6854959-B1) as applied to claims 1 and 5 above, and further in view of Bruckschen et al., "Detection of Generic Human-Object Interactions in Video Streams", M. A. Salichs et al. (Eds.): ICSR 2019, LNAI 11876, pp. 108-118, 2019, hereinafter referred to as Bruckschen.

Regarding claim 9, the rejection of claim 3 is incorporated herein. Tanaka teaches the device of claim 3, but fails to teach the following limitations as further claimed. Bruckschen, however, further teaches wherein the motion recognition model is based on a machine learning model ("our system computes for each found interaction the likelihood that it really occurs by tracking it over subsequent frames" and "Our method detects relevant objects inside each frame using regional convolutional neural networks (R-CNNs) [3] and estimates humans and their body pose using the OpenPose system," pg. 109), and the machine learning model detects an object with a bounding box (pg. 109, Fig. 1(a), the bounding box around the coffee machine) and determines an object type (pg. 109, Fig. 1(b), "Coffee Machine" label).

[Image: Bruckschen Fig. 1 (grayscale) — bounding box and "Coffee Machine" label around the detected object]

Bruckschen is considered to be analogous to the claimed invention because they are both in the same field of the detection and tracking of human-object interactions in videos. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have incorporated the teachings of Bruckschen into Tanaka for the benefit of more accurate and real-time reviewing of the videos, especially as more data is fed into the system.

Claims 15 and 20 are method and computer-readable non-transitory recording medium claims that correspond to device claim 9. Therefore, these claims are rejected for the same reason as claim 9.
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Liu et al., "Detecting human-object interaction with multi-level pairwise feature network", Computational Visual Media, vol. 7, pp. 1-11, doi:10.1007/s41095-020-0188-2, October 2020, describes a method for detecting human-object interaction in an image using a pairwise feature network.

Gupta et al., "Observing Human-Object Interactions: Using Spatial and Functional Compatibility for Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 31, No. 10, October 2009, teaches a method for detecting human-object interactions using two computational models.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL A OMETZ whose telephone number is (571) 272-2535. The examiner can normally be reached 6:45am-4:00pm ET Monday-Thursday and 6:45am-1:00pm ET every other Friday.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at 571-272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.

For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Rachel Anne Ometz/
Examiner, Art Unit 2668
12/31/25

/VU LE/
Supervisory Patent Examiner, Art Unit 2668
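The claim-1 pipeline the examiner maps onto Tanaka (appearance features, a motion feature, and a relational feature such as person-object distance feeding an abnormality judgment) can be sketched as follows. This is a minimal illustrative sketch only: every name, the distance-based relational feature, the threshold, and the decision rule are assumptions for exposition, not the claimed implementation or Tanaka's.

```python
from dataclasses import dataclass

# Hypothetical containers; field names are illustrative, not from the application.
@dataclass
class Detection:
    label: str        # appearance feature / object type, e.g. "knife"
    box: tuple        # region information as (x, y, w, h)
    is_person: bool

def center(box):
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def relational_feature(person: Detection, obj: Detection) -> float:
    # Distance between the person region and object region
    # (cf. claim 4's distance-based relational feature).
    (px, py), (ox, oy) = center(person.box), center(obj.box)
    return ((px - ox) ** 2 + (py - oy) ** 2) ** 0.5

def judge_abnormal(person: Detection, obj: Detection,
                   motion_feature: str, distance_threshold: float = 50.0) -> bool:
    # Toy judgment combining appearance, motion, and relational features;
    # the rule itself is a placeholder for a learned model.
    close = relational_feature(person, obj) < distance_threshold
    risky_object = obj.label in {"knife"}
    return close and risky_object and motion_feature == "reaching"
```

For example, a person region at (0, 0, 10, 20) and a knife region at (5, 5, 4, 4) are about 3.6 units apart, so with the motion feature "reaching" the toy rule returns True, and with "walking" it returns False.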

Prosecution Timeline

Dec 27, 2023
Application Filed
Dec 31, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602925
HYPERSPECTRAL IMAGE ANALYSIS USING MACHINE LEARNING
2y 5m to grant; granted Apr 14, 2026
Patent 12555255
ABSOLUTE DEPTH ESTIMATION FROM A SINGLE IMAGE USING ONLINE DEPTH SCALE TRANSFER
2y 5m to grant; granted Feb 17, 2026
Patent 12548354
METHOD FOR PROCESSING CELL IMAGE, ELECTRONIC DEVICE, AND STORAGE MEDIUM
2y 5m to grant; granted Feb 10, 2026
Patent 12541970
SYSTEM AND METHOD FOR ESTIMATING THE POSE OF A LOCALIZING APPARATUS USING REFLECTIVE LANDMARKS AND OTHER FEATURES
2y 5m to grant; granted Feb 03, 2026
Patent 12530735
IMAGE PROCESSING APPARATUS THAT IMPROVES COMPRESSION EFFICIENCY OF IMAGE DATA, METHOD OF CONTROLLING SAME, AND STORAGE MEDIUM
2y 5m to grant; granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2 Expected OA Rounds
69% Grant Probability
99% With Interview (+30.1%)
2y 11m Median Time to Grant
Low PTA Risk
Based on 26 resolved cases by this examiner. Grant probability derived from career allow rate.
