Prosecution Insights
Last updated: April 19, 2026
Application No. 18/618,134

APPARATUS FOR IDENTIFYING ITEMS, METHOD FOR IDENTIFYING ITEMS AND ELECTRONIC DEVICE

Non-Final OA: §102, §103, §112
Filed: Mar 27, 2024
Examiner: DEPALMA, CAROLINE ELIZABETH
Art Unit: 2675
Tech Center: 2600 — Communications
Assignee: Fujitsu Limited
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
OA Rounds: 1-2
To Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (above average): 37 granted / 42 resolved, +26.1% vs TC avg
Interview Lift: +15.6% (strong), based on resolved cases with interview
Avg Prosecution: 2y 11m typical timeline; 16 currently pending
Total Applications: 58 across all art units (career history)

Statute-Specific Performance

§101: 18.4% (-21.6% vs TC avg)
§103: 29.9% (-10.1% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 26.7% (-13.3% vs TC avg)
Tech Center average estimates shown for comparison • Based on career data from 42 resolved cases

Office Action

§102 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

Detector (recited in claim 1)
Tracker (recited in claims 1, 3, 5, 11)
Classifier (recited in claim 1)
Pre-processor (recited in claim 2)
Post-processor (recited in claim 4)
Synthesizer (recited in claims 6, 7, 8)
Cropper (recited in claim 6)

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 11-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 11 recites the limitation "the tracker" (pg. 2, line 20). There is insufficient antecedent basis for this limitation in the claim. Dependent claims 12-15 are similarly rejected.

Claim 16 recites the limitation "the circuitry" (pg. 3, line 10). There is insufficient antecedent basis for this limitation in the claim.
Dependent claims 17-18 are similarly rejected.

Claim 17 recites the limitation "the circuitry" (pg. 3, line 14). There is insufficient antecedent basis for this limitation in the claim.

Claim 18 recites the limitation "the circuitry" (pg. 3, line 20). There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 4, 9-12, and 14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhang (Zhang, Yicheng, and Qiang Ling, "Bicycle detection based on multi-feature and multi-frame fusion in low-resolution traffic videos," arXiv preprint arXiv:1706.03309 (2017)).

Regarding claim 1, Zhang discloses an apparatus for identifying items (Col. 2, para. 4: system for detecting bicycles in images), characterized in that the apparatus comprises: a detector configured to detect one or more items in a reference area in one or more image frames in video data (Fig. 2 (steps 1, 3, 4); Col. 8, steps 2-4: the foreground of a video frame is separated from the background (i.e. reference area) and objects are detected); a tracker configured to track an item detected in multiple image frames, wherein multi-hierarchy decision is performed on the item in the multiple image frames by using different time windows (Col. 6, section A (e.g. equation 6); Col. 8-9, steps 8-9: tracking an object in the video frames, wherein an object is detected (or determined to not be present) in individual frames and then an overall detection result is made based on whether the object was detected in a majority of the frames (i.e. multi-hierarchy decision using different time windows, the time windows being 1 frame and multiple frames)); and a classifier configured to identify the item according to a decision result of the tracker (Col. 6, section A: the output of the multi-hierarchy decision is a classification of the object (e.g. as a bicycle)).

Regarding claim 2, Zhang discloses the apparatus according to claim 1 as applied above. Zhang further discloses wherein the apparatus further comprises: a pre-processor configured to preprocess the image frames in the video data, wherein at least a part of outer edge areas of the detected item are segmented and removed, and the removed areas are filled with the reference area (Fig. 2; Col. 8, step 3: the image frames are processed prior to object detection, wherein erosion may be performed (i.e. segment and remove parts of edges of objects and replace with surrounding area)).

Regarding claim 4, Zhang discloses the apparatus according to claim 1 as applied above. Zhang further discloses wherein the apparatus further comprises: a post-processor configured to perform at least one piece of the following post-processing on the tracking result: deleting a tracklet with a track length less than a preset threshold, deleting a tracklet classified as a background, splitting a tracklet, or merging multiple tracklets with identical identifiers into one tracklet (Col. 7, para. 2-3: post-processing is performed after the tracking result is determined, wherein detected objects (i.e. tracklets) are merged when the number of frames between them where the object is not detected is below a threshold (i.e. merged when it is determined that an identical object has been detected in both)).

Regarding claim 9, Zhang discloses everything claimed as applied above (see rejection of claim 1).

Regarding claim 10, Zhang discloses the method for identifying items as claimed in claim 9 as applied above. Zhang further discloses an electronic device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor is configured to execute the computer program (Col. 9, section VI, para. 1: the method is performed on a personal computer (i.e. electronic device including a memory storing computer programs) and including a processor).

Regarding claim 11, Zhang discloses everything claimed as applied above (see rejection of claim 1) with the addition of an apparatus for identifying items, the apparatus comprising: circuitry (Col. 9, section VI, para. 1: the method is performed on a generic personal computer (i.e. including circuitry); Col. 2, para. 4: system for detecting bicycles in images).

Regarding claim 12, Zhang discloses everything claimed as applied above (see rejection of claim 2).

Regarding claim 14, Zhang discloses everything claimed as applied above (see rejection of claim 4).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (Zhang, Yicheng, and Qiang Ling, "Bicycle detection based on multi-feature and multi-frame fusion in low-resolution traffic videos," arXiv preprint arXiv:1706.03309 (2017)) in view of Keivani (A. Keivani, J.-R. Tapamo and F. Ghayoor, "Motion-based moving object detection and tracking using automatic K-means," 2017 IEEE AFRICON, Cape Town, South Africa, 2017, pp. 32-37, doi: 10.1109/AFRCON.2017.8095451).

Regarding claim 3, Zhang discloses the apparatus according to claim 1 as applied above. Zhang fails to disclose wherein the tracker maintains a dynamic surface feature sequence for a tracklet, a distance between any two features in the surface feature sequence being greater than a preset threshold.
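The limitation just stated, that any two features kept for a tracklet must differ by more than a preset threshold, can be illustrated with a short sketch: admit a new feature only when it is sufficiently far from everything already stored. This is an illustrative reading of the claim language, not code from the application; the function name and the Euclidean metric are assumptions.

```python
import numpy as np

def update_feature_sequence(sequence: list, feature: np.ndarray,
                            threshold: float) -> list:
    """Append `feature` only if its distance to every stored feature
    exceeds `threshold`, preserving the claimed pairwise-distance invariant.
    (Hypothetical helper; metric and name are assumptions.)"""
    if all(np.linalg.norm(feature - f) > threshold for f in sequence):
        sequence.append(feature)
    return sequence

seq: list = []
for f in [np.array([0.0, 0.0]), np.array([0.1, 0.0]), np.array([1.0, 0.0])]:
    update_feature_sequence(seq, f, threshold=0.5)
print(len(seq))  # 2: the near-duplicate [0.1, 0.0] is rejected
```

Under this reading, the sequence stays "dynamic" (it grows as the tracklet is observed) while near-duplicate appearances are filtered out.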
Keivani, in a related system from the same field of endeavor of object detection and tracking across frames based on feature vectors (Abstract), discloses wherein the tracker maintains a dynamic surface feature sequence for a tracklet, a distance between any two features in the surface feature sequence being greater than a preset threshold (Fig. 2, 3, 4; section II.D: calculating the distance (e.g. magnitude of motion) between features in image frames for an object of interest (i.e. tracklet); section II.E: determining if the distance between the features exceeds a threshold, and only including the features which exceed the threshold as being features of an object of interest (i.e. only features that meet the threshold are included in the tracklet's feature sequence)).

It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to combine Keivani with Zhang and maintain a dynamic surface feature sequence for a tracklet based on distances being greater than a threshold, as disclosed by Keivani, as part of an apparatus for identifying items, as disclosed by Zhang, for the purpose of accurate and efficient object detection and tracking, such as for use in surveillance and activity recognition (see Keivani: Abstract, Conclusion).

Regarding claim 13, Zhang in view of Keivani discloses everything claimed as applied above (see rejection of claim 3).

Claims 6-8 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (Zhang, Yicheng, and Qiang Ling, "Bicycle detection based on multi-feature and multi-frame fusion in low-resolution traffic videos," arXiv preprint arXiv:1706.03309 (2017)) in view of Rangarajan (US 20220405587 A1).

Regarding claim 6, Zhang discloses the apparatus according to claim 1 as applied above.
Zhang fails to disclose wherein the apparatus further comprises: a synthesizer configured to perform image synthesis on one or more items and the reference area; and a cropper configured to crop the synthesized image to form one or more detection samples for use in training.

Rangarajan, in a related system from the same field of endeavor of generating images for training computer vision tasks such as object detection (Abstract, [0026]), discloses wherein the apparatus further comprises: a synthesizer configured to perform image synthesis on one or more items and the reference area; and a cropper configured to crop the synthesized image to form one or more detection samples for use in training (Fig. 1, 2, [0144]: user interface for generating (i.e. synthesizing) synthetic training data for a computer vision system; [0151]: wherein the generated training image includes a background image (i.e. reference area) and an item (e.g. car); Fig. 3-4, [0155], [0164]: and wherein the image may be cropped at different crop levels to form different versions of the image (see also [0079])).

It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to combine Rangarajan with Zhang and perform image synthesis and image cropping to generate samples for use in training, as disclosed by Rangarajan, as part of an apparatus for identifying items, as disclosed by Zhang, for the purpose of quickly and easily generating training data for performing computer vision tasks, which reduces workload and expands accessibility of complex computing systems (see Rangarajan: [0010]-[0012], [0022]-[0024]).

Regarding claim 7, Zhang in view of Rangarajan discloses the apparatus according to claim 6 as applied above.
Zhang fails to disclose wherein the synthesizer performs the image synthesis according to at least one of the following parameters: the number of items in the reference area, a degree of overlap or occlusion ratio of the items, or a scaling ratio of the items.

Rangarajan, in a related system from the same field of endeavor of generating images for training computer vision tasks such as object detection (Abstract, [0026]), discloses wherein the synthesizer performs the image synthesis according to at least one of the following parameters: the number of items in the reference area, a degree of overlap or occlusion ratio of the items, or a scaling ratio of the items (Fig. 3, 4, [0084], [0155]: image synthesis is based on occlusion percentage of the object, addition of 'distraction objects' to the background (i.e. based on altering the number of items in the reference area, see also Fig. 5), and/or adjusting the zoom and field of view of the object with respect to the background area (i.e. a scaling ratio of the items to the reference area)).

It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to combine Rangarajan with Zhang and perform image synthesis based on parameters such as number of items, occlusion ratio, or scaling ratio to generate samples for use in training, as disclosed by Rangarajan, as part of an apparatus for identifying items, as disclosed by Zhang, for the purpose of quickly and easily generating training data for performing computer vision tasks, which reduces workload and expands accessibility of complex computing systems (see Rangarajan: [0010]-[0012], [0022]-[0024]).

Regarding claim 8, Zhang in view of Rangarajan discloses the apparatus according to claim 6 as applied above.
Zhang fails to disclose wherein the synthesizer performs at least one piece of the following processing on the one or more items: increasing or decreasing image brightness, increasing or decreasing a degree of overlap, changing shooting perspectives of the items, or enhancing texture features of the items.

Rangarajan, in a related system from the same field of endeavor of generating images for training computer vision tasks such as object detection (Abstract, [0026]), discloses wherein the synthesizer performs at least one piece of the following processing on the one or more items: increasing or decreasing image brightness, increasing or decreasing a degree of overlap, changing shooting perspectives of the items, or enhancing texture features of the items (Fig. 3-4, [0166]: altering the brightness in regions of the image to generate varied training images; Fig. 3-4, [0169]-[0171]: a view angle (i.e. shooting perspective) of the item is changed to generate different training images).

It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to combine Rangarajan with Zhang and perform image synthesis including processing such as changing image brightness and changing shooting perspectives of items, as disclosed by Rangarajan, as part of an apparatus for identifying items, as disclosed by Zhang, for the purpose of quickly and easily generating training data for performing computer vision tasks, which reduces workload and expands accessibility of complex computing systems (see Rangarajan: [0010]-[0012], [0022]-[0024]).

Regarding claim 16, Zhang in view of Rangarajan discloses everything claimed as applied above (see rejection of claim 6).

Regarding claim 17, Zhang in view of Rangarajan discloses everything claimed as applied above (see rejection of claim 7).

Regarding claim 18, Zhang in view of Rangarajan discloses everything claimed as applied above (see rejection of claim 8).
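The synthesizer/cropper claims mapped above (claims 6-8) describe pasting items into a reference area under parameters such as a scaling ratio, then cropping the result into training samples. A toy NumPy sketch of that pipeline follows; the function names, the nearest-neighbour scaling, and the grayscale arrays are all invented here for illustration, not taken from the application or Rangarajan.

```python
import numpy as np

def synthesize(background: np.ndarray, item: np.ndarray,
               top: int, left: int, scale: int = 1) -> np.ndarray:
    """Paste `item` into `background` ("reference area") after scaling it
    by an integer factor (nearest-neighbour via np.kron). Hypothetical."""
    item = np.kron(item, np.ones((scale, scale), dtype=item.dtype))
    out = background.copy()
    h, w = item.shape
    out[top:top + h, left:left + w] = item
    return out

def crop(image: np.ndarray, top: int, left: int, size: int) -> np.ndarray:
    """Cut a square detection sample out of the synthesized image."""
    return image[top:top + size, left:left + size]

bg = np.zeros((32, 32), dtype=np.uint8)      # empty reference area
item = np.full((4, 4), 255, dtype=np.uint8)  # a bright 4x4 "item"
synth = synthesize(bg, item, top=10, left=12, scale=2)  # item becomes 8x8
sample = crop(synth, top=8, left=10, size=16)           # 16x16 sample
print(sample.shape)  # (16, 16)
```

The claimed occlusion ratio and item count would enter the same way: as parameters controlling how many patches are pasted and how much they overlap before cropping.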
Allowable Subject Matter

Claim 5 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim 15 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, and the rejection(s) under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, set forth in this Office action, and to include all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter: Regarding claim 5, Zhang discloses the apparatus according to claim 1 as applied above. However, Zhang fails to disclose wherein the tracker processes a center and proportion of a tracklet by using separate Kalman filters, wherein linear Kalman filtering is performed on the center of the tracklet, and nonlinear Kalman filtering is performed on the proportion of the tracklet. Similar reasoning applies to claim 15.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Srivastava (US 20190005343) discloses automated self-checkout of items in a store in which a vote/consensus system (i.e. multi-hierarchy decision) is used to compare item detection in individual frames to track an object over a larger time period. Zou (US 20210397844) discloses multi-hierarchy decision (e.g. capturing multiple frames and performing object detection and fusing the results to output an object detection result) for tracking an item. Ohrn (US 20230360360 A1) discloses tracking objects in video data including identifying features and weighting features to influence object re-identification based on deviation threshold.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAROLINE DEPALMA whose telephone number is (571) 270-0769.
The examiner can normally be reached Mon-Thurs, 9:00am-4:00pm Eastern Time.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Moyer, can be reached at 571-272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CAROLINE E. DEPALMA/
Examiner, Art Unit 2675

/ANDREW M MOYER/
Supervisory Patent Examiner, Art Unit 2675
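The allowable-subject-matter reasoning above (claim 5) turns on running separate Kalman filters: a linear one on a tracklet's center and a nonlinear one on its proportion. As a point of reference, here is only the linear half, a textbook 1-D constant-velocity Kalman filter for one center coordinate. The matrices and noise values are generic assumptions, not taken from the application; the nonlinear filter for the proportion (e.g. an EKF) is deliberately omitted.

```python
import numpy as np

# Generic constant-velocity Kalman filter for one center coordinate.
F = np.array([[1.0, 1.0],      # state transition: position += velocity
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])     # we observe position only
Q = 1e-3 * np.eye(2)           # process noise (assumed)
R = np.array([[0.25]])         # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict + update cycle for a scalar position measurement z."""
    x = F @ x                              # predict state
    P = F @ P @ F.T + Q                    # predict covariance
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y                          # update state
    P = (np.eye(2) - K @ H) @ P            # update covariance
    return x, P

x, P = np.zeros(2), np.eye(2)
for z in [1.0, 2.1, 2.9, 4.2, 5.0]:        # noisy center positions
    x, P = kf_step(x, P, np.array([z]))
print(float(x[0]))  # filtered center, close to the last measurement
```

A tracklet's proportion (aspect ratio) evolves nonlinearly with perspective, which is presumably why the claim pairs it with a nonlinear filter rather than this linear form.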

Prosecution Timeline

Mar 27, 2024
Application Filed
Feb 23, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602777: APPARATUS AND METHOD FOR QUANTITATIVE ASSESSMENT OF MEDICAL IMAGES FOR DIAGNOSIS OF CHRONIC OBSTRUCTIVE PULMONARY DISEASE
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12586409: DETECTING EMOTIONAL STATE OF A USER BASED ON FACIAL APPEARANCE AND VISUAL PERCEPTION INFORMATION
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586246: SYSTEM AND METHOD FOR VICARIOUS CALIBRATION OF OPTICAL DATA FROM SATELLITE SENSORS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12573046: METHODS AND SYSTEMS FOR ANALYZING BRAIN LESIONS FOR THE DIAGNOSIS OF MULTIPLE SCLEROSIS
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12567226: METHOD AND DEVICE OF ACQUIRING FEATURE INFORMATION OF DETECTED OBJECT, APPARATUS AND MEDIUM
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview (+15.6%): 99%
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 42 resolved cases by this examiner. Grant probability derived from career allow rate.
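The headline figures above follow directly from the examiner's record: 37 grants out of 42 resolved cases rounds to 88%, and adding the observed +15.6-point interview lift exceeds 100%, so the with-interview figure is shown capped. The cap at 99% is an assumption about how the dashboard presents the number, not a stated rule.

```python
# Reconciling the dashboard's numbers with the raw record.
granted, resolved = 37, 42
allow_rate = granted / resolved
print(round(allow_rate * 100))   # 88  (career allow rate)

interview_lift = 15.6            # percentage points (from the record)
with_interview = min(allow_rate * 100 + interview_lift, 99.0)  # cap assumed
print(round(with_interview))     # 99  (grant probability with interview)
```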
