Prosecution Insights
Last updated: April 19, 2026
Application No. 18/584,794

TECHNIQUES FOR TRACKING ONE OR MORE OBJECTS

Non-Final Office Action: §102, §103
Filed: Feb 22, 2024
Examiner: SETH, MANAV
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 1 (Non-Final)
Grant Probability: 91% (Favorable)
OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 91%, above average (716 granted / 789 resolved; +28.7% vs TC avg)
Interview Lift: +7.8% (moderate), measured on resolved cases with interview
Avg Prosecution: 2y 11m typical timeline; 13 currently pending
Total Applications: 802 across all art units (career history)

Statute-Specific Performance

§101: 19.5% (-20.5% vs TC avg)
§103: 29.0% (-11.0% vs TC avg)
§102: 21.5% (-18.5% vs TC avg)
§112: 15.0% (-25.0% vs TC avg)
Deltas are versus the Tech Center average estimate. Based on career data from 789 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

1. The information disclosure statements (IDS) submitted on 03/28/2024, 06/05/2025 and 09/15/2025 have been considered by the examiner.

Claim Rejections - 35 USC § 102

2. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

3. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

4. Claim(s) 1, 3, 7 and 9-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Jiang et al., 2015, "Combining Passive Visual Cameras and Active IMU Sensors to Track Cooperative People" (pp. 1338-1345).

Regarding claim 1, Jiang discloses "A method, comprising: receiving, via a camera that is in communication with a computer system, a first set of image data representing a field of view of the camera, wherein the first set of image data at least includes data representative of an object in the field of view of the camera" (page 1344 – right column – "VI. CONCLUSION" – "we present a novel tracking system combining the visual and Inertial Measurement Unit (IMU) signals, obtained from surveillance cameras and IMU devices carried by the targets themselves, respectively"; page 1343 – right column – 1st para – "We developed an App to collect IMU signals when the target is moving or standing. The IMU signals are transmitted back to a groundstation by GSM. Meanwhile, a stationary camera collects visual signal of the target person. The visual signal is taken at 30 frames per second"; page 1342 – right column – 1st para – "visual trajectory is in the image coordinate depending on the specific camera viewpoint…We warp the visual trajectory from scene-specific viewpoint" – where the visual signal at the start of the trajectory represents a first set of image data representing a field of view of the camera, wherein the first set of image data at least includes data representative of an object in the field of view of the camera; and a system processing electronic signals here would inherently require components such as a processor and memory to process the signals);

receiving a first set of data corresponding to the object via a first modality and a second set of data corresponding to the object via a second modality (page 1342 – "Integration of Visual and IMU tracking – A. Initialization" – the target is tracked using visual and IMU signals, where the visual modality is the first modality and IMU is the second modality; the visual trajectory and IMU trajectory provide location data corresponding to the object, as a trajectory includes location information, being defined as a sequence of locations (points in space) over time, detailing an object's movement path);

after receiving the first set of data corresponding to the object and the second set of data corresponding to the object, receiving a second set of image data representing the field of view of the camera, wherein the second set of image data does not include data representative of the object; and after receiving the second set of image data, predicting a position of the object using at least the first set of data corresponding to the object and the second set of data corresponding to the object (page 1342 – "Integration of Visual and IMU tracking – C. Re-identification" – "two cases lead to the re-identification: (1) The target disappears in visual tracking such as moving out of the visual field or being occluded by other objects; (2) Visual and IMU trajectories do not match each other, which may be caused by tracking drift…. As shown in Fig. 8(i), IMU keeps tracking the target even when the target is occluded by a tree. The green curve is the IMU trajectory. Meanwhile, the visual pedestrian detector tries to detect pedestrians in a search region estimated by IMU (yellow circle in Fig. 8(i)). If detected, the pedestrian will be tracked by visual tracking for ∆t frames (Fig. 8(k)-Fig. 8(m)). In this system, ∆t is set as 150 frames (5 seconds). If any visual tracking failure happens within the ∆t frames, we go back to the IMU-tracking (Fig. 8(i)). If the tracking within the ∆t frames succeeds, the average distance d between the IMU and visual trajectories during ∆t is computed to judge if they match. If d < dthr, the target pedestrian is re-identified and we go back to the normal tracking again. Otherwise, we go back to the IMU-tracking (Fig. 8(i)) for re-identification. The above cooperative people tracking system elucidates why visual tracking and IMU tracking are 'complementary'. First, when visual tracking fails, IMU tracking keeps working and offers the clue where the target could be, helping visual tracking re-identify the target. Secondly, the visual trajectory corrects the bias of speed and forward direction estimation in IMU tracking by the similarity matrix Hs,k. The calibration coefficient in Eq. 3 is also computed by Hs,k once we know the length of matched visual and IMU trajectories. As we keep updating Hs,k, visual tracking rebuilds the relationship with IMU tracking and rectifies the deviation of the IMU-based tracking trajectory.").

Regarding claim 3, Jiang discloses "The method of claim 1, wherein the first modality includes one or more selected from a group comprising a video modality, audio modality, an inertial measurement modality, and a depth modality, and wherein the second modality includes one or more selected from the group comprising the video modality, the audio modality, the inertial measurement modality, or the depth modality" (see the citations made in the rejection of claim 1).

Regarding claim 7, Jiang discloses "The method of claim 1, wherein the first modality and the second modality are different types of modalities, and wherein predicting the position of the object includes: predicting a positional characteristic of the object using the first set of data and the second set of data" (as cited in the rejection of claim 1, the vision and IMU modalities are different from each other; predicting the position of the object, as cited in Jiang, is estimating the trajectory of the object using vision and IMU data).
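The re-identification scheme the examiner quotes from Jiang reads naturally as a small state machine. The sketch below is illustrative only: the class and variable names are hypothetical and positions are simplified to 1-D scalars, while the 150-frame window (∆t) and the average-distance test (d < dthr) follow the quoted passage.

```python
# Hypothetical sketch of the re-identification logic quoted from Jiang.
# Names are illustrative; positions are simplified to 1-D scalars.

TRACKING, IMU_ONLY, REACQUIRING = "tracking", "imu_only", "reacquiring"
REACQUIRE_FRAMES = 150  # delta-t: 150 frames = 5 s at 30 fps (per Jiang)

class ReIdentifier:
    def __init__(self, dist_thresh):
        self.state = TRACKING
        self.dist_thresh = dist_thresh       # Jiang's d_thr
        self.frames = 0
        self.visual_traj = []
        self.imu_traj = []

    def step(self, visual_pos, imu_pos):
        """Advance one frame. visual_pos is None when the target is lost
        (occluded or outside the visual field); IMU tracking never stops."""
        if self.state == TRACKING and visual_pos is None:
            self.state = IMU_ONLY            # fall back to IMU-only tracking
        elif self.state == IMU_ONLY and visual_pos is not None:
            # Detector fired inside the IMU-estimated search region:
            # try visual tracking for delta-t frames.
            self.state = REACQUIRING
            self.frames, self.visual_traj, self.imu_traj = 0, [], []
        if self.state == REACQUIRING:
            if visual_pos is None:           # visual failure within delta-t
                self.state = IMU_ONLY
            else:
                self.visual_traj.append(visual_pos)
                self.imu_traj.append(imu_pos)
                self.frames += 1
                if self.frames >= REACQUIRE_FRAMES:
                    # Average distance between visual and IMU trajectories
                    d = sum(abs(v - i) for v, i in
                            zip(self.visual_traj, self.imu_traj)) / self.frames
                    self.state = TRACKING if d < self.dist_thresh else IMU_ONLY
        return self.state
```

This mapping is what makes the examiner's anticipation theory concrete: the "second set of image data ... does not include data representative of the object" limitation corresponds to the `visual_pos is None` branch, and the predicted position comes from the IMU trajectory during IMU-only tracking.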
Regarding claim 9, Jiang discloses "The method of claim 1, wherein the first set of data includes a first set of position data corresponding to the object, and wherein the second set of data includes a second set of position data corresponding to the object" (as cited in the rejection of claim 1, the visual trajectory and IMU trajectory provide location data corresponding to the object, as a trajectory includes location information, being defined as a sequence of locations (points in space) over time, detailing an object's movement path).

Claim 10 has been similarly analyzed and rejected per the citations made in the rejection of claim 1. Claim 11 has been similarly analyzed and rejected per the citations made in the rejection of claim 1.

Claim Rejections - 35 USC § 103

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

7. Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Jiang et al., 2015, "Combining Passive Visual Cameras and Active IMU Sensors to Track Cooperative People" (pp. 1338-1345) as applied to claim 1 above, and further in view of Erdem et al., 2015, "Fusing Inertial Sensor Data in an Extended Kalman Filter for 3D Camera Tracking" (pp. 538-548).

Regarding claim 2, claim 2 recites "The method of claim 1, wherein the first set of data and the second set of data are inputs into a filter that includes one or more selected from a group comprising: an extended Kalman filter; an unscented Kalman filter; and a particle filter". Jiang, as cited in the rejection of claim 1, teaches fusing/combining visual and IMU signals to perform robust tracking, but does not explicitly teach a filter that includes one or more selected from a group comprising an extended Kalman filter, an unscented Kalman filter, and a particle filter. However, Erdem discloses an Extended Kalman Filter that combines/fuses inputs such as inertial measurement sensor output and camera output (page 538 – left column – Abstract, and right column – 1st and 2nd paragraphs). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to use an Extended Kalman Filter (EKF) to combine IMU and camera outputs as taught by Erdem in the invention of Jiang. A person having ordinary skill in the art would have been motivated to do so, as an EKF effectively combines noisy, non-linear data from diverse sensors into a single, more accurate and reliable state estimate.

8. Claims 4-6 and 8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
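For context on the proposed Jiang-Erdem combination, the fusion idea can be sketched as a constant-velocity Kalman filter where IMU acceleration drives the prediction step and camera position fixes drive the update step. This is a 1-D linear simplification for illustration only: Erdem's cited work is a full 3-D extended Kalman filter, and all names and noise values here are assumptions, not drawn from either reference.

```python
# Illustrative 1-D constant-velocity Kalman filter fusing IMU acceleration
# (prediction) with camera position fixes (update). In this linear 1-D
# simplification the EKF's linearization step disappears, so read this as
# a sketch of the fusion idea, not the cited method.

DT = 1.0 / 30.0  # camera frame period: 30 fps, per the Jiang citation

class FusionKF:
    def __init__(self, q=1e-3, r=0.05):
        self.x = [0.0, 0.0]                # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
        self.q, self.r = q, r              # process / measurement noise

    def predict(self, accel):
        """Propagate the state using IMU acceleration as control input."""
        p, v = self.x
        self.x = [p + v * DT + 0.5 * accel * DT * DT, v + accel * DT]
        # P <- F P F^T + Q, with F = [[1, DT], [0, 1]], Q = q*I (simplified)
        a, b = self.P[0]
        c, d = self.P[1]
        self.P = [[a + DT * (b + c) + DT * DT * d + self.q, b + DT * d],
                  [c + DT * d, d + self.q]]

    def update(self, z):
        """Correct with a camera position measurement z (H = [1, 0])."""
        s = self.P[0][0] + self.r          # innovation covariance
        k0 = self.P[0][0] / s              # Kalman gain (position)
        k1 = self.P[1][0] / s              # Kalman gain (velocity)
        y = z - self.x[0]                  # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        a, b = self.P[0]
        c, d = self.P[1]
        # P <- (I - K H) P
        self.P = [[(1 - k0) * a, (1 - k0) * b],
                  [c - k1 * a, d - k1 * b]]
```

Fed a camera fix each frame, the estimate converges toward the measured position; when camera fixes drop out (the occlusion case in Jiang), `predict()` alone keeps propagating the IMU-driven estimate, mirroring the complementary behavior the rejection relies on.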
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Manav Seth, whose telephone number is (571) 272-7456. The examiner can normally be reached Monday to Friday from 8:30 am to 5:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Manav Seth/
Primary Examiner, Art Unit 2672
January 9, 2026

Prosecution Timeline

Feb 22, 2024: Application Filed
Jan 09, 2026: Non-Final Rejection (§102, §103)
Feb 20, 2026: Examiner Interview Summary
Feb 20, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597243
INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant; granted Apr 07, 2026

Patent 12579633
PERIODIC-PATTERN BACKGROUND REMOVAL
2y 5m to grant; granted Mar 17, 2026

Patent 12567269
METHOD OF TRAINING IMAGE CAPTIONING MODEL AND COMPUTER-READABLE RECORDING MEDIUM
2y 5m to grant; granted Mar 03, 2026

Patent 12561969
Object Re-Identification Apparatus and Method Thereof
2y 5m to grant; granted Feb 24, 2026

Patent 12555368
Method for Temporal Correction of Multimodal Data
2y 5m to grant; granted Feb 17, 2026
Based on the examiner's 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 91%
With Interview: 98% (+7.8%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 789 resolved cases by this examiner. Grant probability derived from career allow rate.
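The displayed probabilities appear to be simple arithmetic on the career counts shown in the examiner card (716 granted of 789 resolved, plus a +7.8 percentage-point interview lift). A quick check under that assumption; the truncation behavior for the with-interview figure is a guess:

```python
granted, resolved = 716, 789        # career counts from the examiner card
allow_rate = granted / resolved     # career allow rate, about 0.907
print(round(allow_rate * 100))      # 91, matching the displayed 91%

# Adding the +7.8 percentage-point interview lift gives about 98.5; the
# displayed 98% matches if the tool truncates rather than rounds.
print(int(allow_rate * 100 + 7.8))  # 98
```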
