Prosecution Insights
Last updated: April 19, 2026
Application No. 18/611,698

TRACKING SYSTEM FOR MOVING BODY

Status: Non-Final OA (§103)
Filed: Mar 21, 2024
Examiner: LU, ZHIYU
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: Toyota Jidosha Kabushiki Kaisha
OA Round: 1 (Non-Final)

Grant Probability: 49% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 8m
Grant Probability with Interview: 63%

Examiner Intelligence

Career Allow Rate: 49% of resolved cases (374 granted / 759 resolved; -12.7% vs TC avg)
Interview Lift: +13.9% in resolved cases with an interview (moderate, roughly +14%, vs without)
Typical Timeline: 3y 8m average prosecution
Career History: 816 total applications across all art units; 57 currently pending
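The headline numbers can be cross-checked against the raw counts shown above. A minimal sketch of that arithmetic (variable names are ours, not the tool's):

```python
# Recompute the examiner's career statistics from the raw counts above.
granted = 374
resolved = 759
total_applications = 816

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # -> 49.3%, displayed as 49%

# Total applications minus resolved cases should equal the pending count.
pending = total_applications - resolved
print(f"Currently pending: {pending}")          # -> 57
```

The pending count matches exactly (816 - 759 = 57), which suggests "resolved" here means every disposed application in the examiner's career history.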

Statute-Specific Performance

§101: 2.9% (-37.1% vs TC avg)
§103: 66.6% (+26.6% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 17.0% (-23.0% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 759 resolved cases.
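Since each rate is reported together with its delta against the Tech Center average, the implied TC baseline can be backed out per statute. A quick sketch (the dictionary layout and labels are ours):

```python
# Back out the implied Tech Center baseline for each statute
# from the examiner's rate and the stated delta vs. the TC average.
stats = {
    "\u00a7101": (2.9, -37.1),
    "\u00a7103": (66.6, +26.6),
    "\u00a7102": (11.8, -28.2),
    "\u00a7112": (17.0, -23.0),
}
for statute, (rate, delta_vs_tc) in stats.items():
    tc_avg = rate - delta_vs_tc
    print(f"{statute}: examiner {rate}% vs. implied TC avg {tc_avg:.1f}%")
```

All four statutes back out to the same 40.0% baseline, consistent with the single black Tech Center average estimate noted above.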

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 are rejected under 35 U.S.C. 103 as being unpatentable over Divakaran et al. (US 2014/0347475) in view of Kong et al. (US 2023/0306489).

As to claim 1, Divakaran teaches a tracking system for a moving body, comprising: a memory device in which video data acquired by at least two cameras is stored; and a processor configured to perform data processing based on each video data acquired by the at least two cameras (Fig. 1), wherein, in the data processing, the processor is configured to: generate a graph consisting of at least two nodes and at least one edge indicating a relationship between the at least two nodes (obvious in paragraph 0035: live video streams generated by the cameras 112, 114, 116 may be geo-registered to the 3D model scene that is displayed by the OT GUI 154 to provide direct correlation between camera capture and tracking activities in real time); and store the generated graph in the memory device, wherein, in the generated graph, a node representing a single camera included in the at least two cameras, and a node representing a tracking identification number assigned to a moving body reflected in the image data acquired by the single camera, are connected via at least one edge (paragraph 0027, local ID), wherein the tracking identification number includes a common tracking identification number assigned to the same moving object reflected in the image data acquired by the single camera (paragraph 0027, global ID), wherein, in the generated graph, nodes representing respective single cameras are connected via at least one edge representing a relationship between the at least two single cameras if there is a relationship between the at least two single cameras (Fig. 7, paragraphs 0063, 0100-0110), and wherein, in the generated graph, nodes representing the at least two common tracking identification numbers are connected via at least one edge representing that the at least two moving bodies reflected in each video data captured by the at least two single cameras are the same moving object if the nodes representing the at least two common tracking identification numbers are recognized to be the same moving object (paragraphs 0027-0028, 0086, 0099).

Divakaran, however, does not expressly disclose generating a graph consisting of at least two nodes and at least one edge indicating a relationship between the at least two nodes.

Kong teaches a tracking system for a moving body, comprising: a memory device in which video data acquired by at least two cameras is stored; and a processor configured to perform data processing based on each video data acquired by the at least two cameras (Figs. 1-2), wherein, in the data processing, the processor is configured to generate a graph consisting of at least two nodes and at least one edge indicating a relationship between the at least two nodes (abstract, Figs. 3, 7, paragraphs 0003-0004, 0044-0049, 0068, 0074-0077). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Kong into the system of Divakaran, in order to provide further graphical presentation to the user.

As to claim 2, Divakaran and Kong teach claim 1. Divakaran and Kong teach wherein, in the generated graph, a node representing the common tracking identification number and a node representing additional information about the same moving object to which the common tracking identification number is assigned are connected via at least one edge, wherein the additional information includes at least one of an image of the same moving object to which the common tracking identification number has been assigned, an appearance feature of the same moving object, an action of the same moving object, and a face image of a person if the same moving object is a person (Kong, Figs. 7, 18).

As to claim 3, Divakaran and Kong teach claim 1. Divakaran and Kong teach wherein, in the processing to generate the graph, the processor is configured to: determine whether the at least two moving bodies are the same moving object based on each feature quantity of the at least two moving bodies; and when it is determined that the at least two moving bodies are the same moving object, link the common tracking identification number assigned to each of these moving bodies, wherein, when the common tracking identification numbers respectively assigned to the at least two moving bodies are linked, in the generated graph, the nodes representing the common tracking identification numbers are connected via the at least one edge indicating that the at least two moving bodies are the same object (Divakaran, paragraphs 0027-0028, 0086, 0099; Kong, paragraphs 0056-0057).

As to claim 4, Divakaran and Kong teach claim 1. Divakaran and Kong teach wherein, in the processing to generate the graph, the processor is configured to: verify the determination based on each feature quantity of the at least two moving bodies; and when it is determined that there is a discrepancy in the determination, disconnect the nodes representing the common tracking identification numbers assigned to the at least two moving bodies from each other (Kong, Fig. 2C, paragraphs 0056-0060; obvious in correcting a node ID error).

As to claim 5, Divakaran and Kong teach claim 1. Divakaran and Kong teach wherein the processor is further configured to perform tracking processing of a tracking target by referring to the generated graph with a query as its input, wherein the query includes at least one of a date and time, a location, an image of the tracking target, and a face image of a person if the tracking target is a person (Divakaran, paragraph 0026: embodied as a searchable database or other suitable data structure configured for querying, playback, and/or other uses; a generated graph with a query as its input would have been well known in the art, and it would have been obvious to one of ordinary skill in the art to incorporate it for input feature expansion, hence Official Notice is taken).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHIYU LU, whose telephone number is (571) 272-2837. The examiner can normally be reached weekdays, 8:30 AM - 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen R. Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ZHIYU LU/
Primary Examiner, Art Unit 2665
January 21, 2026
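Read as a data structure, claim 1 recites a graph whose nodes are cameras and tracking identification numbers, with edges for three relationships: camera-to-ID assignment, camera-to-camera relatedness, and same-object links between common tracking IDs (which claim 4 disconnects on a verification discrepancy). A minimal illustrative sketch, using node names and a helper of our own invention, not anything from the application:

```python
# Illustrative sketch of the graph recited in claim 1: camera nodes,
# tracking-ID nodes, and labeled undirected edges for the three
# recited relationships.
from collections import defaultdict

edges = defaultdict(set)

def connect(a, b, relation):
    """Add an undirected, labeled edge between nodes a and b."""
    edges[a].add((b, relation))
    edges[b].add((a, relation))

# A camera node connected to the tracking IDs assigned in its video data.
connect("camera:1", "track:1-7", "assigned")
connect("camera:2", "track:2-3", "assigned")

# Camera nodes connected when the cameras are related (e.g. adjacent views).
connect("camera:1", "camera:2", "related")

# Common-tracking-ID nodes connected when the two moving bodies are
# recognized as the same moving object.
connect("track:1-7", "track:2-3", "same-object")

# Claim 4: on a verification discrepancy, disconnect the same-object edge.
edges["track:1-7"].discard(("track:2-3", "same-object"))
edges["track:2-3"].discard(("track:1-7", "same-object"))
```

The claim mapping above turns on exactly these edge types, so a sketch like this is one way to see what limitation the examiner alleges Divakaran lacks and Kong supplies.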

Prosecution Timeline

Mar 21, 2024: Application Filed
Jan 21, 2026: Non-Final Rejection (§103)
Mar 02, 2026: Interview Requested
Mar 10, 2026: Examiner Interview Summary
Mar 10, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601695: METHOD FOR MEASURING THE DETECTION SENSITIVITY OF AN X-RAY DEVICE (2y 5m to grant; granted Apr 14, 2026)
Patent 12597268: METHOD AND DEVICE FOR DETERMINING LANE OF TRAVELING VEHICLE BY USING ARTIFICIAL NEURAL NETWORK, AND NAVIGATION DEVICE INCLUDING SAME (2y 5m to grant; granted Apr 07, 2026)
Patent 12596187: METHOD, APPARATUS, AND SYSTEM FOR WIRELESS SENSING MEASUREMENT AND REPORTING (2y 5m to grant; granted Apr 07, 2026)
Patent 12592052: INFORMATION PROCESSING DEVICE, AND INFORMATION PROCESSING METHOD (2y 5m to grant; granted Mar 31, 2026)
Patent 12581142: APPROACHES FOR COMPRESSING AND DISTRIBUTING IMAGE DATA (2y 5m to grant; granted Mar 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 49% (63% with interview, +13.9%)
Median Time to Grant: 3y 8m
PTA Risk: Low

Based on 759 resolved cases by this examiner. Grant probability derived from career allow rate.
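The 63% figure appears to be the baseline career allow rate plus the measured interview lift. A sketch of that arithmetic (our reconstruction of the derivation, not the tool's published formula):

```python
# Grant probability with and without an examiner interview, derived
# from the career allow rate and the stated interview lift.
baseline = 374 / 759          # career allow rate, ~49.3%
interview_lift = 0.139        # +13.9% lift in cases with an interview

with_interview = baseline + interview_lift
print(f"Baseline:       {baseline:.0%}")        # -> 49%
print(f"With interview: {with_interview:.0%}")  # -> 63%
```

The sum (49.3% + 13.9% = 63.2%) rounds to the displayed 63%, so the projection is an additive adjustment rather than, say, a conditional rate computed on the interviewed subset alone.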
