DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5 are rejected under 35 U.S.C. 103 as being unpatentable over Divakaran et al. (US2014/0347475) in view of Kong et al. (US2023/0306489).
Regarding claim 1, Divakaran teaches a tracking system for a moving body, comprising:
a memory device in which video data acquired by at least two cameras is stored; and a processor configured to perform data processing based on each video data acquired by the at least two cameras (Fig. 1),
wherein, in the data processing, the processor is configured to:
generate a graph consisting of at least two nodes and at least one edge indicating a relationship between the at least two nodes (obvious from paragraph 0035: live video streams generated by the cameras 112, 114, 116 may be geo-registered to the 3D model scene that is displayed by the OT GUI 154 to provide direct correlation between camera capture and tracking activities in real time); and
store the generated graph in the memory device,
wherein, in the generated graph, a node representing a single camera included in the at least two cameras, and a node representing a tracking identification number assigned to a moving body reflected in the image data acquired by the single camera are connected via at least one edge (paragraph 0027, local ID),
wherein the tracking identification number includes a common tracking identification number assigned to the same moving object reflected in the image data acquired by the single camera (paragraph 0027, global ID),
wherein, in the generated graph, nodes representing respective single cameras are connected via at least one edge representing a relationship between the at least two single cameras if there is a relationship between the at least two single cameras (Fig. 7, paragraphs 0063, 0100-0110),
wherein, in the generated graph, nodes representing the at least two common tracking identification numbers are connected via at least one edge representing that the at least two moving bodies reflected in each video data captured by the at least two single cameras are the same moving object if the nodes representing the at least two common tracking identification numbers are recognized to be the same moving object (paragraphs 0027-0028, 0086, 0099).
However, Divakaran does not expressly disclose generating a graph consisting of at least two nodes and at least one edge indicating a relationship between the at least two nodes.
Kong teaches a tracking system for a moving body, comprising: a memory device in which video data acquired by at least two cameras is stored; and a processor configured to perform data processing based on each video data acquired by the at least two cameras (Figs. 1-2), wherein, in the data processing, the processor is configured to: generate a graph consisting of at least two nodes and at least one edge indicating a relationship between the at least two nodes (abstract, Figs. 3, 7, paragraphs 0003-0004, 0044-0049, 0068, 0074-0077).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Kong into the system of Divakaran, in order to provide a further graphical presentation to the user.
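For illustration only, and not as part of the claim mapping or prosecution record, the graph structure recited in claim 1 (camera nodes connected to tracking-ID nodes, related cameras connected to each other, and same-object tracking IDs cross-linked) can be sketched as follows; all node names and ID values are hypothetical:

```python
# Hypothetical sketch of the claimed graph: nodes for cameras and
# tracking identification numbers, undirected edges for relationships.
from collections import defaultdict

class TrackingGraph:
    def __init__(self):
        # Adjacency map: node -> set of directly connected nodes.
        self.edges = defaultdict(set)

    def connect(self, a, b):
        # Add an undirected edge indicating a relationship between two nodes.
        self.edges[a].add(b)
        self.edges[b].add(a)

    def neighbors(self, node):
        return self.edges[node]

g = TrackingGraph()
# A camera node is connected to the node for the tracking ID assigned
# to a moving body reflected in that camera's video data.
g.connect(("camera", 1), ("track_id", "cam1-007"))
g.connect(("camera", 2), ("track_id", "cam2-013"))
# Camera nodes with a relationship (e.g., adjacent fields of view)
# are connected to each other.
g.connect(("camera", 1), ("camera", 2))
# Tracking IDs recognized as the same moving object are connected
# by a "same-object" edge.
g.connect(("track_id", "cam1-007"), ("track_id", "cam2-013"))
```

This is only one of many ways such a graph could be realized; the claim itself does not limit the representation to an adjacency map.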
Regarding claim 2, Divakaran and Kong teach claim 1.
Divakaran and Kong teach wherein, in the generated graph, a node representing the common tracking identification number and a node representing additional information about the same moving object to which the common tracking identification number is assigned are connected via at least one edge, wherein, the additional information includes at least one of an image of the same moving object to which the common tracking identification number has been assigned, an appearance feature of the same moving object, an action of the same moving object, and a face image of a person if the same moving object is a person (Kong, Figs. 7, 18).
Regarding claim 3, Divakaran and Kong teach claim 1.
Divakaran and Kong teach wherein, in the processing to generate the graph, the processor is configured to: determine whether the at least two moving bodies are the same moving object based on each feature quantity of the at least two moving bodies; and when it is determined that the at least two moving bodies are the same moving object, link the common tracking identification number assigned to each of these moving bodies, when the common tracking identification numbers respectively assigned to the at least two moving bodies are linked, in the generated graph, the nodes representing the common tracking identification number are connected via the at least one edge indicating that the at least two moving bodies are the same object (Divakaran, paragraphs 0027-0028, 0086, 0099; Kong, paragraphs 0056-0057).
Regarding claim 4, Divakaran and Kong teach claim 1.
Divakaran and Kong teach wherein, in the processing to generate the graph, the processor is configured to: verify the determination based on each feature quantity of the at least two moving bodies; and when it is determined that there is a discrepancy in the determination, the nodes representing the common tracking identification numbers assigned to the at least two moving bodies are disconnected from each other (Kong, Fig. 2C, paragraphs 0056-0060; obvious in view of correcting node ID errors).
Regarding claim 5, Divakaran and Kong teach claim 1.
Divakaran and Kong teach wherein the processor is further configured to perform tracking processing of a tracking target by referring to the generated graph with a query as its input,
wherein, the query includes at least one of a date and time, a location, an image of the tracking target, and a face image of a person if the tracking target is a person (Divakaran, paragraph 0026, embodied as a searchable database or other suitable data structure configured for querying, playback, and/or other uses; referring to a generated graph with a query as its input would have been well-known in the art, and it would have been obvious to one of ordinary skill in the art to incorporate it for input feature expansion; hence Official Notice is taken).
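For illustration only, and not as part of the claim mapping, tracking a target by referring to such a generated graph with a query as its input could amount to a traversal from the node matching the query, collecting every related camera and tracking ID; all names below are hypothetical:

```python
# Hypothetical query over a generated graph: breadth-first traversal
# from the node matching the query, gathering related nodes.
from collections import defaultdict, deque

edges = defaultdict(set)

def connect(a, b):
    # Undirected edge between two graph nodes.
    edges[a].add(b)
    edges[b].add(a)

# Illustrative graph: cameras linked to local tracking IDs, and two
# IDs linked as the same moving object.
connect("cam1", "id:cam1-007")
connect("cam2", "id:cam2-013")
connect("id:cam1-007", "id:cam2-013")  # same moving object

def track(query_node):
    """Return all nodes reachable from the query node, i.e., every
    camera and tracking ID related to the tracking target."""
    seen, queue = {query_node}, deque([query_node])
    while queue:
        for nxt in edges[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# A query naming one tracking ID yields the linked ID and both cameras.
result = track("id:cam1-007")
```

A date/time, location, or image query would first be resolved to one or more matching nodes (e.g., by appearance-feature matching) before the same traversal is applied.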
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHIYU LU whose telephone number is (571) 272-2837. The examiner can normally be reached weekdays, 8:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen R Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
ZHIYU LU
Primary Examiner
Art Unit 2669
/ZHIYU LU/Primary Examiner, Art Unit 2665 January 21, 2026