Prosecution Insights
Last updated: April 19, 2026
Application No. 18/518,605

VIDEO MANAGEMENT APPARATUS, VIDEO MANAGEMENT METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM

Status: Final Rejection (§102, §103)
Filed: Nov 24, 2023
Examiner: ALSOMAIRY, IBRAHIM ABDOALATIF
Art Unit: 3667
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Toyota Jidosha Kabushiki Kaisha
OA Round: 2 (Final)

Grant Probability: 40% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 2m
Grant Probability with Interview: 49%

Examiner Intelligence

Career Allow Rate: 40% (33 granted / 82 resolved; -11.8% vs TC avg)
Interview Lift: +8.4% for resolved cases with interview (moderate lift)
Avg Prosecution: 3y 2m (typical timeline)
Currently Pending: 43
Total Applications: 125 (career history, across all art units)
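The headline figures above can be reproduced from the raw counts. A minimal sketch, assuming the dashboard simply rounds the career ratio and adds the interview lift (the rounding convention and the additive model are assumptions, not documented by the report):

```python
# Sketch: reproduce the examiner-intelligence card from the raw counts.
granted = 33   # career grants
resolved = 82  # career resolved cases

allow_rate = granted / resolved * 100  # career allow rate, in percent
interview_lift = 8.4                   # reported lift, in percentage points

# Naive additive estimate of the "with interview" probability.
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.1f}%")      # ~40.2%, shown as 40%
print(f"With interview:    {with_interview:.0f}%")  # ~49%, matching the card
```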

Statute-Specific Performance

§101: 14.7% (-25.3% vs TC avg)
§103: 54.8% (+14.8% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)

Deltas are measured against a Tech Center average estimate • Based on career data from 82 resolved cases
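The per-statute deltas imply a Tech Center baseline that the card never states numerically. A minimal sketch backing it out, assuming each delta is the examiner's rate minus the TC average (dictionary names are illustrative only):

```python
# Sketch: back out the Tech Center average estimate per statute.
# Assumed convention: delta = examiner rate - TC average, so
# TC average = examiner rate - delta.
examiner_rate = {"§101": 14.7, "§103": 54.8, "§102": 8.7, "§112": 18.1}
delta_vs_tc = {"§101": -25.3, "§103": 14.8, "§102": -31.3, "§112": -21.9}

tc_average = {
    statute: round(examiner_rate[statute] - delta_vs_tc[statute], 1)
    for statute in examiner_rate
}

print(tc_average)
# Every statute backs out to the same 40.0% baseline, i.e. a single
# TC-average estimate underlies all four deltas.
```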

Office Action

Grounds of rejection: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is a Final Action on the Merits. Claims 1-20 are currently pending and are addressed below.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on December 22, 2025 has been considered and entered.

Response to Amendments

The amendment filed on December 22, 2025 has been considered and entered. Accordingly, claims 1, 8, and 15 have been amended.

Response to Arguments

The previous rejection of claims 1-20 under 35 U.S.C. 101 has been overcome by the applicant’s amendments. The applicant’s arguments with respect to claims 1-20 have been considered but are moot in view of the newly formulated grounds of rejection necessitated by the applicant’s amendments.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. 
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) are: “a controller configured to display” in at least claim 1.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The published specification describes corresponding structure for the claim limitation in paragraph 41.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 8, and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Amento (US 20180343488 A1) (“Amento”).

With respect to claim 1, Amento teaches a video management apparatus capable of communicating with a terminal apparatus, the video management apparatus comprising: a controller configured to: display, an icon visualizing a shooting position and a shooting direction of a video captured by each vehicle on a map, within a predetermined area including a designated point on the map displayed on a screen of the terminal apparatus (See at least Amento FIGS. 4-5D and Paragraph 2 “In various embodiments, the functionality of a mobile computing node can be provided by a vehicle, a drone, an airplane, other types of vehicles, and/or computing systems thereof” | Paragraph 56 “As explained above with reference to FIG. 1 with reference to the video request 118, the request received in operation 202 can include data that specifies one or more user, vehicle, or other entity; one or more filters; one or more requirements; one or more geographic locations; and/or other parameters or the like associated with the video or images that are being requested by way of the request received in operation 202. 
Thus, for example, the request received in operation 202 can specific a particular user, a particular location, a particular vehicle, or the like” | Paragraphs 73-77 “The method 400 begins at operation 402. At operation 402, the computing device 120 can send a request for video. In some embodiments, the request sent in operation 402 can be sent to the server computer 102, though this is not necessarily the case. The request sent in operation 402 can be similar or even identical to the video request 118 illustrated and described above with reference to FIG. 1 … From operation 402, the method 400 can proceed to operation 404. At operation 404, the computing device 120 can receive a list of mobile computing nodes 108 such as, for example, the list 122 illustrated and described herein with reference to FIG. 1. In some embodiments, the list received in operation 404 can be received from the server computer 102, though this is not necessarily the case … As explained above, the list 122 received in operation 404 can include data that specifies one or more mobile computing nodes 108. In some other embodiments, the list 122 received in operation 404 can include a map display that shows the available mobile computing nodes 108 (that satisfy the requirements associated with the request). Thus, in some embodiments, operation 406 can correspond to the computing device 120 presenting a user interface via which a user or other entity can select one of multiple available mobile computing nodes 108. Some example UIs for presenting the list 122 are illustrated and described below with reference to FIGS. 5A-5D. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation 406, the method 400 can proceed to operation 408. At operation 408, the computing device 120 can obtain and send a selection such as the selection 124. 
The selection 124 obtained and/or provided in operation 406 can indicate a mobile computing node 108, from the list 122 of mobile computing nodes 108, that the requestor wants to obtain content (e.g., streaming video) from. Also, it should be understood that a user or other entity may specify more than one mobile computing node 108, e.g., a backup mobile computing node 108 can be selected in case a first or primary mobile computing node 108 becomes unavailable, moves out of a specified location, loses connectivity, or the like. It should be understood that multiple selections 124 may be provided, in some embodiments.” | Paragraph 93 “More particularly, the illustrated map display 522 shows three mobile computing nodes 108, as shown by the associated mobile computing node location and bearing indicators 524. As shown, the mobile computing node location and bearing indicators 524 have associated thumbnails 526. The thumbnails 526 show a recent or current image associated with a capture device of the mobile computing nodes 108 associated with the mobile computing node location and bearing indicators 524. Thus, with reference to the map display 522, a user or other entity can ascertain the geographic location of a particular mobile computing node 108, a bearing of the mobile computing node 108, and a view or image associated with a capture device of the mobile computing node 108. This information can assist a user or other entity in selecting which mobile computing node 108 is to stream video to the computing device 120 or other device used to obtain the streaming video. 
It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way.”), provide a user interface configured to allow a user to select any icon based on the displayed position and orientation (See at least Amento FIGS. 4-5D and Paragraph 92 “The map display 522 can be configured to enable a user or other entity to select a mobile computing node 108 from a list 122 of mobile computing nodes 108, which in the illustrated embodiment is provided within the map display 522.”), acquire, upon any icon being selected from among one or more icons displayed on the map via the user interface, a video captured by the vehicle corresponding to the selected icon, and autonomously play back the acquired video associated with a selected icon in response to the selection of the icon (See at least Amento FIGS. 4-5D and Paragraph 95 “As noted above, the screen display 500C also can include other options such as a UI control 528. Selection of the UI control 528 can cause the computing device 120 to obtain streaming video from a selected mobile computing node 108 (e.g., via selection of one of the mobile computing node location and bearing indicators 524). An example view of the streaming video is shown in FIG. 5D. Because additional or alternative controls can be included in the screen display 500C, it should be understood that the example embodiment shown in FIG. 5C is illustrative and therefore should not be construed as being limiting in any way”).

With respect to claim 8, Amento teaches a video management method to be executed by a video management apparatus capable of communicating with a terminal apparatus, the video management method comprising: display, an icon visualizing a shooting position and a shooting direction of a video captured by each vehicle on a map, within a predetermined area including a designated point on the map displayed on a screen of the terminal apparatus (See at least Amento FIGS. 
4-5D and Paragraph 2 “ In various embodiments, the functionality of a mobile computing node can be provided by a vehicle, a drone, an airplane, other types of vehicles, and/or computing systems thereof” | Paragraph 56 “As explained above with reference to FIG. 1 with reference to the video request 118, the request received in operation 202 can include data that specifies one or more user, vehicle, or other entity; one or more filters; one or more requirements; one or more geographic locations; and/or other parameters or the like associated with the video or images that are being requested by way of the request received in operation 202. Thus, for example, the request received in operation 202 can specific a particular user, a particular location, a particular vehicle, or the like” | Paragraphs 73-77 “The method 400 begins at operation 402. At operation 402, the computing device 120 can send a request for video. In some embodiments, the request sent in operation 402 can be sent to the server computer 102, though this is not necessarily the case. The request sent in operation 402 can be similar or even identical to the video request 118 illustrated and described above with reference to FIG. 1 … From operation 402, the method 400 can proceed to operation 404. At operation 404, the computing device 120 can receive a list of mobile computing nodes 108 such as, for example, the list 122 illustrated and described herein with reference to FIG. 1. In some embodiments, the list received in operation 404 can be received from the server computer 102, though this is not necessarily the case … As explained above, the list 122 received in operation 404 can include data that specifies one or more mobile computing nodes 108. In some other embodiments, the list 122 received in operation 404 can include a map display that shows the available mobile computing nodes 108 (that satisfy the requirements associated with the request). 
Thus, in some embodiments, operation 406 can correspond to the computing device 120 presenting a user interface via which a user or other entity can select one of multiple available mobile computing nodes 108. Some example UIs for presenting the list 122 are illustrated and described below with reference to FIGS. 5A-5D. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation 406, the method 400 can proceed to operation 408. At operation 408, the computing device 120 can obtain and send a selection such as the selection 124. The selection 124 obtained and/or provided in operation 406 can indicate a mobile computing node 108, from the list 122 of mobile computing nodes 108, that the requestor wants to obtain content (e.g., streaming video) from. Also, it should be understood that a user or other entity may specify more than one mobile computing node 108, e.g., a backup mobile computing node 108 can be selected in case a first or primary mobile computing node 108 becomes unavailable, moves out of a specified location, loses connectivity, or the like. It should be understood that multiple selections 124 may be provided, in some embodiments.” | Paragraph 93 “More particularly, the illustrated map display 522 shows three mobile computing nodes 108, as shown by the associated mobile computing node location and bearing indicators 524. As shown, the mobile computing node location and bearing indicators 524 have associated thumbnails 526. The thumbnails 526 show a recent or current image associated with a capture device of the mobile computing nodes 108 associated with the mobile computing node location and bearing indicators 524. 
Thus, with reference to the map display 522, a user or other entity can ascertain the geographic location of a particular mobile computing node 108, a bearing of the mobile computing node 108, and a view or image associated with a capture device of the mobile computing node 108. This information can assist a user or other entity in selecting which mobile computing node 108 is to stream video to the computing device 120 or other device used to obtain the streaming video. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way.”), provide a user interface configured to allow a user to select any icon based on the displayed position and orientation (See at least Amento FIGS. 4-5D and Paragraph 92 “The map display 522 can be configured to enable a user or other entity to select a mobile computing node 108 from a list 122 of mobile computing nodes 108, which in the illustrated embodiment is provided within the map display 522.”), acquire, upon any icon being selected from among one or more icons displayed on the map via the user interface, a video captured by the vehicle corresponding to the selected icon, and autonomously play back the acquired video associated with a selected icon in response to the selection of the icon (See at least Amento FIGS. 4-5D and Paragraph 95 “As noted above, the screen display 500C also can include other options such as a UI control 528. Selection of the UI control 528 can cause the computing device 120 to obtain streaming video from a selected mobile computing node 108 (e.g., via selection of one of the mobile computing node location and bearing indicators 524). An example view of the streaming video is shown in FIG. 5D. Because additional or alternative controls can be included in the screen display 500C, it should be understood that the example embodiment shown in FIG. 5C is illustrative and therefore should not be construed as being limiting in any way”). 
With respect to claim 15, Amento teaches a non-transitory computer readable medium storing a program configured to cause a video management apparatus capable of communicating with a terminal apparatus to execute operations, the operations comprising: display, an icon visualizing a shooting position and a shooting direction of a video captured by each vehicle on a map, within a predetermined area including a designated point on the map displayed on a screen of the terminal apparatus (See at least Amento FIGS. 4-5D and Paragraph 2 “ In various embodiments, the functionality of a mobile computing node can be provided by a vehicle, a drone, an airplane, other types of vehicles, and/or computing systems thereof” | Paragraph 56 “As explained above with reference to FIG. 1 with reference to the video request 118, the request received in operation 202 can include data that specifies one or more user, vehicle, or other entity; one or more filters; one or more requirements; one or more geographic locations; and/or other parameters or the like associated with the video or images that are being requested by way of the request received in operation 202. Thus, for example, the request received in operation 202 can specific a particular user, a particular location, a particular vehicle, or the like” | Paragraphs 73-77 “The method 400 begins at operation 402. At operation 402, the computing device 120 can send a request for video. In some embodiments, the request sent in operation 402 can be sent to the server computer 102, though this is not necessarily the case. The request sent in operation 402 can be similar or even identical to the video request 118 illustrated and described above with reference to FIG. 1 … From operation 402, the method 400 can proceed to operation 404. At operation 404, the computing device 120 can receive a list of mobile computing nodes 108 such as, for example, the list 122 illustrated and described herein with reference to FIG. 1. 
In some embodiments, the list received in operation 404 can be received from the server computer 102, though this is not necessarily the case … As explained above, the list 122 received in operation 404 can include data that specifies one or more mobile computing nodes 108. In some other embodiments, the list 122 received in operation 404 can include a map display that shows the available mobile computing nodes 108 (that satisfy the requirements associated with the request). Thus, in some embodiments, operation 406 can correspond to the computing device 120 presenting a user interface via which a user or other entity can select one of multiple available mobile computing nodes 108. Some example UIs for presenting the list 122 are illustrated and described below with reference to FIGS. 5A-5D. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation 406, the method 400 can proceed to operation 408. At operation 408, the computing device 120 can obtain and send a selection such as the selection 124. The selection 124 obtained and/or provided in operation 406 can indicate a mobile computing node 108, from the list 122 of mobile computing nodes 108, that the requestor wants to obtain content (e.g., streaming video) from. Also, it should be understood that a user or other entity may specify more than one mobile computing node 108, e.g., a backup mobile computing node 108 can be selected in case a first or primary mobile computing node 108 becomes unavailable, moves out of a specified location, loses connectivity, or the like. It should be understood that multiple selections 124 may be provided, in some embodiments.” | Paragraph 93 “More particularly, the illustrated map display 522 shows three mobile computing nodes 108, as shown by the associated mobile computing node location and bearing indicators 524. 
As shown, the mobile computing node location and bearing indicators 524 have associated thumbnails 526. The thumbnails 526 show a recent or current image associated with a capture device of the mobile computing nodes 108 associated with the mobile computing node location and bearing indicators 524. Thus, with reference to the map display 522, a user or other entity can ascertain the geographic location of a particular mobile computing node 108, a bearing of the mobile computing node 108, and a view or image associated with a capture device of the mobile computing node 108. This information can assist a user or other entity in selecting which mobile computing node 108 is to stream video to the computing device 120 or other device used to obtain the streaming video. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way.”), provide a user interface configured to allow a user to select any icon based on the displayed position and orientation (See at least Amento FIGS. 4-5D and Paragraph 92 “The map display 522 can be configured to enable a user or other entity to select a mobile computing node 108 from a list 122 of mobile computing nodes 108, which in the illustrated embodiment is provided within the map display 522.”), acquire, upon any icon being selected from among one or more icons displayed on the map via the user interface, a video captured by the vehicle corresponding to the selected icon, and autonomously play back the acquired video associated with a selected icon in response to the selection of the icon (See at least Amento FIGS. 4-5D and Paragraph 95 “As noted above, the screen display 500C also can include other options such as a UI control 528. Selection of the UI control 528 can cause the computing device 120 to obtain streaming video from a selected mobile computing node 108 (e.g., via selection of one of the mobile computing node location and bearing indicators 524). 
An example view of the streaming video is shown in FIG. 5D. Because additional or alternative controls can be included in the screen display 500C, it should be understood that the example embodiment shown in FIG. 5C is illustrative and therefore should not be construed as being limiting in any way”).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2-3, 9-10, and 16-17 are rejected under 35 U.S.C. 
103 as being unpatentable over Amento (US 20180343488 A1) (“Amento”) in view of Mutsumi (JP 2019215638 A) (“Mutsumi”) (Translation Attached).

With respect to claim 2, and similarly claims 9 and 16, Amento teaches that the controller is configured to display, on the map, the icon of each vehicle that stores a video in the predetermined area (See at least Amento FIGS. 4-5D and Paragraphs 92-95). Amento, however, fails to explicitly disclose that upon a target period being designated, the controller is configured to display, on the map, the icon of each vehicle that stores a video captured during the target period in the predetermined area.

Mutsumi, however, teaches a target period being designated (See at least Mutsumi FIG. 5 and Paragraphs 58-62 “FIG. 5 schematically illustrates an example of a flow of a process performed by the control device 200. FIG. 5 shows an example of processing from receiving the designation of the imaging target point to displaying the captured image. Each process illustrated in FIG. 5 may be executed mainly by a control unit included in control device 200. In step (step may be abbreviated as S) 102, when operation unit 110 receives designation of an imaging target point, target point acquisition unit 202 acquires the specified imaging target point. In S104, the request information transmitting unit 204 broadcasts request information including the identification information of the own vehicle to the other vehicles 100. When the response receiving unit 206 receives the first response information within a predetermined time after the transmission of the request information in response to the request information broadcasted in S104 (YES in S106), the process proceeds to S120, and when the first response information is not received (NO in S106), the process proceeds to S108. 
In S108, if the response receiving unit 206 receives the second response information within a predetermined time from the transmission of the request information (YES in S108), the process proceeds to S114. If not (NO in S108), the process proceeds to S110. In S110, the request information transmitting unit 208 transmits the request information to the vehicle management server 300, and the captured image receiving unit 210 receives the captured image of the imaging target point from the vehicle management server 300. In S112, the display control unit 212 causes the display unit 120 to display the captured image. In S114, the request information transmitting unit 208 transmits the request information to the vehicle 100 that has transmitted the second response information, and the captured image receiving unit 210 receives the captured image of the imaging target point from the vehicle 100. In S118, the display control unit 212 causes the display unit 120 to display the captured image.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus of Amento to include a target period being designated, as taught by Mutsumi as disclosed above, such that the controller is configured to display, on the map, the icon of each vehicle that stores a video captured in a selected target period in the predetermined area, in order to ensure an accurate display of an area (Mutsumi Abstract “To desirably achieve a technology that, when providing a picked-up image of an imaging target spot, can preferentially provide a newer picked-up image.”). 
With respect to claim 3, and similarly claims 10 and 17, Amento in view of Mutsumi teach that the controller is configured to display, on the screen of the terminal apparatus, information indicating a time at which the video captured in the predetermined area was captured for each vehicle that stores the video (See at least Mutsumi Paragraph 5 “The captured image receiving unit may be configured to determine whether there is no vehicle capable of capturing the image capturing target point, and if the vehicle cannot store the captured image capturing the captured image point, the captured image may not be received. The captured image of the imaging target point may be received from the image management server that manages the captured image captured by the vehicle. The display control unit may display imaging time information indicating a time at which the captured image was captured, corresponding to the captured image”).

Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Amento (US 20180343488 A1) (“Amento”) in view of Mutsumi (JP 2019215638 A) (“Mutsumi”) (Translation Attached) further in view of Tagami (JP 5922397 B2) (“Tagami”) (Translation Attached).

With respect to claim 4, and similarly claims 11 and 18, Amento in view of Mutsumi teach that the controller is configured to display, on the map, one or more vehicles (See at least Amento FIGS. 4-5D and Paragraphs 92-95). Amento in view of Mutsumi fail to explicitly disclose displaying, on the map, the icon indicating the position and the orientation of one or more vehicles that captured the video in the predetermined area during the target period by changing a color of the icon for different times. 
Tagami teaches displaying, on the map, the icon indicating the position and the orientation of one or more vehicles that captured the video in the predetermined area during the target period by changing a color of the icon for different times (See at least Tagami Paragraph 8 “ A program for displaying, the computer forming a video reproduction display area for displaying the reproduction video based on the video data and controlling a display state of the reproduction video according to a video reproduction operation. Means, reflecting the position of the spot on the traveling path shown in the reproduced video and the distance between the plurality of spots, according to a reference designating operation of directly or indirectly instructing the display position and the display interval on the reproduced video at least Position reference setting means for setting a position reference for setting the display position and the display interval on the reproduced video, and the front position based on the position reference A traveling image display program for operating the position indicator display means for displaying the position index on the display position on the reproduction image distance reflecting between the positions and the plurality of points of the point as. Here, the travel path refers to a road or a track on which a traveling vehicle travels. Moreover, the said point means the part (part) of the traveling path reflected in the imaging | video of the exterior of a traveling vehicle at least among the said reproduction | regeneration imaging | video” | Paragraph 17 “In this case, from the origin point on the travel path corresponding to the origin position on the reproduced video included in the position reference, the position index is a specific point on the travel path corresponding to the specific display position of the position index. 
It is preferable to accompany the distance display which shows the distance on the said reproduction | regeneration imaging | video up to. According to this, since the position index is accompanied by the distance display, the distance from the origin point at the specific point can be immediately grasped on the reproduced video. Here, the distance display is not limited to the case where the numerical value indicating the distance itself is displayed, as long as the magnitude of the distance can be recognized by the user as a result in some manner. The color or shape of the position index may be changed accordingly. Even in the case where the position index does not accompany the distance display, a sufficient effect can be obtained by, for example, a method in which a plurality of position indexes are arranged at equal intervals to form a scale-like display mode as a whole” | Paragraph 82 “In this embodiment, as shown in FIG. 39, position reference setting means P1i which sets a position reference according to a reference designation operation, and position index display means P1j which displays a position index superimposed on a reproduced video according to the position reference. And have. The position reference includes the origin position, the reference distance, a scale condition determined by the control position, and a shooting position related to the position and orientation of the camera, a shooting direction, and a shooting condition determined by the shooting angle of view.”). 
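The feature mapped to Tagami here — changing a color of the icon for different capture times — could be implemented along the following lines. This is an editor's sketch only; `icon_color` and the darker-for-older shading scheme are assumptions for illustration, and neither the application nor Tagami prescribes any particular color mapping:

```python
def icon_color(captured_at, period_start, period_end):
    """Map a capture time within the target period to an RGB tuple,
    shading older captures darker and newer captures brighter
    (an illustrative scheme, not one disclosed by the references)."""
    if not (period_start <= captured_at <= period_end):
        raise ValueError("capture time outside target period")
    span = period_end - period_start
    frac = (captured_at - period_start) / span if span else 1.0
    shade = int(round(64 + 191 * frac))  # 64 (oldest) .. 255 (newest)
    return (shade, shade, 255)  # hold a blue hue, vary brightness
```

For a period of 0 to 100, a video captured at time 0 yields the darkest shade (64, 64, 255) and one captured at time 100 the brightest (255, 255, 255), so icons for different times are visually distinguishable at a glance.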
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus of Amento in view of Mutsumi to include displaying, on the map, the icon indicating the position and the orientation of one or more vehicles that captured the video in the predetermined area during the target period by changing a color of the icon for different times, as taught by Tagami as disclosed above, in order to ensure accurate and easy-to-follow video playback (Tagami Paragraph 10 “As the above-mentioned position indicator, various indicators useful for grasping the display position and the display interval on the reproduced video as a result are widely included.”).

Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Amento (US 20180343488 A1) (“Amento”) in view of Nishio (US 20040145663 A1) (“Nishio”). With respect to claim 5, and similarly claims 12 and 19, Amento fails to explicitly disclose that the controller is configured to store, as a history, a video that has been played back a predetermined number of times. Nishio teaches that the controller is configured to store, as a history, a video that has been played back a predetermined number of times (See at least Nishio FIG. 5 and Paragraph 70 “The “degree standard settings” is a standard for changing the information indicating the image data on the image selection screen, and can be selected from among “time”, “frequency” and “time+frequency”. When “time” is selected, a length of shooting time and a length of reproduction (playback) time of the image data are determined as a degree standard and when “frequency” is selected, an amount of image data shot from the same position as a camera position or shot in the same object position, namely, shooting frequency and reproduction (playback) frequency of the image data are determined as a degree standard”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus of Amento to include that the controller is configured to store, as a history, a video that has been played back a predetermined number of times, as taught by Nishio as disclosed above, in order to ensure accurate records of video playback (Nishio Paragraph 8 “The first object of the present invention, conceived in view of above problems, is to provide an image reproducing device for facilitating a selection of an image.”).

Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Amento (US 20180343488 A1) (“Amento”) in view of Hubbell (US 12071075 B1) (“Hubbell”). With respect to claim 6, and similarly claims 13 and 20, Amento teaches that upon an icon being selected on the map, the controller is configured to acquire video corresponding to the icon and play back the acquired video on the screen (See at least Mutsumi FIG. 5 and Paragraphs 58-62 “FIG. 5 schematically illustrates an example of a flow of a process performed by the control device 200. FIG. 5 shows an example of processing from receiving the designation of the imaging target point to displaying the captured image. Each process illustrated in FIG. 5 may be executed mainly by a control unit included in control device 200. In step (step may be abbreviated as S) 102, when operation unit 110 receives designation of an imaging target point, target point acquisition unit 202 acquires the specified imaging target point. In S104, the request information transmitting unit 204 broadcasts request information including the identification information of the own vehicle to the other vehicles 100.
When the response receiving unit 206 receives the first response information within a predetermined time after the transmission of the request information in response to the request information broadcasted in S104 (YES in S106), the process proceeds to S120, and when the first response information is not received (S106). NO), and proceed to S108. In S108, if the response receiving unit 206 receives the second response information within a predetermined time from the transmission of the request information (YES in S108), the process proceeds to S114. If not (NO in S108), the process proceeds to S110. In S110, the request information transmitting unit 208 transmits the request information to the vehicle management server 300, and the captured image receiving unit 210 receives the captured image of the imaging target point from the vehicle management server 300. In S112, the display control unit 212 causes the display unit 120 to display the captured image. In S114, the request information transmitting unit 208 transmits the request information to the vehicle 100 that has transmitted the second response information, and the captured image receiving unit 210 receives the captured image of the imaging target point from the vehicle 100. In S118, the display control unit 212 causes the display unit 120 to display the captured image.”). Amento fails to explicitly disclose that upon two or more icons being selected upon the map, playing back each acquired video of each icon on the screen side by side. Hubbell teaches that upon two or more icons being selected upon the map, playing back each acquired video of each icon on the screen side by side (See at least Hubbell FIGS. 14-15 and Col. 16 “A fourth example of the present disclosure employing an integrated control panel 80 with touchscreen interface is shown in FIG. 14 . The video display 81 and touchscreen control panel are likewise integrated with the dash of the vehicle.
The control panel 80 has touchscreen control icons which allow the driver to adjust the reflective members of each SCM. In this embodiment, an additional SA option utilizes steering wheel sensors which input a realtime video feed from either Camera-A or Camera-B automatically when turning, thereby allowing the driver to maintain his focus on the particular driving maneuver.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus of Amento to include that upon two or more icons being selected upon the map, playing back each acquired video of each icon on the screen side by side, as taught by Hubbell as disclosed above, in order to ensure efficient video playback (Hubbell Abstract “A method for enhancing situational awareness in a transportation vehicle includes locating at least one camera on the vehicle”).

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Amento (US 20180343488 A1) (“Amento”) in view of Hubbell (US 12071075 B1) (“Hubbell”) further in view of Nishio (US 20040145663 A1) (“Nishio”). With respect to claim 7, and similarly claim 14, Amento in view of Hubbell fails to explicitly disclose that in a case in which two or more icons with different capture times are displayed for one vehicle, the controller is configured, upon one icon among the two or more icons being selected, to gray out icons other than the one icon. Nishio teaches that in a case in which two or more icons with different capture times are displayed for one vehicle, the controller is configured, upon one icon among the two or more icons being selected, to change the color of the icons other than the one icon (See at least Nishio FIG. 6 and Paragraphs 71-72 “FIG. 6 is a diagram showing an example of a tree diagram for an option relating to icon change attribute settings when “with degrees” is selected in FIG. 4.
Either “size”, “brightness” or the both can be selected in setting the “icon change attributes” and either “proportion,” “inverse proportion” or “constancy” can be selected for the degree standards. When “proportion” is selected, the size and the brightness of the icon are changed in proportion to the degree standard and when “inverse proportion” is selected, they are changed in inverse proportion to the degree standard. And when “constancy” is selected, the icons are displayed on the image selection screen with the size and the brightness set as constant, irrespective of the degree standard. Color or form may be inserted as an item under the icon change attributes.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus of Amento in view of Hubbell to include that in a case in which two or more icons with different capture times are displayed for one vehicle, the controller is configured, upon one icon among the two or more icons being selected, to change the color of the icons other than the one icon, as taught by Nishio as disclosed above, such that the icons are grayed out, in order to ensure accurate records of video selection (Nishio Paragraph 8 “The first object of the present invention, conceived in view of above problems, is to provide an image reproducing device for facilitating a selection of an image.”).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to IBRAHIM ABDOALATIF ALSOMAIRY whose telephone number is (571)272-5653. The examiner can normally be reached M-F 7:30-5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faris Almatrahi can be reached at 313-446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IBRAHIM ABDOALATIF ALSOMAIRY/
Examiner, Art Unit 3667

/KENNETH J MALKOWSKI/
Primary Examiner, Art Unit 3667

Prosecution Timeline

Nov 24, 2023
Application Filed
Sep 24, 2025
Non-Final Rejection — §102, §103
Dec 10, 2025
Examiner Interview Summary
Dec 10, 2025
Applicant Interview (Telephonic)
Dec 22, 2025
Response Filed
Jan 10, 2026
Final Rejection — §102, §103
Apr 06, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602044
VEHICLE CONTROL SYSTEM, VEHICLE CONTROL METHOD, AND VEHICLE CONTROL PROGRAM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12578728
AUTONOMOUS SNOW REMOVING MACHINE
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12426758
METHOD AND APPARATUS FOR CONTROLLING ROBOT, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
Granted Sep 30, 2025 (2y 5m to grant)
Patent 12313379
SYSTEM FOR NEUTRALISING A TARGET USING A DRONE AND A MISSILE
Granted May 27, 2025 (2y 5m to grant)
Patent 12265385
SYSTEMS, DEVICES, AND METHODS FOR MILLIMETER WAVE COMMUNICATION FOR UNMANNED AERIAL VEHICLES
Granted Apr 01, 2025 (2y 5m to grant)
Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 40%
With Interview: 49% (+8.4%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 82 resolved cases by this examiner. Grant probability derived from career allow rate.
