Prosecution Insights
Last updated: April 19, 2026
Application No. 18/873,744

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM

Non-Final OA: §101, §103, §112
Filed
Dec 11, 2024
Examiner
BRADY III, PATRICK MICHAEL
Art Unit
3665
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Honda Motor Co. Ltd.
OA Round
1 (Non-Final)
Grant Probability
56% (Moderate)
OA Rounds
1-2
To Grant
3y 2m
With Interview
99%

Examiner Intelligence

Career Allow Rate
56% (67 granted / 119 resolved; +4.3% vs TC avg)
Interview Lift
+44.1% (allowance rate of resolved cases with interview vs. without)
Avg Prosecution
3y 2m (typical timeline; 38 currently pending)
Total Applications
157 (career history, across all art units)

Statute-Specific Performance

§101
23.2% (-16.8% vs TC avg)
§103
52.5% (+12.5% vs TC avg)
§102
10.1% (-29.9% vs TC avg)
§112
11.5% (-28.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 119 resolved cases.
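As a quick sanity check on the statistics above, the figures shown on this page are internally consistent: the 56% career allow rate follows from 67 grants over 119 resolved cases, and every per-statute delta matches an implied Tech Center average of roughly 40%. A minimal illustrative sketch in Python (using only numbers shown on this page; the dashboard's actual methodology is not disclosed here):

```python
# Illustrative arithmetic check using only figures shown on this page.
# The dashboard's real computation is not disclosed; this is a sketch.

granted, resolved = 67, 119
career_rate = round(100 * granted / resolved, 1)  # 56.3, displayed as 56%

# Per-statute allowance rates from the chart above.
statute_rates = {"101": 23.2, "103": 52.5, "102": 10.1, "112": 11.5}
implied_tc_avg = 40.0  # each listed delta equals rate - 40.0

deltas = {s: round(r - implied_tc_avg, 1) for s, r in statute_rates.items()}

print(career_rate)  # 56.3
print(deltas)       # {'101': -16.8, '103': 12.5, '102': -29.9, '112': -28.5}
```

The single implied 40% baseline is an inference from the four deltas, not a figure the page states directly.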

Office Action

§101 §103 §112
DETAILED ACTION

This non-final action is in response to the application filed 11 December 2024.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Claims 1-12 are pending, having a filing date of 11 December 2024, claiming domestic benefit as the National Stage entry of PCT/JP2023/021897, filed 13 June 2023, and claiming foreign priority to Japanese Application Number JP 2022-094965, filed 13 June 2022. Claims 2-6 and 10-12 have been amended via preliminary amendment, filed 11 December 2024. Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. JP 2022-094965, filed on 13 June 2022.

Information Disclosure Statement

The information disclosure statement (IDS) submitted 11 December 2024 complies with 37 CFR 1.97. Accordingly, the IDS has been considered by the examiner. An initialed copy of the 1449 form is enclosed herewith.

Drawings

The drawings, filed 11 December 2024, are accepted by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 
112(f) is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action. 
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are “a recognition unit that recognizes” <see [0027] disclosing that the recognition unit 140, the prediction unit 150, the control unit 160, and the reception unit 170 are realized by, for example, allowing a hardware processor such as a central processing unit (CPU) to execute a program (software)> (claim 1); “a prediction unit that predicts” <see [0027]> (claim 1); and “a notification control unit that causes a notification device to notify” <see [0025] disclosing that the display unit 120 is, for example, a display device such as a touch panel or a liquid crystal display; [0026] disclosing that the voice output unit 130 is, for example, a speaker device. The voice output unit 130 outputs information on an object relating to the vicinity of the host vehicle M by voice output in accordance with the control by the control unit 160. The display unit 120 and the voice output unit 130 are an example of a "notification device."> (claim 1). Because these claim limitations are being interpreted under 35 U.S.C. 112(f) they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The “recognition unit” and the “prediction unit” are being interpreted per [0027] as a central processing unit. The “notification control unit” is being interpreted per [0025] as a touch panel or liquid crystal display, or a speaker device. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 
112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claim 4 is rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Claim 4 recites the limitation "the first predetermined area" in ln. 5. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. In January 2019 (updated October 2019), the USPTO released new examination guidelines setting forth a two-step inquiry for determining whether a claim is directed to non-statutory subject matter. 
According to the guidelines, a claim is directed to non-statutory subject matter if:
• STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
• STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
o STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
o STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
o STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

Using the two-step inquiry, it is clear that claims 1, 9, 10, 11 and 12 are directed toward non-statutory subject matter as shown below.

STEP 1: Do claims 1, 9, 10, 11 and 12 fall within one of the statutory categories? Yes, because claim 1 is directed toward a device, claim 9 is directed toward a method, and claims 10-12 are directed toward a non-transitory computer-readable storage having stored thereon a program, all of which fall within one of the statutory categories.

STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea? Yes, claims 1 and 9-12 are directed to abstract ideas. With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:
1. Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations;
2. 
Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
3. Mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).

As per claims 1 and 9-12, the device (claim 1), the method (claim 9), and the non-transitory computer-readable storage having a program stored thereon (claims 10-12) recite mental processes that can be performed in the mind and are, therefore, an abstract idea. In particular, claims 1 and 9-12 recite the following abstract ideas: “predicts a future trajectory of the object” (claims 1 and 9); “recognize an object included in image data obtained by capturing an image of a vicinity of a mobile body such that a future trajectory of the object is predicted” (claim 10); “recognize an object included in image data obtained by capturing an image of a vicinity of a two-wheel vehicle such that a future trajectory of the object is predicted” (claim 11); and “recognize an object included in image data obtained by capturing an image of a vicinity of a four-wheel vehicle such that a future trajectory of the object is predicted” (claim 12). These recitations merely consist of predicting the future trajectory of an object. This is equivalent to a person (observing the surroundings while operating the vehicle) predicting the future trajectory of an object. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. 
Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the ‘basic tools of scientific and technological work’ that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 ("‘[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work’" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same). As such, a person (observing the surroundings while operating the vehicle) predicts the future trajectory of an object. The mere nominal recitation that the future trajectory is predicted at the “prediction unit” (claim 1) does not take the limitation out of the mental process grouping.

STEP 2A (PRONG 2): Do the claims recite additional elements that integrate the judicial exception into a practical application? No, the claims do not recite additional elements that integrate the judicial exception into a practical application. 
With regard to STEP 2A (prong 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:
• an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
• an additional element that applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition;
• an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
• an additional element effects a transformation or reduction of a particular article to a different state or thing; and
• an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. 
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:
• an additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
• an additional element adds insignificant extra-solution activity to the judicial exception; and
• an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.

Claims 1 and 9-12 do not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application. Claims 1 and 9-12 further recite the following additional elements:
(claims 1 and 9) “recognizes an object included in image data obtained by capturing an image of a vicinity of a mobile body,” “notify a vehicle occupant in the mobile body of the presence of the object based on the future trajectory of the object”, and “changes a mode of notification by the notification device ...”;
(claim 10) “recognize an object included in image data obtained by capturing an image of a vicinity of a mobile body such that a future trajectory of the object is predicted”, “causing a notification device to notify a vehicle occupant in the mobile body of the presence of the object based on the future trajectory of the object” and “mode of notification by the notification device is changed between a case where the object is predicted to approach the mobile body from behind the mobile body and a case where the object is predicted to pass by a side of the mobile body”; 
(claim 11) “recognize an object included in image data obtained by capturing an image of a vicinity of a two-wheel vehicle such that a future trajectory of the object is predicted”, “causing a notification device to notify a vehicle occupant in the two-wheel vehicle of the presence of the object based on the future trajectory of the object”, “causing the computer to change a mode of notification by the notification device between a case where the object is predicted to approach the two-wheel vehicle from behind the two-wheel vehicle and a case where the object is predicted to pass by a side of the two-wheel vehicle”, and “causing the notification device to display a predetermined figure in a central part of the notification device in a case where the object is predicted to approach the two-wheel vehicle from behind the two-wheel vehicle and the object has entered a first predetermined area”; and
(claim 12) “causing a computer to recognize an object included in image data obtained by capturing an image of a vicinity of a two-wheel vehicle such that a future trajectory of the object is predicted”, “causing a notification device to notify a vehicle occupant in the two-wheel vehicle of the presence of the object based on the future trajectory of the object”, “causing the computer to change a mode of notification by the notification device between a case where the object is predicted to approach the two-wheel vehicle from behind the two-wheel vehicle and a case where the object is predicted to pass by a side of the two-wheel vehicle”, and “causing the notification device to display a predetermined figure in a central part of the notification device in a case where the object is predicted to approach the two-wheel vehicle from behind the two-wheel vehicle and the object has entered a first predetermined area”.

These additional elements further limit the abstract idea without integrating the abstract idea into a practical application or adding significantly more. 
In particular, the “recognizing an object …” step (claims 1 and 9-12) is recited at a high level of generality (i.e., as a general means of gathering an electronic representation of an area) and amounts to mere data gathering, a form of insignificant extra-solution activity added to the judicial exception per MPEP 2106.05(g), because the step characterizes pre-solution activity, such as an individual observing the surrounding vehicles. Further, the “notify a vehicle occupant ...”, “change a mode of notification ...” (claims 1 and 9-12) and “display a predetermined figure in a central part of the notification device” (claims 11 and 12) limitations are recited at a high level of generality (i.e., as a general means of presenting an electronic representation of an area) and amount to mere data output, a form of insignificant extra-solution activity added to the judicial exception per MPEP 2106.05(g), because the steps characterize post-solution activity. Claim 1 still further includes the additional elements “a recognition unit” and “a notification control unit”. These elements are not sufficient to amount to significantly more than the judicial exception because they fail to integrate the exception into a practical application. The mere inclusion of instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, is indicative that the judicial exception has not been integrated into a practical application. In the instant case, the system accomplishes recognition of the object in the actual image by “a recognition unit”, and notifies by “a notification control unit”, i.e. via computers. Thus, it is clear that the abstract idea is merely implemented on a computer, which is indicative of the abstract idea having not been integrated into a practical application. 
The “recognition unit” and the “notification control unit” merely describe how to generally “apply” the otherwise mental judgments in a generic or general purpose computing environment. The recognition unit and the notification control unit are recited at a high level of generality and merely automate the recognition and notifying steps.

STEP 2B: Do the claims recite additional elements that amount to significantly more than the judicial exception? No, claims 1 and 9-12 do not recite additional elements that amount to significantly more than the judicial exception. With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, examiners should continue to consider whether an additional element or combination of elements:
• adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
• simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.

Claims 1 and 9-12 do not recite any specific limitation or combination of limitations that are not well-understood, routine, conventional (WURC) activity in the field. Displaying data is a fundamental, i.e. WURC, activity performed by computers operating on data, such as the units recited in claim 1. Further, applicant’s specification does not provide any indication that the recognizing or notifying activities of the device are performed using anything other than a conventional computer. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. 
AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere performance of an action is a well-understood, routine, and conventional function when it is claimed in a merely generic manner. Further, the Federal Circuit in Trading Techs. Int’l v. IBG LLC, 921 F.3d 1084, 1093 (Fed. Cir. 2019), and Intellectual Ventures I LLC v. Erie Indemnity Co., 850 F.3d 1315, 1331 (Fed. Cir. 2017), for example, indicated that the mere displaying of data is a well-understood, routine, and conventional function. Thus, since claims 1 and 9-12: (a) are directed toward abstract ideas; (b) do not recite additional elements that integrate the judicial exception into a practical application; and (c) do not recite additional elements that amount to significantly more than the judicial exception, it is clear that claims 1 and 9-12 are directed to non-statutory subject matter.

Dependent claims 2-8 further limit the abstract idea without integrating the abstract idea into a practical application or adding significantly more. For example, the additional elements in claims 2-6 are further limitations that under their broadest reasonable interpretation are abstract using the analysis for independent claim 1. Further, the additional elements in claims 7 and 8 are further limitations that under their broadest reasonable interpretations further limit the abstract idea without integrating the abstract idea into a practical application or adding significantly more. As such, claims 1-12 are rejected as being drawn to an abstract idea without significantly more, and thus are ineligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 1, 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication Number 2019/0041849 to Kida et al. (hereafter Kida) in view of U.S. Patent Publication Number 2018/0272940 to Saeki et al. (hereafter Saeki).

As per claim 1, Kida discloses [a]n information processing device (see at least Kida, Abstract) comprising: a recognition unit that recognizes an object included in image data obtained by capturing an image of a vicinity of a mobile body (see at least Kida, [0059] disclosing that the camera system 20 includes the camera 21 and an image processing device 23. As shown in FIG. 
2, the camera 21 is arranged at the center in a width direction in a rear end of the own vehicle SV. Referring back to FIG. 1, the camera 21 is attached to an outer side of the own vehicle SV. .. The camera 21 photographs the scene to transmit an image (a camera image or camera image data) photographed by the camera 21 to the image processing device 23, every time a predetermined time period elapses; [0060] disclosing that the image processing device 23 selects/extracts an object whose type is coincident with one of predetermined types (a pedestrian, a vehicle, a motorcycle, a bicycle, and the like) from the camera image photographed by the camera 21. More specifically, the image processing device 23 stores an image feature amount of the object of each of the predetermined types as a matching pattern in advance. The image processing device 23 divides the camera image into local areas, each of which has a predetermined size, so as to calculate the image feature amount of each of the local areas.); a prediction unit that predicts a future trajectory of the object (see at least Kida, [0110] disclosing that the CPU 11 calculates/predicts a moving trajectory of the object based on the past locations/positions of the object. The CPU 11 calculates/predicts a moving direction of the object in relation to the own vehicle SV, based on the calculated moving trajectory of the object. Subsequently, the CPU 11 selects/extracts, as the obstacle(s) which has a probability (high probability) of colliding with the own vehicle S; [0114] disclosing that the CPU 11 predicts the “trajectory/path along which the point PL will move” as the predicted left travel path LEC, and predicts the “trajectory/path along which the point PR will move” as the predicted right travel path REC. 
If both of the values αL and αR are positive values, the CPU 11 determines the “object which has been in the predicted travel path area ECA and will intersect with the rear end area TA” or the “object which will be in the predicted travel path area ECA and will intersect with the rear end area TA”, as the object with probability of passing near the left side or the right side of the own vehicle SV.” Accordingly, the CPU 11 can select/extract, as the obstacle, the object with the probability of passing near the left side or the right side of the own vehicle SV) ... . But, Kida does not explicitly teach the following limitations taught in Saeki: a notification control unit that causes a notification device to notify a vehicle occupant in the mobile body of the presence of the object based on the future trajectory of the object (see at least Saeki, Abstract, disclosing a detecting unit 70 configured to perform moving object recognition and detect presence of a rear moving object in the first video data; and a display control unit 90 configured to display video clipped as the first area on a rearview monitor 140 that displays rear video of the vehicle, and display video clearly indicating presence of a detected rear moving object if the detecting unit 70 detects presence of the rear moving object), wherein the notification control unit changes a mode of notification by the notification device between a case where the object is predicted to approach the mobile body from behind the mobile body and a case where the object is predicted to pass by a side of the mobile body (see at least Saeki, Fig. 4, [0048] disclosing that FIG. 4 is a diagram illustrating an example of video data captured by the rear camera of the on-vehicle display system according to the first embodiment. 
The rear camera 110 is capable of capturing video in a wider area than the area displayed on the rearview monitor 140; however, the rear camera 110 clips a first area AC as an area that allows a driver of the vehicle 100 to appropriately recognize the rear side using the rearview monitor 140, and displays the first area AC on the rearview monitor 140; Fig. 10, [0101] disclosing that with regard to Fig. 10, it is assumed that a rear moving object V3 is moving from the first area AC to the blind spot area BL and a rear moving object V4 is receding from the vehicle 100 as displayed in first video data 110A1 and first video data 110A2. At Step S21, the display control unit determines that the rear moving objects V3 and V4 are detected in the first video data 110A1 (Yes at Step S21). Subsequently, the display control unit determines that the rear moving object V3 is moving from the first area AC to the blind spot area BL (Yes at Step S22). Then, it is determined that the rear moving object V3 is approaching (Yes at Step S23); [0123]; [0124]; [0128]; Fig. 30, [0178] disclosing that FIG. 30 is a diagram illustrating another example of video displayed on the rearview monitor, the right side monitor, and the left side monitor on the on-vehicle display system according to the fifth embodiment. The display control unit 90B causes the rearview monitor 140 to display rear video data 110C2B, the right side monitor 150 to display the right rear video data 110R2, and the left side monitor 160 to display the left rear video data 110L2. In this case, the rear moving object V1 has moved from the second area AL in the first video data 110A2, so that the rear moving object V1 is not displayed on the left side monitor 160). Kida and Saeki are analogous art to claim 1 because they are in the same field of recognizing an object included in image data. 
Kida relates to a monitor device for performing a support control to support driving of an own vehicle based on a camera image photographed by a camera for photographing an area around the own vehicle through a protection window (see at least Kida, [0001]). Saeki relates to an on-vehicle display device, system and method that displays data from a rear camera that is arranged on a rear part of the vehicle and configured to image a rear side of the vehicle (see at least Saeki, abstract, [0001]). Therefore, it would have been prima facie obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device, as disclosed in Kida, to provide the benefit of notifying a vehicle occupant in the mobile body of the presence of the object based on the future trajectory of the object, and changing a mode of notification between a case where the object is predicted to approach the mobile body from behind the mobile body and a case where the object is predicted to pass by a side of the mobile body, as disclosed in Saeki, with a reasonable expectation of success. Doing so would provide the benefit of improving safety by notifying the driver in the instance where vehicles are located in the blind spot (see at least Saeki, [0004]). As per claim 9, similar to claim 1, Kida discloses [a]n information processing method (see at least Kida, Abstract), wherein a computer recognizes an object included in image data obtained by capturing an image of a vicinity of a mobile body (see at least Kida, [0059]; [0060]), predicts a future trajectory of the object (see at least Kida, [0110]; [0114]) ... . 
But Kida does not explicitly teach the following limitations taught in Saeki: causes a notification device to notify a vehicle occupant in the mobile body of the presence of the object based on the future trajectory of the object (see at least Saeki, Abstract), and changes a mode of notification by the notification device between a case where the object is predicted to approach the mobile body from behind the mobile body and a case where the object is predicted to pass by a side of the mobile body (see at least Saeki, Fig. 4, [0048]; Fig. 10, [0101]; [0123]; [0124]; [0128]; Fig. 30, [0178]). Kida and Saeki are analogous art to claim 9 because they are in the same field of recognizing an object included in image data. Kida relates to a monitor device for performing a support control to support driving of an own vehicle based on a camera image photographed by a camera for photographing an area around the own vehicle through a protection window (see at least Kida, [0001]). Saeki relates to an on-vehicle display device, system and method that displays data from a rear camera that is arranged on a rear part of the vehicle and configured to image a rear side of the vehicle (see at least Saeki, abstract, [0001]). Therefore, it would have been prima facie obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device, as disclosed in Kida, to provide the benefit of notifying a vehicle occupant in the mobile body of the presence of the object based on the future trajectory of the object, and changing a mode of notification between a case where the object is predicted to approach the mobile body from behind the mobile body and a case where the object is predicted to pass by a side of the mobile body, as disclosed in Saeki, with a reasonable expectation of success.
Doing so would provide the benefit of improving safety by notifying the driver in the instance where vehicles are located in the blind spot (see at least Saeki, [0004]). As per claim 10, similar to claims 1 and 9, Kida discloses ... to execute: causing a computer to recognize an object included in image data obtained by capturing an image of a vicinity of a mobile body (see at least Kida, [0059]; [0060]) such that a future trajectory of the object is predicted (see at least Kida, [0110]; [0114]) ... . But Kida does not explicitly teach the following limitations taught in Saeki: [a] non-transitory computer-readable storage having stored thereon a program for causing a computer to execute (see at least Saeki, [0008] disclosing that a non-transitory storage medium stores a program according to one aspect for causing a computer serving as an on-vehicle display control device to execute steps of acquiring first video data from a rear camera that is arranged on a rear part of a vehicle and that is configured to image a rear side of the vehicle; Claim 9) causing a notification device to notify a vehicle occupant in the mobile body of the presence of the object based on the future trajectory of the object such that a mode of notification by the notification device is changed between a case where the object is predicted to approach the mobile body from behind the mobile body and a case where the object is predicted to pass by a side of the mobile body (see at least Saeki, Abstract, Fig. 4, [0048]; Fig. 10, [0101]; [0123]; [0124]; [0128]; Fig. 30, [0178]). Kida and Saeki are analogous art to claim 10 because they are in the same field of recognizing an object included in image data. Kida relates to a monitor device for performing a support control to support driving of an own vehicle based on a camera image photographed by a camera for photographing an area around the own vehicle through a protection window (see at least Kida, [0001]).
Saeki relates to an on-vehicle display device, system and method that displays data from a rear camera that is arranged on a rear part of the vehicle and configured to image a rear side of the vehicle (see at least Saeki, abstract, [0001]). Therefore, it would have been prima facie obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device, as disclosed in Kida, to provide the benefit of having a non-transitory computer-readable storage having stored thereon a program for causing a computer to execute instructions, notifying a vehicle occupant in the mobile body of the presence of the object based on the future trajectory of the object such that a mode of notification by the notification device is changed between a case where the object is predicted to approach the mobile body from behind the mobile body and a case where the object is predicted to pass by a side of the mobile body, as disclosed in Saeki, with a reasonable expectation of success. Doing so would provide the benefit of improving safety by notifying the driver in the instance where vehicles are located in the blind spot (see at least Saeki, [0004]). Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Kida and Saeki as applied to claim 1 above, and further in view of U.S. Patent Publication Number 2024/0054897 to Shoji et al. (hereafter Shoji). As per claim 2, the combination of Kida and Saeki discloses all of the limitations of claim 1 above. But, neither Kida nor Saeki explicitly teach the following limitations taught in Shoji: wherein the notification device is a voice output device (see at least Shoji, [0088] disclosing that the HMI 31 generates and outputs, for example, as the auditory information, information represented by sound, such as a voice guidance, a warning sound, and a warning message. 
Further, the HMI 31 generates and outputs, as the tactile information, information given to the tactile sense of an occupant by, for example, a force, a vibration, a movement), and the notification control unit causes the voice output device to output a warning sound in a case where the object is predicted to approach the mobile body from behind the mobile body and the object has entered a first predetermined area (see at least Shoji, [0207] disclosing that a comparison of a zone ZL41 to the left rear of the vehicle 1 in FIG. 16 and a zone ZR41 to the right rear of the vehicle 1 in FIG. 16 with a zone ZL51 to the left rear of the vehicle 1 in FIG. 17 and a zone ZR51 to the right rear of the vehicle 1 in FIG. 17 indicates that the zone ZL41 and the zone ZR41 are closer to the vehicle 1 than the zone ZL51 and the zone ZR51. Hence, in a case where an object is present within the zone ZL41 or the zone ZR41, the warning sound control section 251, for example, makes the warning sound high pitched (raised to a high frequency) as compared with a case where the object is present within the zone ZL51 or the zone ZR51). Kida, Saeki and Shoji are analogous art to claim 2 because they are in the same field of recognizing an object included in image data. Kida relates to a monitor device for performing a support control to support driving of an own vehicle based on a camera image photographed by a camera for photographing an area around the own vehicle through a protection window (see at least Kida, [0001]). Saeki relates to an on-vehicle display device, system and method that displays data from a rear camera that is arranged on a rear part of the vehicle and configured to image a rear side of the vehicle (see at least Saeki, abstract, [0001]). Shoji relates to an information processing device that performs detection processing for an object on a periphery of a mobile device (see at least Shoji, Abstract).
Therefore, it would have been prima facie obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device, as disclosed in Kida, as modified by Saeki, to provide the benefit of having the notification device be a voice output device and causing the voice output device to output a warning sound in a case where the object is predicted to approach the mobile body from behind the mobile body and the object has entered a first predetermined area, as disclosed in Shoji, with a reasonable expectation of success. Doing so would provide the benefit of improving safety by audibly notifying the driver in the instance where vehicles are located in various areas of the blind spot. As per claim 3, the combination of Kida and Saeki discloses all of the limitations of claim 1 above. But, neither Kida nor Saeki explicitly teach the following limitations taught in Shoji: wherein the notification device is a voice output device (see at least Shoji, [0088]), and the notification control unit outputs a warning sound indicating whether the side is left or right in a case where the object is predicted to pass by the side of the object and the mobile body has entered a second predetermined area (see at least Shoji, [0207]). Kida, Saeki and Shoji are analogous art to claim 3 because they are in the same field of recognizing an object included in image data. Kida relates to a monitor device for performing a support control to support driving of an own vehicle based on a camera image photographed by a camera for photographing an area around the own vehicle through a protection window (see at least Kida, [0001]). Saeki relates to an on-vehicle display device, system and method that displays data from a rear camera that is arranged on a rear part of the vehicle and configured to image a rear side of the vehicle (see at least Saeki, abstract, [0001]).
Shoji relates to an information processing device that performs detection processing for an object on a periphery of a mobile device (see at least Shoji, Abstract). Therefore, it would have been prima facie obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device, as disclosed in Kida, as modified by Saeki, to provide the benefit of having the notification device be a voice output device and outputting a warning sound indicating whether the side is left or right in a case where the object is predicted to pass by the side of the object and the mobile body has entered a second predetermined area, as disclosed in Shoji, with a reasonable expectation of success. Doing so would provide the benefit of improving safety by audibly notifying the driver in the instance where vehicles are located in various areas of the blind spot. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Kida and Saeki as applied to claim 1 above, and further in view of U.S. Patent Publication Number 2024/0140469 to Shimizu. As per claim 4, the combination of Kida and Saeki discloses all of the limitations of claim 1 above. But, neither Kida nor Saeki explicitly teach the following limitations taught in Shimizu: wherein the notification control unit outputs a warning sound indicating whether the side is left or right in a case where the object is predicted to pass by the side of the object and the mobile body has entered a second predetermined area (see at least Shimizu, Fig. 6, [0070] disclosing that, with regard to Fig. 6, the warning area 71L, which is the left rear warning area set to the left rear of the host vehicle 40, can also be set or changed similarly to the warning area 71R, which is the right rear warning area. The area setting part 35 linearly extends the lateral lines L0 to L12 shown in FIG. 
5 to the left of the traveling trajectory of the host vehicle 40 and sets points D0 to D12 and E0 to E12 on the lateral lines L0 to L12. On the lateral lines Li, the distances between the points Ai and points Di are all Y3, and the distances between the points Bi and Ei are all Y4. In FIG. 6, only i=0 to 7 are illustrated, and i=8 to 12 are omitted; Fig. 8, [0076] disclosing that if the warning area surrounded by the points G0 to G12 and H0 to H12 is maintained as shown in FIG. 8, the other vehicle 41 is detected within the warning area and a notification is issued. According to the embodiment described above, since the warning area can be changed based on the variation in curvature of the traveling trajectory of the host vehicle 40, even when the host vehicle 40 is stopped, the warning area can be appropriately narrowed as with the warning area 72L shown in FIG. 8, making it possible to prevent the other vehicle 41 from being detected and a notification being issued; Fig. 10, [0080] disclosing that as shown in FIG. 10, in addition to detection areas 80L and 80R that extend laterally to the left rear and right rear of the host vehicle 40, the detection region of the object detection part 33 may include a detection area 82C extending to the rear of the host vehicle 40 and detection areas 82L and 82R on the left and right of the detection area 82C that are also extending to the rear; Fig. 11, [0081] disclosing that as shown in FIG. 11, a warning area 81C may be set behind the host vehicle 40. As shown in FIG. 11, the rear center warning area 81C is preferably set in between the right rear warning area 81R and the left rear warning area 81L of the host vehicle 40.), the first predetermined area is an area that is present within a first distance from the mobile body (see at least Shimizu, Fig. 6, [0070]; Fig. 8, [0076]; Fig. 10, [0080]; Fig.
11, <showing distances D0-D6, where D1-D5 are interpreted as a first distance> [0081]), and the second predetermined area is an area that is present away from the mobile body by the first distance or more and within a second distance that is larger than the first distance (see at least Shimizu, Fig. 6, [0070]; Fig. 8, [0076]; Fig. 10, [0080]; Fig. 11, <showing distances D0-D6, where if D6 is interpreted as the second distance, and D1 is interpreted as the first distance, then the second distance is larger than the first> [0081]). Kida, Saeki and Shimizu are analogous art to claim 4 because they are in the same field of recognizing an object included in image data. Kida relates to a monitor device for performing a support control to support driving of an own vehicle based on a camera image photographed by a camera for photographing an area around the own vehicle through a protection window (see at least Kida, [0001]). Saeki relates to an on-vehicle display device, system and method that displays data from a rear camera that is arranged on a rear part of the vehicle and configured to image a rear side of the vehicle (see at least Saeki, abstract, [0001]). Shimizu relates to a driving assistance device that performs driving assistance based on information on objects detected around a vehicle (see at least Shimizu, [0002]).
Therefore, it would have been prima facie obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device, as disclosed in Kida, as modified by Saeki, to provide the benefit of outputting a warning sound indicating whether the side is left or right in a case where the object is predicted to pass by the side of the object and the mobile body has entered a second predetermined area, having the first predetermined area be an area that is present within a first distance from the mobile body and having the second predetermined area be an area that is present away from the mobile body by the first distance or more and within a second distance that is larger than the first distance, as disclosed in Shimizu, with a reasonable expectation of success. Doing so would provide the benefit of improving safety by notifying the driver in the instance where vehicles are located in various areas of the blind spot. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Kida and Saeki as applied to claim 1 above, and further in view of Shoji and U.S. Patent Publication Number 2023/0339494 to Hack et al. (hereafter Hack). As per claim 5, the combination of Kida and Saeki discloses all of the limitations of claim 1, as shown above. Kida further discloses the following limitations: wherein the notification device is a display device (see at least Kida, [0073] disclosing that when the time to collision TTC is equal to or shorter than a time threshold T1th for an alert control, this monitor device transmits the above display instruction signal to the display unit 30 and the above output instruction signal to the speaker 31 so as to perform the alert control for alerting the driver of the presence of the obstacle.
The alert control is one of support controls which support the driving by the driver), the information processing device further includes a reception unit that receives instruction information indicating whether the mobile body is a four-wheel vehicle or a two-wheel vehicle (see at least Kida, [0103] disclosing that CPU 11 calculates a turning radius of the own vehicle SV based on “the vehicle velocity Vs of the own vehicle SV and the yaw rate Yr” included in the vehicle state information acquired at Step 615. Thereafter, the CPU 11 predicts, as the predicted travel path RCR, a travel path along which “the center point PO (refer to FIG. 2) of a wheel axis connecting a rear left wheel and a rear right wheel” will move, based on the calculated turning radius. When the yaw rate Yr is generated (nonzero), the CPU 11 predicts an arc travel path as the predicted travel path RC <interpreted as instruction information indicating the mobile object is a four-wheeled vehicle>) ... . Saeki further discloses the following limitation: ... (1) ... the notification control unit causes the display device to display a predetermined figure in a central part of the display device in a case where the object is predicted to approach the mobile body from behind the mobile body (see at least Saeki, Fig. 4, [0048]; Fig. 10, [0101]; [0123]; [0124]; [0128]; Fig. 30, [0178]) ... (2) ... . But, neither Kida nor Saeki explicitly teach the following limitation taught in Hack: (1) in a case where the reception unit receives instruction information indicating that the mobile body is a two-wheel vehicle (see at least Hack, [0057] disclosing that the alternative trajectory 134 is checked for its drivability with the aid of a mathematical single-track model of motorcycle 100. In this way it can be ensured that motorcycle 100 is able to travel alternative trajectory 134 without radical changes in velocity and/or radical changes in direction and without stability loss) ... .
But, neither Kida, Saeki nor Hack explicitly teach the following limitation taught in Shoji: (2) display a predetermined figure in a central part of the display device in a case where the object is predicted to approach the mobile body from behind the mobile body and the object has entered a first predetermined area (see at least Shoji, see claims 2 and 3, [0207]). Kida, Saeki, Shoji and Hack are analogous art to claim 5 because they are in the same field of recognizing an object included in image data. Kida relates to a monitor device for performing a support control to support driving of an own vehicle based on a camera image photographed by a camera for photographing an area around the own vehicle through a protection window (see at least Kida, [0001]). Saeki relates to an on-vehicle display device, system and method that displays data from a rear camera that is arranged on a rear part of the vehicle and configured to image a rear side of the vehicle (see at least Saeki, abstract, [0001]). Shoji relates to an information processing device that performs detection processing for an object on a periphery of a mobile device (see at least Shoji, Abstract). Hack relates to a method for acquiring an environment of the motorcycle and free spaces in the environment (see Hack, Abstract). Therefore, it would have been prima facie obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device, as disclosed in Kida, as modified by Saeki, to provide the benefit of receiving instruction information indicating that the mobile body is a two-wheel vehicle, as disclosed in Hack, with a reasonable expectation of success.
It would further be prima facie obvious to modify the device, as disclosed in Kida, as modified by Saeki and Hack, to provide the benefit of displaying a predetermined figure in a central part of the display device in a case where the object is predicted to approach the mobile body from behind the mobile body and the object has entered a first predetermined area, as disclosed in Shoji, with a reasonable expectation of success. Doing so would provide the benefit of improving safety by notifying the rider of a two-wheel vehicle in the instance where vehicles are located in various areas of the blind spot. Claims 6, 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Kida and Saeki as applied to claim 1 above, and further in view of Shoji and U.S. Patent Publication Number 2023/0146403 to Jun et al. (hereafter Jun). As per claim 6, the combination of Kida and Saeki discloses all of the limitations of claim 1, as shown above. Kida further discloses the following limitations: wherein the notification device is a display device (as in claim 5, see at least Kida, [0073]), the information processing device further includes a reception unit that receives instruction information indicating whether the mobile body is a four-wheel vehicle or a two-wheel vehicle (as seen in claim 5, see at least Kida, [0103]), ... ... (1) .... But, neither Kida nor Saeki explicitly teach the following limitation taught in Jun: (1) in a case where the reception unit receives instruction information indicating that the mobile body is a four-wheel vehicle, the notification control unit causes the display device to dynamically display a predetermined figure diagonally upward from a lower left corner portion or a lower right corner portion of the display device (see at least Jun, Fig. 24, [0410] disclosing that as illustrated in FIG.
24, if the direction in which the preset input of the third type is a first direction (e.g., right), a first image 2200b may be received from a camera (Camera B) present at a position opposite to the first direction and output it on the display unit, and may reduce and output the screen 2100 being outputted on the display at the position (left side) opposite to the first direction; Fig. 25, [0413] disclosing that as illustrated in FIG. 25, if the direction in which the preset input of the third type is applied is a second direction (e.g., left), a second image 2200C may be received from a camera (Camera C) present at a position opposite to the second direction and output it on the display unit, and may reduce and output the screen 2100 being outputted at a position (right) opposite to the first direction) ... , ... (2) ... . But, neither Kida, Saeki nor Jun explicitly teach the following limitation taught in Shoji: (2) which corresponds to a direction in which the object has entered, in a case where the object is predicted to pass by the side of the mobile body and the object has entered a second predetermined area (as seen in claim 3, see at least Shoji, [0207]). Kida, Saeki, Shoji and Jun are analogous art to claim 6 because they are in the same field of recognizing an object included in image data. Kida relates to a monitor device for performing a support control to support driving of an own vehicle based on a camera image photographed by a camera for photographing an area around the own vehicle through a protection window (see at least Kida, [0001]). Saeki relates to an on-vehicle display device, system and method that displays data from a rear camera that is arranged on a rear part of the vehicle and configured to image a rear side of the vehicle (see at least Saeki, abstract, [0001]). Shoji relates to an information processing device that performs detection processing for an object on a periphery of a mobile device (see at least Shoji, Abstract).
Jun relates to a display control device that controls a vehicle display (see Jun, [0001]). Therefore, it would have been prima facie obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device, as disclosed in Kida, as modified by Saeki, to provide the benefit of displaying a predetermined figure diagonally upward from a lower left corner portion or a lower right corner portion of the display device, as disclosed in Jun, with a reasonable expectation of success. It would further be prima facie obvious to modify the device, as disclosed in Kida, as modified by Saeki and Jun, to provide the benefit of having the display correspond to a direction in which the object has entered, in a case where the object is predicted to pass by the side of the mobile body and the object has entered a second predetermined area, as disclosed in Shoji, with a reasonable expectation of success. Doing so would provide the benefit of improving safety by notifying the driver of a four-wheel vehicle in the instance where vehicles are located in various areas of the blind spot. As per claim 7, the combination of Kida, Saeki, Shoji and Jun discloses all of the limitations of claim 6, as shown above. Saeki further discloses the following limitation: wherein the notification control unit increases a speed at which the predetermined figure is dynamically displayed as a relative speed of the object with respect to a speed of the mobile body increases (see at least Saeki, [0213] disclosing that at Step SB11 in the flowchart illustrated in Fig. 26, the display control unit 90B may determine whether a distance between the rear moving object V and the vehicle 100, a relative speed of the rear moving object V and the vehicle 100, and a moving state, such as a moving direction, of the rear moving object V meet predetermined conditions in addition to determining whether the rear moving object V is detected in the first video data 110A.
More specifically, for example, when the detecting unit 70 detects that the distance between the rear moving object V and the vehicle 100 is equal to or shorter than a predetermined distance, the display control unit 90B may determine that the moving state satisfies a predetermined condition and perform the processes at Steps SB12 to SB20). As per claim 8, the combination of Kida, Saeki, Shoji and Jun discloses all of the limitations of claim 6, as shown above. Saeki further discloses the following limitation: wherein the notification control unit increases a size of the predetermined figure as the object approaches the mobile body (see at least Saeki, [0063] disclosing that recognizing unit 71 recognizes whether a rear moving object V1 is located close to or away from the vehicle 100 based on a change in the size of the rear moving object V1 in each of the frames in the first video data 110A. The detecting unit 70 outputs a detection result to the display control unit 90). Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Kida in view of Saeki, Hack and Shoji. As per claim 11, Kida discloses the following limitations: ... (1) ... to execute: causing a computer to recognize an object included in image data obtained by capturing an image (see at least Kida, [0059]; [0060]) ... (2) ... vehicle such that a future trajectory of the object is predicted (see at least Kida, [0110]; [0114]) ... (3) ... ;... (4) ... ; ... (5) ... ; ... (6) ... .
But, Kida does not teach the following limitations taught in Saeki: (1) [a] non-transitory computer-readable storage having stored thereon a program for causing a computer (as in claims 9 and 10, see at least Saeki, [0008]), (3) causing a notification device to notify a vehicle occupant in the two-wheel vehicle of the presence of the object based on the future trajectory of the object (as in claim 9, see at least Saeki, Abstract); (4) causing the computer to change a mode of notification by the notification device between a case where the object is predicted to approach the two-wheel vehicle from behind the two-wheel vehicle and a case where the object is predicted to pass by a side of the two-wheel vehicle (as in claim 9, see at least Saeki, Abstract, Fig. 4, [0048]; Fig. 10, [0101]; [0123]; [0124]; [0128]; Fig. 30, [0178]); and (5) causing the notification device to display a predetermined figure in a central part of the notification device in a case where the object is predicted to approach the two-wheel vehicle from behind (as in claim 5, see at least Saeki, Fig. 4, [0048]; Fig. 10, [0101]; [0123]; [0124]; [0128]; Fig. 30, [0178]) ... . But, neither Kida nor Saeki explicitly teach the following limitation taught in Hack: (2) capturing an image of a vicinity of a two-wheel (as in claim 5, see at least Hack, [0044] disclosing that Camera 104 acquires a section of an environment 106 of motorcycle 100 and images it in an item of environment information 108; [0057]) ... . But, neither Kida, Saeki nor Hack explicitly teach the following limitation taught in Shoji: (6) display a predetermined figure in a central part of the notification device in a case where the object is predicted to approach the two-wheel vehicle from behind the two-wheel vehicle and the object has entered a first predetermined area (as in claims 2, 3 and 5, see at least Shoji, [0207]).
Kida, Saeki, Shoji and Hack are analogous art to claim 11 because they are in the same field of recognizing an object included in image data. Kida relates to a monitor device for performing a support control to support driving of an own vehicle based on a camera image photographed by a camera for photographing an area around the own vehicle through a protection window (see at least Kida, [0001]). Saeki relates to an on-vehicle display device, system and method that displays data from a rear camera that is arranged on a rear part of the vehicle and configured to image a rear side of the vehicle (see at least Saeki, abstract, [0001]). Shoji relates to an information processing device that performs detection processing for an object on a periphery of a mobile device (see at least Shoji, Abstract). Hack relates to a method for acquiring an environment of the motorcycle and free spaces in the environment (see Hack, Abstract). Therefore, it would have been prima facie obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device, as disclosed in Kida, to provide the benefit of (1) having a non-transitory computer-readable storage having stored thereon a program for causing a computer to execute the steps, (3) notifying a vehicle occupant in the two-wheel vehicle of the presence of the object based on the future trajectory of the object, (4) changing a mode of notification by the notification device between a case where the object is predicted to approach the two-wheel vehicle from behind the two-wheel vehicle and a case where the object is predicted to pass by a side of the two-wheel vehicle, and (5) displaying a predetermined figure in a central part of the notification device in a case where the object is predicted to approach the two-wheel vehicle from behind, as disclosed by Saeki, with a reasonable expectation of success.
It would have further been obvious to modify the device, as disclosed in Kida, as modified by Saeki, to provide the benefit of (2) capturing an image of a vicinity of a two-wheel, as disclosed in Hack, with a reasonable expectation of success. And it would have been still further obvious to modify the device, as disclosed in Kida, as modified by Saeki and Hack, to provide the benefit of (6) displaying a predetermined figure in a central part of the display device in a case where the object is predicted to approach the mobile body from behind the mobile body and the object has entered a first predetermined area, as disclosed in Shoji, with a reasonable expectation of success. Doing so would provide the benefit of improving safety by notifying the rider of a two-wheel vehicle in the instance where vehicles are located in various areas of the blind spot. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Kida, Saeki, Shoji and Jun. As per claim 12, similar to claims 1, 9 and 10, Kida discloses ... (1) ... to execute: causing a computer to recognize an object included in image data obtained by capturing an image of a vicinity of a four-wheel vehicle (see at least Kida, [0059]; [0060]) such that a future trajectory of the object is predicted (see at least Kida, [0110]; [0114]) ... , and a four-wheeled vehicle (see at least Kida, [0103] disclosing that CPU 11 calculates a turning radius of the own vehicle SV based on “the vehicle velocity Vs of the own vehicle SV and the yaw rate Yr” included in the vehicle state information acquired at Step 615. Thereafter, the CPU 11 predicts, as the predicted travel path RCR, a travel path along which “the center point PO (refer to FIG.
2) of a wheel axis connecting a rear left wheel and a rear right wheel” will move, based on the calculated turning radius. When the yaw rate Yr is generated (nonzero), the CPU 11 predicts an arc travel path as the predicted travel path RC) ... (2) ... , ... (3) ... , ... (4) ... , ... (5) ... .

But Kida does not explicitly teach the following limitations taught in Saeki: (1) [a] non-transitory computer-readable storage having stored thereon a program for causing a computer (as in claim 9, see at least Saeki, [0008]; Claim 9); (2) causing a notification device to notify a vehicle occupant in the four-wheel vehicle of the presence of the object based on the future trajectory of the object (as in claim 1, see at least Saeki, Abstract); (3) causing the computer to change a mode of notification by the notification device between a case where the object is predicted to approach the four-wheel vehicle from behind the four-wheel vehicle and a case where the object is predicted to pass by a side of the four-wheel vehicle (as in claim 1, see at least Saeki, Fig. 4, [0048]; Fig. 10, [0101]; [0123]; [0124]; [0128]; Fig. 30, [0178]).

But neither Kida nor Saeki explicitly teaches the following limitation taught in Jun: (4) causing the notification device to dynamically display a predetermined figure diagonally upward from a lower left corner portion or a lower right corner portion of the notification device, which corresponds to a direction in which the object has entered (as in claim 6, see at least Jun, Fig. 24, [0410]; Fig. 25, [0413]) ... .

But none of Kida, Saeki, or Jun explicitly teaches the following limitation taught in Shoji: (5) in a case where the object is predicted to pass by the side of the four-wheel vehicle and the object has entered a second predetermined area (as in claim 6, see at least Shoji, [0207]).

Kida, Saeki, Shoji and Jun are analogous art to claim 12 because they are in the same field of recognizing an object included in image data.
Kida relates to a monitor device for performing a support control to support driving of an own vehicle based on a camera image photographed by a camera for photographing an area around the own vehicle through a protection window (see at least Kida, [0001]). Saeki relates to an on-vehicle display device, system and method that displays data from a rear camera that is arranged on a rear part of the vehicle and configured to image a rear side of the vehicle (see at least Saeki, Abstract, [0001]). Shoji relates to an information processing device that performs detection processing for an object on a periphery of a mobile device (see at least Shoji, Abstract). Jun relates to a display control device that controls a vehicle display (see Jun, [0001]).

Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device, as disclosed in Kida, to provide (1) a non-transitory computer-readable storage having stored thereon a program for causing a computer to execute the steps, (2) notifying a vehicle occupant in the four-wheel vehicle of the presence of the object based on the future trajectory of the object and (3) changing a mode of notification by the notification device between a case where the object is predicted to approach the four-wheel vehicle from behind the four-wheel vehicle and a case where the object is predicted to pass by a side of the four-wheel vehicle, as disclosed in Saeki, with a reasonable expectation of success. It would have been further obvious to have modified the device, as disclosed in Kida, as modified by Saeki, to provide the benefit of (4) dynamically displaying a predetermined figure diagonally upward from a lower left corner portion or a lower right corner portion of the notification device, which corresponds to a direction in which the object has entered, as disclosed in Jun, with a reasonable expectation of success.
It would have been still further prima facie obvious to modify the device, as disclosed in Kida, as modified by Saeki and Jun, to provide the benefit of (5) having the display correspond to a direction in which the object has entered, in a case where the object is predicted to pass by the side of the mobile body and the object has entered a second predetermined area, as disclosed in Shoji, with a reasonable expectation of success. Doing so would have the benefit of improving safety by notifying the driver of a four-wheel vehicle in the instance where vehicles are located in various areas of the blind spot.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: U.S. Patent Publication Number 2018/0288320 to Melick et al. (hereafter Melick); see Fig. 4; [0042] disclosing that a prediction system can receive the state data from the perception system and predict one or more future locations for each object based on such state data; [0077] disclosing that the plurality of cameras are mounted and positioned, relative to the LIDAR system and the autonomous vehicle 402, to provide camera fields of view 404, 406, 408, 410, and 412 around the periphery of the autonomous vehicle 402. The LIDAR device is configured to generate LIDAR sweeps for use in detecting the location of objects around the autonomous vehicle. As further illustrated in FIG. 4, the cameras are positioned to create a plurality of field of view overlaps between the camera fields of view.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PATRICK M. BRADY III whose telephone number is (571) 272-7458. The examiner can normally be reached Monday - Friday, 8:00 am - 5:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Helal Algahaim, can be reached at (571) 270-5227. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

PATRICK M. BRADY III
Examiner
Art Unit 3666

/PATRICK M BRADY/
Examiner, Art Unit 3666

/HELAL A ALGAHAIM/
SPE, Art Unit 3645
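The travel-path prediction the examiner quotes from Kida (a turning radius computed from vehicle velocity Vs and yaw rate Yr, yielding an arc when the yaw rate is nonzero and a straight line otherwise) can be illustrated with a minimal sketch. This is not Kida's implementation: the function name and the constant-speed, constant-yaw-rate unicycle model are illustrative assumptions only.

```python
import math

def predicted_travel_path(v, yaw_rate, horizon=3.0, dt=0.1):
    """Integrate a constant-speed, constant-yaw-rate motion model.

    With a nonzero yaw rate the tracked center point traces an arc of
    turning radius R = v / yaw_rate; with a zero yaw rate it traces a
    straight line, mirroring the two cases described in the Kida quote.
    """
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for _ in range(round(horizon / dt)):
        # Step along the current heading, then rotate by the yaw rate.
        x += v * math.cos(heading) * dt
        y += v * math.sin(heading) * dt
        heading += yaw_rate * dt
        path.append((x, y))
    return path
```

For example, `predicted_travel_path(10.0, 0.0)` yields a straight path along the x-axis, while `predicted_travel_path(10.0, 0.5)` yields points lying approximately on a circle of radius 10.0 / 0.5 = 20 m, i.e. the arc travel path.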

Prosecution Timeline

Dec 11, 2024
Application Filed
Feb 06, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594992
VEHICLE STEERING CONTROL DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12591236
REMOTE SUPPORT SYSTEM AND REMOTE SUPPORT METHOD
2y 5m to grant Granted Mar 31, 2026
Patent 12589734
METHOD FOR DEALING WITH OBSTACLES IN AN INDUSTRIAL TRUCK
2y 5m to grant Granted Mar 31, 2026
Patent 12583517
VEHICLE STEERING CONTROL DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12577755
WORK MACHINE AND CONTROL SYSTEM
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
56%
Grant Probability
99%
With Interview (+44.1%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 119 resolved cases by this examiner. Grant probability derived from career allow rate.
