Prosecution Insights
Last updated: April 19, 2026
Application No. 18/587,142

METHOD FOR DISPLAYING CONTENT OF MOBILITY BY IDENTIFICATION OF RIDING TARGET, AND APPARATUS IMPLEMENTING THE SAME

Status: Non-Final OA (§103, §112)

Filed: Feb 26, 2024
Examiner: GUO, XILIN
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Motov Co. Ltd.
OA Round: 1 (Non-Final)

Grant probability: 82% (Favorable)
Predicted OA rounds: 1-2
Predicted time to grant: 2y 5m
Grant probability with interview: 99%

Examiner Intelligence

Career allow rate: 82% (374 granted / 456 resolved), +20.0% vs TC average (above average)
Interview lift: +17.4% allowance rate among resolved cases with an interview (strong)
Typical timeline: 2y 5m average prosecution
Currently pending: 18 applications
Career history: 474 total applications across all art units
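The card's headline rate is just the ratio of granted to resolved cases. A quick sketch (illustrative only; the helper name is hypothetical, not part of the report) reproduces it from the figures above:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved applications."""
    return 100.0 * granted / resolved

# Figures from the card above: 374 granted out of 456 resolved.
print(round(allow_rate(374, 456)))  # 82
```

Note that the 18 currently pending applications (474 total minus 456 resolved) are excluded from the denominator, which counts only resolved cases.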

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 12.8% (-27.2% vs TC avg)
§112: 19.0% (-21.0% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 456 resolved cases.
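Each row pairs the examiner's per-statute rate with an offset from the Tech Center average, so the implied baseline can be recovered by subtraction. A small sketch (variable names are ours, not the report's) checks that the four rows are mutually consistent:

```python
# (examiner rate %, offset vs TC average %) per statute, from the rows above.
rows = {
    "101": (7.6, -32.4),
    "103": (56.3, +16.3),
    "102": (12.8, -27.2),
    "112": (19.0, -21.0),
}

# Implied TC average = examiner rate minus the reported offset.
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rows.items()}
print(tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

All four rows back out the same 40.0% baseline, consistent with the chart drawing a single black line for the Tech Center average estimate.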

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.

Claim 1 recites “A method for displaying content of a mobility by identification of a riding target, performed by a display device provided in the mobility, the method comprising: ....”. Further, the claim recites “... displaying content related to a riding target on a screen of the display device when the riding target registered by the user is identified using an image sensor mounted on the mobility ...”. The issue is that persons of ordinary skill in the art reading the specification are not able to understand how to distinguish the two recitations of “a riding target”. Therefore, the examiner deems the claim indefinite as it fails to particularly point out and distinctly claim what Applicant regards as the invention, and the claim is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
Dependent claims 2-11 are rejected because they depend upon independent claim 1.

Dependent claim 11 depends upon independent claim 1 and recites “... executing an automatic dialing to the terminal of the riding target when the application for providing the service of the mobility is not installed on the terminal of the riding target ...”. However, claims 1 and 11 fail to describe “the application”. Therefore, the examiner deems the claim indefinite as it fails to particularly point out and distinctly claim what Applicant regards as the invention, and the claim is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-7, 9, 12, 14-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over TSUKAMOTO et al. (U.S. Patent Application Publication 2023/0294739 A1) in view of WELLBORN et al. (U.S. Patent Application Publication 2017/0213308 A1).

Regarding claim 1, TSUKAMOTO discloses a method for displaying content of a mobility by identification of a riding target, performed by a display device provided in the mobility, the method comprising: determining whether the mobility dispatched according to a call request of a user (FIG.
1; paragraph [0035], the vehicle 100 can start traveling from a first stop position toward a position where the user 130 rides the vehicle in accordance with utterance information from the communication device 120 or the like. As will be described later, the vehicle 100 acquires the utterance information of the user transmitted from the communication device 120 via a network 140; paragraph [0047], FIG. 3 is a block diagram of a control system of the vehicle 100. The vehicle 100 includes a control unit (ECU) 30 ...; paragraph [0052], a software configuration for the stop-position determination processing in the control unit 30 will be described with reference to FIG. 4 ...; paragraph [0065], FIG. 5A schematically illustrates a state where the user 130 calls the vehicle 100 stopped at a standby location by utterance in an area where the vehicle 100 can travel) has entered within a critical distance (Paragraph [0087], in S707, the control unit 30 determines whether the vehicle approaches the stop position. For example, the control unit 30 acquires current position information of the vehicle 100, and determines whether the position information is within a predetermined distance from the latitude and longitude determined as the stop position) with respect to a location specified by the user (Paragraph [0082], in S701, the control unit 30 receives the utterance information of the pick-up request and the position information of the user from the communication device 120 ...); and displaying content related to a riding target (FIG. 5B; paragraphs [0066]-[0067], the system status is indicated by visual information understanding 520 indicating a processing result of visual information ...
In the visual information understanding 520, a display 524 of a recognized target is displayed in captured image information 523) when the riding target registered by the user (Paragraph [0056], a user data acquisition unit 413 acquires the utterance information and position information transmitted from the communication device 120. The user data acquisition unit 413 may store the acquired utterance information and position information in a database 403. As will be described later, the utterance information acquired by the user data acquisition unit 413 is input to a learned machine learning model in order to estimate the user's intention) is identified using an image sensor mounted on the mobility (FIG. 2A; paragraph [0045], each of the detection units 15 to 17 is an imaging device that captures an image of the surroundings of the vehicle 100 ...; FIG. 8A; paragraph [0090], in S801, the control unit 30 performs object recognition processing on the image acquired by the detection unit 15 ...), when it is determined that the mobility has entered within the critical distance (Paragraphs [0087]-[0088], in S707, the control unit 30 determines whether the vehicle approaches the stop position. For example, the control unit 30 acquires current position information of the vehicle 100, and determines whether the position information is within a predetermined distance from the latitude and longitude determined as the stop position. When the current position of the vehicle is within the predetermined distance from the stop position, the control unit 30 determines that the vehicle approaches the stop position and advances the processing to S708 ... In S708, the control unit 30 executes stop position adjustment processing using the relative position. Details of the stop position adjustment processing will be described later with reference to FIG. 8).
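The "critical distance" limitation mapped above (TSUKAMOTO's S707 check, paragraph [0087]) reduces to a geofence test: compare the vehicle's current position with the stop position and trigger the identification and display steps inside a threshold. A minimal sketch with hypothetical names and a flat-earth distance approximation (not Applicant's or TSUKAMOTO's actual implementation):

```python
import math

EARTH_RADIUS_M = 6_371_000

def within_critical_distance(vehicle, stop, critical_m):
    """True when the vehicle is within critical_m metres of the stop
    position. Positions are (lat, lon) pairs in degrees; the
    equirectangular approximation is adequate at pickup-zone scale."""
    (lat1, lon1), (lat2, lon2) = vehicle, stop
    mean_lat = math.radians((lat1 + lat2) / 2)
    dx = math.radians(lon2 - lon1) * math.cos(mean_lat) * EARTH_RADIUS_M
    dy = math.radians(lat2 - lat1) * EARTH_RADIUS_M
    return math.hypot(dx, dy) <= critical_m

# About 111 m of latitude separation, inside a 200 m fence:
print(within_critical_distance((37.5665, 126.9780), (37.5675, 126.9780), 200))  # True
```

When the fence test passes, the mapped claim step would proceed to riding-target identification and content display.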
However, TSUKAMOTO does not specifically disclose a screen of the display device, or wherein the riding target and the user are different. In addition, WELLBORN discloses (Paragraphs [0016]-[0017], FIG. 1 is a simplified block diagram of an exemplary embodiment of an operating environment 100 ... The illustrated embodiment of the operating environment 100 includes, without limitation: the transportation system 102; at least one user device 104 ... The transportation system 102 may also include one or more backend server systems, which may be cloud-based, network-based, or resident at the particular campus or geographical location serviced by the transportation system 102. The backend system can communicate with the user devices 104 operated by passengers to schedule rides, dispatch the vehicles 103 ...; paragraph [0033], FIG. 3 is a flow chart that illustrates an exemplary embodiment of a passenger pickup and drop-off process 300 ...) a screen of the display device (Paragraphs [0023]-[0024], FIG. 2 is a block diagram of an exemplary embodiment of a hardware platform 200 suitable for use in the operating environment 100 ... the hardware platform 200 includes a display element 214 ...; paragraph [0031], the display element 214 is suitably configured to enable the hardware platform 200 to render and display various screens ... When the hardware platform 200 is implemented onboard a vehicle 103, the display element 214 can be integrated in an instrument panel, an instrument cluster, a head-up display, or the like); wherein the riding target and the user are different (Paragraph [0034], the process 300 receives and processes ride (pickup) requests for multiple and different passengers or parties of an autonomous vehicle based transportation system (task 302) ...
a ride request can be created and sent from a user device 104, either by or on behalf of the passenger).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the pick-up request method taught by TSUKAMOTO to incorporate the teachings of WELLBORN, and to apply the transportation system for riding requests taught by WELLBORN, to have a display screen integrated in a head-up display within a vehicle and to provide the riding request for a passenger by another user. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify TSUKAMOTO according to the relied-upon teachings of WELLBORN to obtain the invention as specified in the claim.

Regarding claim 3, the combination of TSUKAMOTO in view of WELLBORN discloses everything claimed as applied above (see claim 1), and TSUKAMOTO further discloses wherein the displaying of the content related to the riding target on the screen of the display device (FIG. 5B; paragraphs [0066]-[0067], the system status is indicated by visual information understanding 520 indicating a processing result of visual information ... In the visual information understanding 520, a display 524 of a recognized target is displayed in captured image information 523) when the riding target registered by the user is identified includes scanning objects located on roads and sidewalks (Paragraph [0071], FIG. 5D schematically illustrates an example of the system status in a state in which the interaction illustrated in FIG. 5C has progressed between the vehicle 100 and the user 130. The image information 523 indicates a recognition result 541 of recognizing a person waving a hand and a recognition result 540 of recognizing a vending machine as a target) using a first camera mounted on a front of the mobility (Paragraph [0042], FIG.
2B illustrates an internal configuration of the vehicle 100 ...; paragraph [0046], the two detection units 15 are arranged on front portions of the vehicle 100 in a state of being spaced apart from each other in a Y direction, and mainly detect targets in front of the vehicle 100) and a second camera mounted on a side surface of the mobility (Paragraph [0046], the detection units 16 are arranged on a left side portion and a right side portion of the vehicle 100, respectively, and mainly detect targets on sides of the vehicle 100).

Regarding claim 4, the combination of TSUKAMOTO in view of WELLBORN discloses everything claimed as applied above (see claim 3), and TSUKAMOTO further discloses wherein the displaying of the content related to the riding target on the screen of the display device when the riding target registered by the user is identified includes: scanning the objects using the first camera when the mobility has entered within a first critical distance with respect to the location specified by the user (Paragraph [0042], FIG. 2B illustrates an internal configuration of the vehicle 100 ...; paragraph [0046], the two detection units 15 are arranged on front portions of the vehicle 100 in a state of being spaced apart from each other in a Y direction, and mainly detect targets in front of the vehicle 100; paragraph [0045], the vehicle 100 includes detection units 15 that detect targets around the vehicle 100 ... The vehicle 100 can acquire a position (hereinafter, referred to as relative position) of a specific person or a specific target viewed from the coordinate system of vehicle 100 based on the image information obtained by the detection unit.
The relative position can be indicated as, for example, a position of 10 m in front); and scanning the objects using the second camera when the mobility has entered within a second critical distance with respect to the location specified by the user (Paragraph [0046], the detection units 16 are arranged on a left side portion and a right side portion of the vehicle 100, respectively, and mainly detect targets on sides of the vehicle 100; paragraph [0045], the vehicle 100 includes detection units 17 that detect targets around the vehicle 100 ... The vehicle 100 can acquire a position (hereinafter, referred to as relative position) of a specific person or a specific target viewed from the coordinate system of vehicle 100 based on the image information obtained by the detection unit. The relative position can be indicated as, for example, a position of 1 m on the left), the second critical distance is shorter than the first critical distance (Paragraph [0045], a position of 1 m on the left and a position of 10 m in front), and the second camera is a camera facing a sidewalk side (Paragraph [0046], the detection units 16 are arranged on a left side portion and a right side portion of the vehicle 100).

Regarding claim 5, the combination of TSUKAMOTO in view of WELLBORN discloses everything claimed as applied above (see claim 1), and TSUKAMOTO further discloses wherein the displaying of the content related to the riding target on the screen of the display device (FIG. 5B; paragraphs [0066]-[0067], the system status is indicated by visual information understanding 520 indicating a processing result of visual information ... In the visual information understanding 520, a display 524 of a recognized target is displayed in captured image information 523) when the riding target registered by the user (Paragraph [0056], a user data acquisition unit 413 acquires the utterance information and position information transmitted from the communication device 120.
The user data acquisition unit 413 may store the acquired utterance information and position information in a database 403. As will be described later, the utterance information acquired by the user data acquisition unit 413 is input to a learned machine learning model in order to estimate the user's intention) is identified includes identifying an object corresponding to an object image (Paragraph [0059], the target may include a pedestrian, a signboard, a sign, equipment installed outdoors such as a vending machine, building components such as a window and an entrance, a road, a vehicle, a two-wheeled vehicle, and the like included in the utterance information) and text information registered by the user (Paragraph [0056], user data acquisition unit 413 acquires the utterance information and position information transmitted from the communication device 120 ... However, information (instruction information) including the user's instruction is not limited to voice information, and may be other information including the user's intention such as text information) among the objects scanned using the image sensor, as the riding target (Paragraph [0040], the place related to the visual mark includes, for example, a name of a target that can be identified from an image. The vehicle 100 receives, from the communication device 120, the utterance information (e.g., “Stop in front of the vending machine”) including a location related to a visual mark), and the text information includes information about an appearance and clothing of the riding target (Paragraph [0070], FIG. 5C illustrates a situation in which the user has approached to such an extent that the vehicle 100 can confirm the user 130 by the image information ... “wave your hand”...“wearing red clothes” ...; paragraph [0071], FIG. 5D schematically illustrates an example of the system status in a state in which the interaction illustrated in FIG. 5C has progressed between the vehicle 100 and the user 130.
The image information 523 indicates a recognition result 541 of recognizing a person waving a hand and a recognition result 540 of recognizing a vending machine as a target).

Regarding claim 6, the combination of TSUKAMOTO in view of WELLBORN discloses everything claimed as applied above (see claim 5), and TSUKAMOTO further discloses wherein the identifying of the object corresponding to the object image (Paragraph [0059], the target may include a pedestrian, a signboard, a sign, equipment installed outdoors such as a vending machine, building components such as a window and an entrance, a road, a vehicle, a two-wheeled vehicle, and the like included in the utterance information) and text information registered by the user (FIGS. 1 and 4; paragraph [0056], user data acquisition unit 413 acquires the utterance information and position information transmitted from the communication device 120 ... However, information (instruction information) including the user's instruction is not limited to voice information, and may be other information including the user's intention such as text information) among the objects scanned using the image sensor (FIG. 2A; paragraph [0045], each of the detection units 15 to 17 is an imaging device that captures an image of the surroundings of the vehicle 100 ...; FIG. 8A; paragraph [0090], in S801, the control unit 30 performs object recognition processing on the image acquired by the detection unit 15 ...), as the riding target includes outputting an identification result of the riding target (Paragraph [0071], FIG. 5D schematically illustrates an example of the system status in a state in which the interaction illustrated in FIG. 5C has progressed between the vehicle 100 and the user 130.
The image information 523 indicates a recognition result 541 of recognizing a person waving a hand and a recognition result 540 of recognizing a vending machine as a target) by inputting the object image and text information into an object recognition model based on an artificial intelligence algorithm (Paragraph [0056], a user data acquisition unit 413 acquires the utterance information and position information transmitted from the communication device 120. The user data acquisition unit 413 may store the acquired utterance information and position information in a database 403. As will be described later, the utterance information acquired by the user data acquisition unit 413 is input to a learned machine learning model in order to estimate the user's intention).

Regarding claim 7, the combination of TSUKAMOTO in view of WELLBORN discloses everything claimed as applied above (see claim 1), and TSUKAMOTO further discloses wherein the displaying of the contents related to the riding target on the screen of the display device (FIG. 5B; paragraphs [0066]-[0067], the system status is indicated by visual information understanding 520 indicating a processing result of visual information ... In the visual information understanding 520, a display 524 of a recognized target is displayed in captured image information 523; paragraph [0071], FIG. 5D schematically illustrates an example of the system status in a state in which the interaction illustrated in FIG. 5C has progressed between the vehicle 100 and the user 130. The image information 523 indicates a recognition result 541 of recognizing a person waving a hand and a recognition result 540 of recognizing a vending machine as a target) includes displaying riding target display information preset by the user on the screen in response to the riding target being identified (Paragraph [0058], a pick-up request ... the utterance information such as “in front of the vending machine”; paragraph [0070], ...
waving his or her hand ...).

Regarding claim 9, the combination of TSUKAMOTO in view of WELLBORN discloses everything claimed as applied above (see claim 1), and TSUKAMOTO further discloses wherein the displaying of the contents related to the riding target on the screen of the display device includes: transmitting arrival notification information to a terminal of the user in response to the riding target being identified (Paragraph [0070], FIG. 5C illustrates a situation in which the user has approached to such an extent that the vehicle 100 can confirm the user 130 by the image information ... Since the relative position is determined using the image information, the vehicle 100 performs an interaction for confirming whether a person in an image is the user 130. For example, the vehicle 100 transmits utterance information 530 of “I am close now. Could you please wave your hand?” ...); and transmitting riding notification information to the terminal of the user in response to the riding of the riding target being completed (Paragraph [0074], FIG. 5G illustrates an example of interaction between the vehicle 100 and the user 130 when the vehicle 100 approaches the position of the vending machine 500 ... when the vehicle approaches the relative position, the vehicle transmits utterance information 570 of “Sorry to keep you waiting” to the communication device 120. The vehicle 100 stops at the currently set stop position. When the vehicle 100 receives utterance information 571 of “Thank you” by the user 130, the vehicle 100 can determine that allocation is completed).

Regarding claim 12, TSUKAMOTO discloses a display device provided in a mobility, the display device comprising: a network interface configured to communicate with an external device (FIG.
1; paragraph [0035], the vehicle 100 can start traveling from a first stop position toward a position where the user 130 rides the vehicle in accordance with utterance information from the communication device 120 or the like. As will be described later, the vehicle 100 acquires the utterance information of the user transmitted from the communication device 120 via a network 140); a display configured to display an image (Paragraphs [0066]-[0067], FIG. 5B schematically illustrates an example of a system status after the vehicle 100 acquires the utterance information 510 of “Can you come soon?” and performs the voice information processing and the like. As an example, the system status is indicated by visual information understanding 520 indicating a processing result of visual information ... In the visual information understanding 520, a display 524 of a recognized target is displayed in captured image information 523); one or more processors (Paragraphs [0047]-[0048], FIG. 3 is a block diagram of a control system of the vehicle 100. The vehicle 100 includes a control unit (ECU) 30 ... the control unit 30 may further include, as a processor, a graphical processing unit (GPU)); a memory configured to load a computer program executed by the processor (Paragraph [0047], the control unit 30 includes a processor represented by a central processing unit (CPU), a storage device such as a semiconductor memory, an interface with an external device, and the like. In the storage device, programs executed by the processor, data used for processing by the processor, and the like are stored); and a storage configured to store the computer program, wherein the computer program includes instructions for performing (Paragraph [0052], a software configuration for the stop-position determination processing in the control unit 30 will be described with reference to FIG. 4. 
The present software configuration is implemented by the control unit 30 executing a program stored in a non-transitory computer-readable storage medium): an operation of determining whether the mobility dispatched according to a call request of a user (Paragraph [0047], FIG. 3 is a block diagram of a control system of the vehicle 100. The vehicle 100 includes a control unit (ECU) 30 ...; paragraph [0052], a software configuration for the stop-position determination processing in the control unit 30 will be described with reference to FIG. 4 ...; paragraph [0065], FIG. 5A schematically illustrates a state where the user 130 calls the vehicle 100 stopped at a standby location by utterance in an area where the vehicle 100 can travel) has entered within a critical distance (Paragraph [0087], in S707, the control unit 30 determines whether the vehicle approaches the stop position. For example, the control unit 30 acquires current position information of the vehicle 100, and determines whether the position information is within a predetermined distance from the latitude and longitude determined as the stop position) with respect to a location specified by the user (Paragraph [0082], in S701, the control unit 30 receives the utterance information of the pick-up request and the position information of the user from the communication device 120 ...); and an operation of displaying content related to a riding target (FIG. 5B; paragraphs [0066]-[0067], the system status is indicated by visual information understanding 520 indicating a processing result of visual information ... In the visual information understanding 520, a display 524 of a recognized target is displayed in captured image information 523) when the riding target registered by the user (Paragraph [0056], a user data acquisition unit 413 acquires the utterance information and position information transmitted from the communication device 120.
The user data acquisition unit 413 may store the acquired utterance information and position information in a database 403. As will be described later, the utterance information acquired by the user data acquisition unit 413 is input to a learned machine learning model in order to estimate the user's intention) is identified using an image sensor mounted on the mobility (FIG. 2A; paragraph [0045], each of the detection units 15 to 17 is an imaging device that captures an image of the surroundings of the vehicle 100 ...; FIG. 8A; paragraph [0090], in S801, the control unit 30 performs object recognition processing on the image acquired by the detection unit 15 ...), when it is determined that the mobility has entered within the critical distance (Paragraphs [0087]-[0088], in S707, the control unit 30 determines whether the vehicle approaches the stop position. For example, the control unit 30 acquires current position information of the vehicle 100, and determines whether the position information is within a predetermined distance from the latitude and longitude determined as the stop position. When the current position of the vehicle is within the predetermined distance from the stop position, the control unit 30 determines that the vehicle approaches the stop position and advances the processing to S708 ... In S708, the control unit 30 executes stop position adjustment processing using the relative position. Details of the stop position adjustment processing will be described later with reference to FIG. 8).

However, TSUKAMOTO does not specifically disclose a screen of the display device, or that the riding target and the user are different. In addition, WELLBORN discloses (Paragraphs [0016]-[0017], FIG. 1 is a simplified block diagram of an exemplary embodiment of an operating environment 100 ... The illustrated embodiment of the operating environment 100 includes, without limitation: the transportation system 102; at least one user device 104 ...
The transportation system 102 may also include one or more backend server systems, which may be cloud-based, network-based, or resident at the particular campus or geographical location serviced by the transportation system 102. The backend system can communicate with the user devices 104 operated by passengers to schedule rides, dispatch the vehicles 103 ...; paragraph [0033], FIG. 3 is a flow chart that illustrates an exemplary embodiment of a passenger pickup and drop-off process 300 ...) a screen of the display device (Paragraphs [0023]-[0024], FIG. 2 is a block diagram of an exemplary embodiment of a hardware platform 200 suitable for use in the operating environment 100 ... the hardware platform 200 includes a display element 214 ...; paragraph [0031], the display element 214 is suitably configured to enable the hardware platform 200 to render and display various screens ... When the hardware platform 200 is implemented onboard a vehicle 103, the display element 214 can be integrated in an instrument panel, an instrument cluster, a head-up display, or the like); the riding target and the user are different (Paragraph [0034], the process 300 receives and processes ride (pickup) requests for multiple and different passengers or parties of an autonomous vehicle based transportation system (task 302) ... a ride request can be created and sent from a user device 104, either by or on behalf of the passenger).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the pick-up request method taught by TSUKAMOTO to incorporate the teachings of WELLBORN, and to apply the transportation system for riding requests taught by WELLBORN, to have a display screen integrated in a head-up display within a vehicle and to provide the riding request for a passenger by another user. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify TSUKAMOTO according to the relied-upon teachings of WELLBORN to obtain the invention as specified in the claim.

Regarding claim 14, the combination of TSUKAMOTO in view of WELLBORN discloses everything claimed as applied above (see claim 12), and TSUKAMOTO further discloses wherein the operation of displaying of the content related to the riding target on the screen of the display device (FIG. 5B; paragraphs [0066]-[0067], the system status is indicated by visual information understanding 520 indicating a processing result of visual information ... In the visual information understanding 520, a display 524 of a recognized target is displayed in captured image information 523) when the riding target registered by the user is identified includes an operation of scanning objects located on roads and sidewalks (Paragraph [0071], FIG. 5D schematically illustrates an example of the system status in a state in which the interaction illustrated in FIG. 5C has progressed between the vehicle 100 and the user 130. The image information 523 indicates a recognition result 541 of recognizing a person waving a hand and a recognition result 540 of recognizing a vending machine as a target) using a first camera mounted on a front of the mobility (Paragraph [0042], FIG.
2B illustrates an internal configuration of the vehicle 100 ...; paragraph [0046], the two detection units 15 are arranged on front portions of the vehicle 100 in a state of being spaced apart from each other in a Y direction, and mainly detect targets in front of the vehicle 100) and a second camera mounted on a side surface of the mobility (Paragraph [0046], the detection units 16 are arranged on a left side portion and a right side portion of the vehicle 100, respectively, and mainly detect targets on sides of the vehicle 100).

Regarding claim 15, the combination of TSUKAMOTO in view of WELLBORN discloses everything claimed as applied above (see claim 14), and TSUKAMOTO further discloses wherein the operation of displaying of the content related to the riding target on the screen of the display device when the riding target registered by the user is identified includes: an operation of scanning the objects using the first camera when the mobility has entered within a first critical distance with respect to the location specified by the user (Paragraph [0042], FIG. 2B illustrates an internal configuration of the vehicle 100 ...; paragraph [0046], the two detection units 15 are arranged on front portions of the vehicle 100 in a state of being spaced apart from each other in a Y direction, and mainly detect targets in front of the vehicle 100; paragraph [0045], the vehicle 100 includes detection units 15 that detect targets around the vehicle 100 ... The vehicle 100 can acquire a position (hereinafter, referred to as relative position) of a specific person or a specific target viewed from the coordinate system of vehicle 100 based on the image information obtained by the detection unit. 
The relative position can be indicated as, for example, a position of 10 m in front); and an operation of scanning the objects using the second camera when the mobility has entered within a second critical distance with respect to the location specified by the user (Paragraph [0046], the detection units 16 are arranged on a left side portion and a right side portion of the vehicle 100, respectively, and mainly detect targets on sides of the vehicle 100; paragraph [0045], the vehicle 100 includes detection units 17 that detect targets around the vehicle 100 ... The vehicle 100 can acquire a position (hereinafter, referred to as relative position) of a specific person or a specific target viewed from the coordinate system of vehicle 100 based on the image information obtained by the detection unit. The relative position can be indicated as, for example, a position of 1 m on the left), the second critical distance is shorter than the first critical distance (Paragraph [0045], a position of 1 m on the left and a position of 10 m in front), and the second camera is a camera facing a sidewalk side (Paragraph [0046], the detection units 16 are arranged on a left side portion and a right side portion of the vehicle 100).

Regarding claim 16, the combination of TSUKAMOTO in view of WELLBORN discloses everything claimed as applied above (see claim 12), and TSUKAMOTO further discloses wherein the operation of displaying of the content related to the riding target on the screen of the display device (FIG. 5B; paragraphs [0066]-[0067], the system status is indicated by visual information understanding 520 indicating a processing result of visual information ... 
In the visual information understanding 520, a display 524 of a recognized target is displayed in captured image information 523) when the riding target registered by the user (Paragraph [0056], a user data acquisition unit 413 acquires the utterance information and position information transmitted from the communication device 120. The user data acquisition unit 413 may store the acquired utterance information and position information in a database 403. As will be described later, the utterance information acquired by the user data acquisition unit 413 is input to a learned machine learning model in order to estimate the user's intention) is identified includes an operation of identifying an object corresponding to an object image (Paragraph [0059], the target may include a pedestrian, a signboard, a sign, equipment installed outdoors such as a vending machine, building components such as a window and an entrance, a road, a vehicle, a two-wheeled vehicle, and the like included in the utterance information) and text information registered by the user (Paragraph [0056], user data acquisition unit 413 acquires the utterance information and position information transmitted from the communication device 120 ... However, information (instruction information) including the user's instruction is not limited to voice information, and may be other information including the user's intention such as text information) among the objects scanned using the image sensor, as the riding target (Paragraph [0040], the place related to the visual mark includes, for example, a name of a target that can be identified from an image. The vehicle 100 receives, from the communication device 120, the utterance information (e.g., “Stop in front of the vending machine”) including a location related to a visual mark), and the text information includes information about an appearance and clothing of the riding target (Paragraph [0070], FIG. 
5C illustrates a situation in which the user has approached to such an extent that the vehicle 100 can confirm the user 130 by the image information ... “wave your hand”...“wearing red clothes” ...; paragraph [0071], FIG. 5D schematically illustrates an example of the system status in a state in which the interaction illustrated in FIG. 5C has progressed between the vehicle 100 and the user 130. The image information 523 indicates a recognition result 541 of recognizing a person waving a hand and a recognition result 540 of recognizing a vending machine as a target).

Regarding claim 17, the combination of TSUKAMOTO in view of WELLBORN discloses everything claimed as applied above (see claim 16), and TSUKAMOTO further discloses wherein the operation of identifying of the object corresponding to the object image (Paragraph [0059], the target may include a pedestrian, a signboard, a sign, equipment installed outdoors such as a vending machine, building components such as a window and an entrance, a road, a vehicle, a two-wheeled vehicle, and the like included in the utterance information) and text information registered by the user (FIGS. 1 and 4; paragraph [0056], user data acquisition unit 413 acquires the utterance information and position information transmitted from the communication device 120 ... However, information (instruction information) including the user's instruction is not limited to voice information, and may be other information including the user's intention such as text information) among the objects scanned using the image sensor (FIG. 2A; paragraph [0045], each of the detection units 15 to 17 is an imaging device that captures an image of the surroundings of the vehicle 100 ...; FIG. 
8A; paragraph [0090], in S801, the control unit 30 performs object recognition processing on the image acquired by the detection unit 15 ...), as the riding target includes an operation of outputting an identification result of the riding target (Paragraph [0071], FIG. 5D schematically illustrates an example of the system status in a state in which the interaction illustrated in FIG. 5C has progressed between the vehicle 100 and the user 130. The image information 523 indicates a recognition result 541 of recognizing a person waving a hand and a recognition result 540 of recognizing a vending machine as a target) by inputting the object image and text information into an object recognition model based on an artificial intelligence algorithm (Paragraph [0056], a user data acquisition unit 413 acquires the utterance information and position information transmitted from the communication device 120. The user data acquisition unit 413 may store the acquired utterance information and position information in a database 403. As will be described later, the utterance information acquired by the user data acquisition unit 413 is input to a learned machine learning model in order to estimate the user's intention).

Regarding claim 18, the combination of TSUKAMOTO in view of WELLBORN discloses everything claimed as applied above (see claim 12), and TSUKAMOTO further discloses wherein the operation of displaying of the contents related to the riding target on the screen of the display device (FIG. 5B; paragraphs [0066]-[0067], the system status is indicated by visual information understanding 520 indicating a processing result of visual information ... In the visual information understanding 520, a display 524 of a recognized target is displayed in captured image information 523; paragraph [0071], FIG. 5D schematically illustrates an example of the system status in a state in which the interaction illustrated in FIG. 5C has progressed between the vehicle 100 and the user 130. 
The image information 523 indicates a recognition result 541 of recognizing a person waving a hand and a recognition result 540 of recognizing a vending machine as a target) includes an operation of displaying riding target display information preset by the user on the screen in response to the riding target being identified (Paragraph [0058], a pick-up request ... the utterance information such as “in front of the vending machine”; paragraph [0070], ... waving his or her hand ...).

Regarding claim 20, the combination of TSUKAMOTO in view of WELLBORN discloses everything claimed as applied above (see claim 12), and TSUKAMOTO further discloses wherein the operation of displaying of the contents related to the riding target on the screen of the display device includes: an operation of transmitting arrival notification information to a terminal of the user in response to the riding target being identified (Paragraph [0070], FIG. 5C illustrates a situation in which the user has approached to such an extent that the vehicle 100 can confirm the user 130 by the image information ... Since the relative position is determined using the image information, the vehicle 100 performs an interaction for confirming whether a person in an image is the user 130. For example, the vehicle 100 transmits utterance information 530 of “I am close now. Could you please wave your hand?” ...); and an operation of transmitting riding notification information to the terminal of the user in response to the riding of the riding target being completed (Paragraph [0074], FIG. 5G illustrates an example of interaction between the vehicle 100 and the user 130 when the vehicle 100 approaches the position of the vending machine 500 ... when the vehicle approaches the relative position, the vehicle transmits utterance information 570 of “Sorry to keep you waiting” to the communication device 120. The vehicle 100 stops at the currently set stop position. 
When the vehicle 100 receives utterance information 571 of “Thank you” by the user 130, the vehicle 100 can determine that allocation is completed).

Claims 2 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over TSUKAMOTO et al (U.S. Patent Application Publication 2023/0294739 A1) in view of WELLBORN et al (U.S. Patent Application Publication 2017/0213308 A1), and further in view of Fujimoto (U.S. Patent Application Publication 2018/0129981 A1).

Regarding claim 2, the combination of TSUKAMOTO in view of WELLBORN discloses everything claimed as applied above (see claim 1). However, TSUKAMOTO does not specifically disclose wherein the determining of whether the mobility has entered within the critical distance includes: activating a Bluetooth function of the mobility when it is determined that the mobility has entered within the critical distance with respect to the location specified by the user based on location information of the mobility; and activating the image sensor when a terminal of wirelessly communicating data with the mobility through the Bluetooth function is recognized. In addition, Fujimoto discloses (Abstract, a vehicle control system includes an output unit configured to output information to the outside of a vehicle, an in-vehicle status acquisition unit configured to acquire a status inside the vehicle, and a control unit configured to cause the output unit to output information as to whether or not it is possible to ride in the vehicle on the basis of in-vehicle information acquired by the in-vehicle status acquisition unit) wherein the determining of whether the mobility has entered within the critical distance (Paragraphs [0049]-[0053], FIG. 1 is a configuration diagram of a vehicle system 1 ... The radar device 12 radiates radio waves such as millimeter waves around the vehicle M and detects (reflected) radio waves reflected by an object to detect at least the position (distance and orientation) of the object ...) 
includes: activating a Bluetooth function of the mobility (FIG. 7; paragraph [0113], each of the terminal devices 400 is, for example, a smartphone or a tablet terminal. The terminal device 400 has a function of communicating with the vehicle M present around the terminal device 400 using Bluetooth) when it is determined that the mobility has entered within the critical distance with respect to the location specified by the user based on location information of the mobility (Paragraph [0075], when the vehicle M approaches a predetermined distance ...); and activating the image sensor (Paragraph [0115], the ride seeker determination unit 170 determines that a person P4 recognized by the external environment recognition unit 121 is a ride seeker, for example, upon receiving a notification including information indicating a ride seeker from the terminal device 400-1 of the person P4 outside the vehicle. In the example of FIG. 7; paragraph [0052], one or a plurality of cameras 10 may be attached to the vehicle M, on which the vehicle system 1 is mounted, at arbitrary locations thereof. For imaging the area in front of the vehicle, a camera 10 is attached to an upper portion of a front windshield ...) when a terminal of wirelessly communicating data with the mobility through the Bluetooth function is recognized (Paragraph [0115], the person P4 uses the terminal device 400-1 to output a signal indicating that he or she is a ride seeker to a surrounding area. The surrounding area is a communication range defined by a communication standard. The vehicle M receives a signal from the terminal device 400-1 through the communication device 20. The ride seeker determination unit 170 recognizes the person near the vehicle M by the external environment recognition unit 121 on the basis of the signal received from the terminal device 400-1 and determines that the person P4 is a ride seeker). 
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the mobile vehicle taught by TSUKAMOTO in view of WELLBORN to incorporate the teachings of Fujimoto, and to apply the vehicle control system taught by Fujimoto to implement a Bluetooth function within the mobile vehicle in order to provide communication between the vehicle and the user terminal. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify TSUKAMOTO in view of WELLBORN according to the relied-upon teachings of Fujimoto to obtain the invention as specified in the claim.

Regarding claim 13, the combination of TSUKAMOTO in view of WELLBORN discloses everything claimed as applied above (see claim 12). However, TSUKAMOTO does not specifically disclose wherein the operation of determining of whether the mobility has entered within the critical distance includes: an operation of activating a Bluetooth function of the mobility when it is determined that the mobility has entered within the critical distance with respect to the location specified by the user based on location information of the mobility; and an operation of activating the image sensor when a terminal of wirelessly communicating data with the mobility through the Bluetooth function is recognized. 
In addition, Fujimoto discloses (Abstract, a vehicle control system includes an output unit configured to output information to the outside of a vehicle, an in-vehicle status acquisition unit configured to acquire a status inside the vehicle, and a control unit configured to cause the output unit to output information as to whether or not it is possible to ride in the vehicle on the basis of in-vehicle information acquired by the in-vehicle status acquisition unit) wherein the operation of determining of whether the mobility has entered within the critical distance (Paragraphs [0049]-[0053], FIG. 1 is a configuration diagram of a vehicle system 1 ... The radar device 12 radiates radio waves such as millimeter waves around the vehicle M and detects (reflected) radio waves reflected by an object to detect at least the position (distance and orientation) of the object ...) includes: an operation of activating a Bluetooth function of the mobility (FIG. 7; paragraph [0113], each of the terminal devices 400 is, for example, a smartphone or a tablet terminal. The terminal device 400 has a function of communicating with the vehicle M present around the terminal device 400 using Bluetooth) when it is determined that the mobility has entered within the critical distance with respect to the location specified by the user based on location information of the mobility (Paragraph [0075], when the vehicle M approaches a predetermined distance ...); and an operation of activating the image sensor (Paragraph [0115], the ride seeker determination unit 170 determines that a person P4 recognized by the external environment recognition unit 121 is a ride seeker, for example, upon receiving a notification including information indicating a ride seeker from the terminal device 400-1 of the person P4 outside the vehicle. In the example of FIG. 
7; paragraph [0052], one or a plurality of cameras 10 may be attached to the vehicle M, on which the vehicle system 1 is mounted, at arbitrary locations thereof. For imaging the area in front of the vehicle, a camera 10 is attached to an upper portion of a front windshield ...) when a terminal of wirelessly communicating data with the mobility through the Bluetooth function is recognized (Paragraph [0115], the person P4 uses the terminal device 400-1 to output a signal indicating that he or she is a ride seeker to a surrounding area. The surrounding area is a communication range defined by a communication standard. The vehicle M receives a signal from the terminal device 400-1 through the communication device 20. The ride seeker determination unit 170 recognizes the person near the vehicle M by the external environment recognition unit 121 on the basis of the signal received from the terminal device 400-1 and determines that the person P4 is a ride seeker).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the mobile vehicle taught by TSUKAMOTO in view of WELLBORN to incorporate the teachings of Fujimoto, and to apply the vehicle control system taught by Fujimoto to implement a Bluetooth function within the mobile vehicle in order to provide communication between the vehicle and the user terminal. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify TSUKAMOTO in view of WELLBORN according to the relied-upon teachings of Fujimoto to obtain the invention as specified in the claim.

Claims 8 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over TSUKAMOTO et al (U.S. Patent Application Publication 2023/0294739 A1) in view of WELLBORN et al (U.S. Patent Application Publication 2017/0213308 A1), and further in view of Yasui et al (U.S. Patent Application Publication 2020/0104881 A1). 
Regarding claim 8, the combination of TSUKAMOTO in view of WELLBORN discloses everything claimed as applied above (see claim 7). However, TSUKAMOTO does not specifically disclose wherein the displaying of the riding target display information preset by the user on the screen includes displaying a notification message including at least portion of profile information of the riding target registered by the user on the screen. In addition, Yasui discloses (Abstract, a vehicle control system includes an outside display provided on the exterior of a vehicle and configured to display content toward the outside of the vehicle, a vehicle environment acquirer configured to acquire an environment of the vehicle, and a display controller configured to control display of the content on the outside display on the basis of the environment of the vehicle acquired by the vehicle environment acquirer) wherein the displaying of the riding target display information preset by the user (Paragraph [0197], information (e.g., a name, a nickname, a reservation number and the like) representing a user who has reserved use of the vehicle 200) on the screen (Paragraph [0058], FIG. 2 is a configuration diagram of the vehicle 200 ... an outside display 272) includes displaying a notification message including at least portion of profile information of the riding target registered by the user on the screen (Paragraph [0197], FIG. 23 is a perspective view of the vehicle 200 having the outside display 272 on which information on the user who has reserved use of the vehicle is displayed ...). 
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the mobile vehicle taught by TSUKAMOTO in view of WELLBORN to incorporate the teachings of Yasui, and to apply the vehicle control system taught by Yasui to display a notification message including the information of the user who has reserved use of the vehicle on the screen. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify TSUKAMOTO in view of WELLBORN according to the relied-upon teachings of Yasui to obtain the invention as specified in the claim.

Regarding claim 19, the combination of TSUKAMOTO in view of WELLBORN discloses everything claimed as applied above (see claim 15). However, TSUKAMOTO does not specifically disclose wherein the operation of displaying of the riding target display information preset by the user on the screen includes an operation of displaying a notification message including at least portion of profile information of the riding target registered by the user on the screen. In addition, Yasui discloses (Abstract, a vehicle control system includes an outside display provided on the exterior of a vehicle and configured to display content toward the outside of the vehicle, a vehicle environment acquirer configured to acquire an environment of the vehicle, and a display controller configured to control display of the content on the outside display on the basis of the environment of the vehicle acquired by the vehicle environment acquirer) wherein the operation of displaying of the riding target display information preset by the user (Paragraph [0197], information (e.g., a name, a nickname, a reservation number and the like) representing a user who has reserved use of the vehicle 200) on the screen (Paragraph [0058], FIG. 2 is a configuration diagram of the vehicle 200 ... 
an outside display 272) includes an operation of displaying a notification message including at least portion of profile information of the riding target registered by the user on the screen (Paragraph [0197], FIG. 23 is a perspective view of the vehicle 200 having the outside display 272 on which information on the user who has reserved use of the vehicle is displayed ...).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the mobile vehicle taught by TSUKAMOTO in view of WELLBORN to incorporate the teachings of Yasui, and to apply the vehicle control system taught by Yasui to display a notification message including the information of the user who has reserved use of the vehicle on the screen. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify TSUKAMOTO in view of WELLBORN according to the relied-upon teachings of Yasui to obtain the invention as specified in the claim.

Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over TSUKAMOTO et al (U.S. Patent Application Publication 2023/0294739 A1) in view of WELLBORN et al (U.S. Patent Application Publication 2017/0213308 A1), and further in view of Lines (U.S. Patent Application Publication 2020/0380533 A1).

Regarding claim 10, the combination of TSUKAMOTO in view of WELLBORN discloses everything claimed as applied above (see claim 1), and TSUKAMOTO further discloses wherein the displaying of the contents related to the riding target on the screen of the display device (FIG. 5B; paragraphs [0066]-[0067], the system status is indicated by visual information understanding 520 indicating a processing result of visual information ... In the visual information understanding 520, a display 524 of a recognized target is displayed in captured image information 523; paragraph [0071], FIG. 
5D schematically illustrates an example of the system status in a state in which the interaction illustrated in FIG. 5C has progressed between the vehicle 100 and the user 130. The image information 523 indicates a recognition result 541 of recognizing a person waving a hand and a recognition result 540 of recognizing a vending machine as a target) includes transmitting arrival information of the mobility to a terminal of the riding target (Paragraph [0074], FIG. 5G illustrates an example of interaction between the vehicle 100 and the user 130 when the vehicle 100 approaches the position of the vending machine 500 ... when the vehicle approaches the relative position, the vehicle transmits utterance information 570 of “Sorry to keep you waiting” to the communication device 120. The vehicle 100 stops at the currently set stop position. When the vehicle 100 receives utterance information 571 of “Thank you” by the user 130, the vehicle 100 can determine that allocation is completed). However, TSUKAMOTO does not specifically disclose transmitting arrival information of the mobility to a terminal of the riding target to be displayed through a push notification of an application, when the application for providing a service of the mobility is installed on the terminal of the riding target. In addition, Lines discloses (Abstract, an example driver verification method includes transmitting a ride request from a first mobile device of a user to a server of a ride hailing service, and receiving a first credential at the first mobile device from the server ...; FIG. 1; paragraph [0032], the user 12 utilizes their mobile device 14 to submit a ride request to server 30 through the WAN 32 using the first type of signaling. 
The server 30 communicates with the drivers 22 of the vehicles 20 via the driver's mobile devices 24, and assigns the ride request to one of the drivers 22) transmitting arrival information of the mobility to a terminal of the riding target to be displayed (FIG. 3; paragraph [0044], when the driver 22 is within a predefined distance of the user 12 (step 114). The server 30 also provides a notification to the user 12 when the driver 22 has arrived at the pickup location specified in the ride request (step 116). Based on the notifications of steps 114 and 116, the mobile device 14 provides corresponding alerts to the user 12, such as display changes on the phone, etc. Lines filed the application on May 29, 2000. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to understand that the phone described by Lines includes a screen for displaying information and images) through a push notification of an application (Paragraph [0003], these ride hailing services allow users to use their smartphones to operate a downloadable application that shares their location and requests a ride from nearby drivers; paragraph [0043], transmit data between the server 30 and the mobile device 14 of user 12; paragraph [0044], the server 30 also provides a notification to the user 12 when the driver 22 has arrived at the pickup location specified in the ride request (step 116). Lines discloses “the request is transmitted from a client application downloaded to the mobile device” and a notification is transmitted to the user mobile device when the driver has arrived at the pickup location specified in the ride request. 
Thus, the broadest reasonable interpretation of “transmit notification” and “push notification” can typically be considered equivalent), when the application for providing a service of the mobility is installed on the terminal of the riding target (Paragraph [0042], a user 12 utilizes their mobile device 14 to transmit a ride request to server 30 (step 102). In one example, the request is transmitted from a client application downloaded to the mobile device 14 ...).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the mobile vehicle taught by TSUKAMOTO in view of WELLBORN to incorporate the teachings of Lines, and to apply the ride hailing service taught by Lines to provide a downloadable application for each user mobile device; allow users to request a ride from nearby drivers and receive a notification when the driver has arrived at the pickup location specified in the ride request by using the downloaded application. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify TSUKAMOTO in view of WELLBORN according to the relied-upon teachings of Lines to obtain the invention as specified in the claim.

Regarding claim 11, the combination of TSUKAMOTO in view of WELLBORN discloses everything claimed as applied above (see claim 1), and TSUKAMOTO further discloses wherein the displaying of the contents related to the riding target on the screen of the display device (FIG. 5B; paragraphs [0066]-[0067], the system status is indicated by visual information understanding 520 indicating a processing result of visual information ... In the visual information understanding 520, a display 524 of a recognized target is displayed in captured image information 523; paragraph [0071], FIG. 5D schematically illustrates an example of the system status in a state in which the interaction illustrated in FIG. 
5C has progressed between the vehicle 100 and the user 130. The image information 523 indicates a recognition result 541 of recognizing a person waving a hand and a recognition result 540 of recognizing a vending machine as a target) includes: executing an automatic dialing to the terminal of the riding target when the application for providing the service of the mobility is not installed on the terminal of the riding target (Paragraph [0065], FIG. 5A schematically illustrates a state where the user 130 calls the vehicle 100 stopped at a standby location by utterance in an area where the vehicle 100 can travel ...; FIG. 3 shows the control unit 30 of the vehicle 100; FIG. 8B; paragraph [0107], the control unit 30 ends the stop position adjustment processing using the relative position, and returns to a calling-source processing); and transmitting arrival information of the mobility to the terminal of the riding target (Paragraph [0074], FIG. 5G illustrates an example of interaction between the vehicle 100 and the user 130 when the vehicle 100 approaches the position of the vending machine 500 ... when the vehicle approaches the relative position, the vehicle transmits utterance information 570 of “Sorry to keep you waiting” to the communication device 120. The vehicle 100 stops at the currently set stop position. When the vehicle 100 receives utterance information 571 of “Thank you” by the user 130, the vehicle 100 can determine that allocation is completed), when the display device is connected to the terminal of the riding target through the automatic dialing (Paragraph [0065], the user 130 calls the vehicle 100 stopped at a standby location by utterance in an area where the vehicle 100 can travel ...; paragraph [0107], the control unit 30 ends the stop position adjustment processing using the relative position, and returns to a calling-source processing). 
However, TSUKAMOTO does not specifically disclose the terminal of the riding target to be displayed on a screen of the terminal of the riding target. In addition, Lines discloses (Abstract, an example driver verification method includes transmitting a ride request from a first mobile device of a user to a server of a ride hailing service, and receiving a first credential at the first mobile device from the server ...; FIG. 1; paragraph [0032], the user 12 utilizes their mobile device 14 to submit a ride request to server 30 through the WAN 32 using the first type of signaling. The server 30 communicates with the drivers 22 of the vehicles 20 via the driver's mobile devices 24, and assigns the ride request to one of the drivers 22) the terminal of the riding target to be displayed on a screen of the terminal of the riding target (FIG. 3; paragraph [0044], when the driver 22 is within a predefined distance of the user 12 (step 114). The server 30 also provides a notification to the user 12 when the driver 22 has arrived at the pickup location specified in the ride request (step 116). Based on the notifications of steps 114 and 116, the mobile device 14 provides corresponding alerts to the user 12, such as display changes on the phone, etc. Lines filed the application on May 29, 2000. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to understand that the phone described by Lines includes a screen for displaying information and images).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the mobile vehicle taught by TSUKAMOTO in view of WELLBORN to incorporate the teachings of Lines, applying the ride-hailing service taught by Lines to provide a display screen for each user's mobile device and to allow the mobile device to display the arrival information when the driver has arrived at the pickup location specified in the ride request. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify TSUKAMOTO in view of WELLBORN according to the relied-upon teachings of Lines to obtain the invention as specified in the claim.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Xilin Guo, whose telephone number is (571) 272-5786. The examiner can normally be reached Monday-Friday, 9:00 AM-5:30 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /XILIN GUO/Primary Examiner, Art Unit 2616

Prosecution Timeline

Feb 26, 2024
Application Filed
Dec 31, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602855
LIVE MODEL PROMPTING AND REAL-TIME OUTPUT OF PHOTOREAL SYNTHETIC CONTENT
2y 5m to grant Granted Apr 14, 2026
Patent 12597403
DISPLAY DEVICE FOR A VEHICLE
2y 5m to grant Granted Apr 07, 2026
Patent 12579712
ASSET CREATION USING GENERATIVE ARTIFICIAL INTELLIGENCE
2y 5m to grant Granted Mar 17, 2026
Patent 12579766
SYSTEM AND METHOD FOR RAPID OUTFIT VISUALIZATION
2y 5m to grant Granted Mar 17, 2026
Patent 12573121
Automated Generation and Presentation of Sign Language Avatars for Video Content
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+17.4%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 456 resolved cases by this examiner. Grant probability derived from career allow rate.
