Prosecution Insights
Last updated: April 19, 2026
Application No. 18/671,409

MODIFYING PEDESTRIAN PERCEPTION OF A VEHICLE

Status: Final Rejection (§103)
Filed: May 22, 2024
Examiner: ADEDIRAN, ABDUL-SAMAD A
Art Unit: 2621
Tech Center: 2600 (Communications)
Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
OA Round: 3 (Final)

Grant Probability: 78% (Favorable)
Predicted OA Rounds: 4-5
Predicted Time to Grant: 2y 1m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allowance Rate: 78% (481 granted / 617 resolved), +16.0% vs. TC average (above average)
Interview Lift: +13.9% for resolved cases with an interview (moderate lift)
Average Prosecution Time: 2y 1m (fast prosecutor)
Currently Pending: 22
Total Applications: 639 (career history, across all art units)

Statute-Specific Performance

Statute   Rate     vs. TC Average
§101       1.8%    -38.2%
§103      41.2%     +1.2%
§102      19.5%    -20.5%
§112      29.0%    -11.0%

TC average is an estimate. Based on career data from 617 resolved cases.
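The headline examiner figures above reduce to simple ratios over the resolved caseload. A minimal arithmetic sketch; the 62% TC average is an assumption back-computed from the +16.0% delta shown above:

```python
# Reproduce the examiner panel's headline figures from the raw counts.
granted = 481
resolved = 617
tc_avg_allowance = 0.62  # assumption: implied by the +16.0% vs. TC avg delta

career_allow_rate = granted / resolved
delta_vs_tc = career_allow_rate - tc_avg_allowance

print(f"Career allowance rate: {career_allow_rate:.0%}")  # ~78%
print(f"Delta vs. TC average: {delta_vs_tc:+.1%}")        # ~+16.0%
```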

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed on December 24, 2025 has been entered and considered by the Examiner.

Claim Objections

Claim 1 is objected to because of the following informalities: Claim 1 recites the limitations “the eye trajectory of the pedestrian” and “the direction that is opposite” in the eighth line and the eighth through twelfth lines of the claim, but these are unclear at least because the claim uses the terms “the eye trajectory of the pedestrian” and “the direction that is opposite” for the first time without previously reciting them in the claim. The Examiner therefore suggests that the limitations “the eye trajectory of the pedestrian” and “the direction that is opposite” be amended, without adding new matter, in a manner that resolves the antecedent basis issues. Accordingly, any claims dependent on claim 1 are objected to on the same reasoning.

Claim 11 is objected to because of the following informalities: Claim 11 recites the limitations “the eye trajectory of the pedestrian” and “the direction that is opposite” in the tenth line and the tenth through thirteenth lines of the claim, but these are unclear at least because the claim uses the terms “the eye trajectory of the pedestrian” and “the direction that is opposite” for the first time without previously reciting them in the claim. The Examiner therefore suggests that the limitations “the eye trajectory of the pedestrian” and “the direction that is opposite” be amended, without adding new matter, in a manner that resolves the antecedent basis issues. Accordingly, any claims dependent on claim 11 are objected to on the same reasoning.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6-8, 11-14, 17-19, and 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Shalev-Schwartz et al., U.S. Patent Application Publication 2019/0283746 A1 (hereinafter Schwartz), in view of Kitayama et al., U.S. Patent Application Publication 2019/0213887 A1 (hereinafter Kitayama), and Chan, U.S. Patent Application Publication 2021/0118303 A1 (hereinafter Chan).

Regarding claim 1, Schwartz teaches a method, comprising: detecting a pedestrian in an environment of a vehicle (2700, 2620, 2610 FIGS. 1-2A, 4, 5A-5D, and 26A-27, paragraph[0450] of Schwartz teaches FIG. 26A shows an example of a scene that may be captured and analyzed during navigation of a host vehicle 2610 (which may comprise host vehicle 200); specifically, the scene shown in FIG.
26A is an example of one of the images that may be captured at time t from an environment of the host vehicle 2610; the navigation system may include at least one processing device that is specifically programmed to receive the plurality of images and analyze the images to determine a navigational action in response to the scene; specifically, the at least one processing device may implement pedestrian identification module 2502 to identify pedestrian 2620, and eye identification module 2504 may identify eyes of pedestrian 2620 and determine that pedestrian 2620 is looking towards host vehicle 2610; as explained above, eye identification module 2504 may make this determination by constructing a cone apexed at host vehicle 2610 or at pedestrian 2620 and determining that the looking direction of pedestrian 2620 intersects the cone at an angle less than a threshold or is within the cone, respectively; it will be appreciated that a cone is used here as an example of a geometric shape that can be used to translate the looking direction of the pedestrian into a geometric shape; it will be further appreciated that other 3-D shapes in real space can be used; furthermore, 2-D shapes may be used in image space instead of or in combination with the 3-D shapes in real space; for example, a triangle may be used in some embodiments; navigation action module 2506 may then determine a navigation action for host vehicle, such as deceleration, or braking, or switching lanes; and in some embodiments, the host vehicle may switch lanes and/or may decelerate to a stop, and See also at least ABSTRACT and paragraphs[0097], [0166], [0121], [0451]-[0461], and [0545] of Schwartz (i.e., Schwartz teaches identifying a pedestrian captured in an image that includes an environment of a host vehicle)); acquiring eye trajectory data of the pedestrian (FIGS.
1-2A, 4, 5A-5D, and 26A-27, paragraph[0455] of Schwartz teaches at step 2708, processing unit 110 may determine, based on analysis of the at least one of the plurality of images and based on the identification of the eyes of the at least one pedestrian in the at least one of the plurality of images, a looking direction of the at least one pedestrian; for example, based on characteristics of features associated with or near the eye, processing unit 110 may determine a looking direction; for example, processing unit 110 may identify a pupil and determine, based on this identification, that the pedestrian is looking towards the host vehicle; and additionally or alternatively, processing unit 110 may identify the back of a head and determine, based on this identification, that the pedestrian is looking away from the host vehicle, and See also at least ABSTRACT and paragraphs[0005]-[0006], [0008]-[0009], [0097], [0112], [0121], [0447], [0450]-[0454], [0456]-[0461], and [0545] of Schwartz (i.e., Schwartz teaches a processing unit that determines that a looking direction of a pedestrian is towards a host vehicle)); determining a change in the eye trajectory data of the pedestrian; to determine whether or not the eye trajectory of the pedestrian moves toward the direction that is; the travel direction of the vehicle (2708 FIGS. 
1-2A, 4, 5A-5D, and 26A-27, paragraph[0455] of Schwartz teaches at step 2708, processing unit 110 may determine, based on analysis of the at least one of the plurality of images and based on the identification of the eyes of the at least one pedestrian in the at least one of the plurality of images, a looking direction of the at least one pedestrian; for example, based on characteristics of features associated with or near the eye, processing unit 110 may determine a looking direction; for example, processing unit 110 may identify a pupil and determine, based on this identification, that the pedestrian is looking towards the host vehicle; and additionally or alternatively, processing unit 110 may identify the back of a head and determine, based on this identification, that the pedestrian is looking away from the host vehicle, and See also at least ABSTRACT and paragraphs[0097], [0121], [0450]-[0454], [0456]-[0461], and [0545] of Schwartz (i.e., Schwartz teaches a processing unit that determines that a looking direction of a pedestrian is towards a host vehicle, or determines a looking direction of the pedestrian is away from the host vehicle)); and responsive to determining that the eye trajectory of the pedestrian moves toward the direction that is; the travel direction of the vehicle, causing (2712 FIGS. 
1-2A, 4, 5A-5D, and 26A-27, paragraph[0459] of Schwartz teaches at step 2712, processing unit 110 may cause control of at least one navigational actuator of the host vehicle in accordance with the determined first or second navigational action for the host vehicle; the navigational actuator may include at least one of a steering mechanism, a brake, or an accelerator; and processing unit 110 may cause one or more signals to be transmitted to the navigational actuator to trigger a navigational action, and See also at least ABSTRACT and paragraphs[0450]-[0458], and [0460]-[0461] of Schwartz (i.e., Schwartz teaches a processing unit that determines a looking direction of a pedestrian is towards a host vehicle)); but do not expressly teach comparing the change in the eye trajectory data to a travel direction of the vehicle; opposite; opposite; a perception of the vehicle by the pedestrian to be modified. However, Kitayama teaches comparing the change in the eye trajectory data to a travel direction of the vehicle (FIGS. 13, paragraph[0115] of Kitayama teaches however, the position of a vehicle that is traveling on the roadway may be detected, and whether or not the gaze direction of the target pedestrian is facing the direction of the vehicle may be determined; and then, when the gaze direction of the target pedestrian also moves in accompaniment with the position of the vehicle moving, the target pedestrian may be determined to have performed the safety confirmation regarding vehicles, and See also at least ABSTRACT and paragraphs[0111]-[0114] of Kitayama (i.e., Kitayama teaches analyzing a gaze direction of a pedestrian with respect to a direction of a traveling vehicle)); but the combination of Schwartz and Kitayama do not expressly teach opposite; opposite; a perception of the vehicle by the pedestrian to be modified. However, Chan teaches opposite; opposite; a perception of the vehicle by the pedestrian to be modified (FIGS. 
1, 3, and 7-8, paragraph[0099] of Chan teaches the likelihood for the pedestrian to recognize the vehicle 100 may vary depending on what the pedestrian is currently doing; for example, although the pedestrian’s viewing direction is opposite to the driving direction of the vehicle 100, if the pedestrian is looking straight at her cell phone, the pedestrian may be less likely to recognize the vehicle 100; and as another example, although the pedestrian's viewing direction is identical to the driving direction of the vehicle 100, if the pedestrian tries to hail a cab, the pedestrian may be highly likely to recognize the vehicle 100, and See also at least ABSTRACT and paragraphs[0012], [0049]-[0050], [0089]-[0098], and [0100]-[0146] of Chan (i.e., Chan teaches a pedestrian wearing a headset and having viewing direction that is opposite to a driving direction of a vehicle, wherein a sensor module of the vehicle is capable of detecting the pedestrian’s behavior and a processor in the vehicle determines a first recognition value based on the behavior, and wherein a warning message is transmitted to the headset based on the pedestrian’s behavior via the processor and a communication module of the vehicle)), but the combination of Schwartz and Chan still do not expressly teach comparing the change in the eye trajectory data to a travel direction of the vehicle. Furthermore, Schwartz, Kitayama, and Chan are considered to be analogous art because they are from the same field of endeavor with respect to a vehicle, and involve the same problem of suitably, which includes safely, operating the vehicle based on a pedestrian. 
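The cone construction attributed to Schwartz above (a cone apexed at the host vehicle or at the pedestrian, with the looking direction tested against a threshold angle) reduces to an angle check between two direction vectors. A minimal sketch, not Schwartz's implementation; the vectors, names, and the 20-degree half-angle are illustrative assumptions:

```python
import math

def gaze_within_cone(gaze_dir, apex_to_target, half_angle_deg):
    """Return True if gaze_dir falls inside a cone whose axis points
    along apex_to_target, i.e. the angle between the two vectors is
    below the cone's half-angle."""
    dot = sum(g * t for g, t in zip(gaze_dir, apex_to_target))
    norm = math.hypot(*gaze_dir) * math.hypot(*apex_to_target)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle < half_angle_deg

# Pedestrian at the origin looking roughly toward a vehicle up the road
# (hypothetical 2-D direction vectors).
gaze = (0.9, 0.1)
to_vehicle = (1.0, 0.0)
print(gaze_within_cone(gaze, to_vehicle, 20.0))  # True: ~6.3 deg off-axis
```

The same dot-product test works unchanged for the 3-D cones Schwartz mentions; only the vector length changes.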
Therefore, before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to modify the method and system of Schwartz based on Kitayama and Chan for comparing the change in the eye trajectory data to a travel direction of the vehicle to determine whether or not the eye trajectory of the pedestrian moves toward the direction that is opposite the travel direction of the vehicle; responsive to determining that the eye trajectory of the pedestrian moves toward the direction opposite the travel direction of the vehicle, causing a perception of the vehicle by the pedestrian to be modified. One reason for the modification as taught by Kitayama is to suitably detect a pedestrian who is present in the periphery of a vehicle (paragraph[0002] of Kitayama). Another reason for the modification as taught by Chan is to suitably control a vehicle and output a warning signal depending on behaviors of pedestrians near the vehicle (ABSTRACT, paragraphs[0002], and [0009]-[0010] of Chan). The same motivation and rationale to combine for claim 1 mentioned above, in light of corresponding statement of grounds of rejection, applies to all corresponding dependent claims mentioned in the corresponding statement of grounds of rejection. Regarding claim 2, Schwartz, Kitayama, and Chan teach the method of claim 1, wherein causing a perception of the vehicle by the pedestrian to be modified includes causing a behavior of the vehicle to be modified (FIGS. 
1, 3, and 7-9, paragraph[0118] of Chan teaches if the first recognition value is determined, the vehicle 100 may output a warning signal (S300); and specifically, the processor 110 may control an output device, e.g., the communication module 150, the lamp 160, the horn 170, or the spraying device 180, to output a warning signal, and See also at least ABSTRACT and paragraphs[0012], [0016], [0043], [0049]-[0050], [0089]-[0117], and [0119]-[0145] of Chan (i.e., Chan teaches a pedestrian wearing a headset and having viewing direction that is opposite to a driving direction of a vehicle, wherein a sensor module of the vehicle is capable of detecting the pedestrian’s behavior and a processor in the vehicle determines a first recognition value based on the behavior, wherein a warning message is transmitted to the headset based on the pedestrian’s behavior via the processor and a communication module of the vehicle, and wherein if the first recognition value is determined then the vehicle is capable of controlling an output device (e.g., the communication module, a lamp, a horn, or a spraying device to output a warning signal thereby warning the pedestrian of danger))). Regarding claim 3, Schwartz, Kitayama, and Chan teach the method of claim 2, wherein causing a behavior of the vehicle to be modified includes decreasing a speed of the vehicle (FIGS. 
1, 3, and 7-9, paragraphs[0143]-[0144] of Chan teaches in particular, if the second recognition value is less than the control reference value, the processor 110 may control the driving module 140 to reduce the speed of the vehicle 100; meanwhile, where the vehicle 100 is an AV 100, the processor 110 may control the driving module 140 via an obstacle avoidance algorithm, allowing the vehicle 100 to drive around the pedestrian; regarding speed control, the processor 110 may control the speed of the vehicle 100 to be proportional to the second recognition value; and in other words, the processor 110 may control the speed of the vehicle 100 to be lower as the second recognition value decreases and to be higher as the second recognition value increases, and See also at least ABSTRACT and paragraphs[0012], [0016], [0043], [0049]-[0050], [0071]-[0072], [0089]-[0142], and [0145] of Chan (i.e., Chan teaches reducing speed of the vehicle when the second recognition value is less than a control reference value)). Regarding claim 6, Schwartz, Kitayama, and Chan teach the method of claim 1, wherein acquiring eye trajectory data of the pedestrian is performed by one or more pedestrian sensors carried on the vehicle (122, 124, 126 FIGS. 1-2A, 4, 5A-5D, and 26A-27, paragraph[0112] of Schwartz teaches one or more cameras (e.g., image capture devices 122, 124, and 126) may be part of a sensing block included on a vehicle; various other sensors may be included in the sensing block, and any or all of the sensors may be relied upon to develop a sensed navigational state of the vehicle; in addition to cameras (forward, sideward, rearward, etc.), other sensors such as RADAR, LIDAR, and acoustic sensors may be included in the sensing block; additionally, the sensing block may include one or more components configured to communicate and transmit/receive information relating to the environment of the vehicle; for example, such components may include wireless transceivers (RF, etc.) 
that may receive from a source remotely located with respect to the host vehicle sensor-based information or any other type of information relating to the environment of the host vehicle; such information may include sensor output information, or related information, received from vehicle systems other than the host vehicle; in some embodiments, such information may include information received from a remote computing device, a centralized server, etc; and furthermore, the cameras may take on many different configurations: single camera units, multiple cameras, camera clusters, long FOV, short FOV, wide angle, fisheye, etc, and See also at least ABSTRACT and paragraphs[0005]-[0006], [0008]-[0009], [0097], [0108]-[0109], [0111], [0121], [0157], [0437]-[0444], [0447], [0450]-[0454], [0456]-[0461], and [0545] of Schwartz (i.e., Schwartz teaches a processing unit that determines that a looking direction of a pedestrian is towards a host vehicle by utilizing image capture devices included on the vehicle, wherein the image capture devices acquire images for input to an image processor of the processing unit)). Regarding claim 7, Schwartz, Kitayama, and Chan teach the method of claim 1, wherein acquiring eye trajectory data of the pedestrian is performed by one or more infrastructure devices (122, 124, 126 FIGS. 
1-2A, 4, 5A-5D, and 26A-27, paragraph[0112] of Schwartz teaches one or more cameras (e.g., image capture devices 122, 124, and 126) may be part of a sensing block included on a vehicle; various other sensors may be included in the sensing block, and any or all of the sensors may be relied upon to develop a sensed navigational state of the vehicle; in addition to cameras (forward, sideward, rearward, etc.), other sensors such as RADAR, LIDAR, and acoustic sensors may be included in the sensing block; additionally, the sensing block may include one or more components configured to communicate and transmit/receive information relating to the environment of the vehicle; for example, such components may include wireless transceivers (RF, etc.) that may receive from a source remotely located with respect to the host vehicle sensor-based information or any other type of information relating to the environment of the host vehicle; such information may include sensor output information, or related information, received from vehicle systems other than the host vehicle; in some embodiments, such information may include information received from a remote computing device, a centralized server, etc; and furthermore, the cameras may take on many different configurations: single camera units, multiple cameras, camera clusters, long FOV, short FOV, wide angle, fisheye, etc, and See also at least ABSTRACT and paragraphs[0005]-[0006], [0008]-[0009], [0097], [0108]-[0109], [0111], [0121], [0157], [0437]-[0444], [0447], [0450]-[0454], [0456]-[0461], and [0545] of Schwartz (i.e., Schwartz teaches a processing unit that determines that a looking direction of a pedestrian is towards a host vehicle by utilizing image capture devices included on the vehicle, wherein the image capture devices acquire images for input to an image processor of the processing unit, and wherein at least one image capture device is connected to the processing unit)). 
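Chan's scheme, as characterized in the claim 2-3 mapping above (a recognition value reflecting how likely the pedestrian is to notice the vehicle, a warning when it falls below a control reference value, and speed controlled in proportion to the value), might be sketched as follows. All names and numeric values here are hypothetical, not taken from Chan:

```python
def adjust_for_behavior(base_value, correction):
    """Scale a recognition value by a stored behavior correction
    (e.g. a pedestrian absorbed in a phone is less likely to notice
    the vehicle, so the correction is below 1)."""
    return base_value * correction

def target_speed(recognition_value, control_reference, max_speed):
    """Speed proportional to the recognition value, with a warning
    condition when the value drops below the control reference."""
    warn = recognition_value < control_reference
    speed = max_speed * recognition_value
    return speed, warn

value = adjust_for_behavior(1.0, 0.6)      # distracted pedestrian
speed, warn = target_speed(value, 0.8, 50.0)
print(value, speed, warn)
```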
Regarding claim 8, Schwartz, Kitayama, and Chan teach the method of claim 1, wherein acquiring eye trajectory data of the pedestrian is performed by receiving eye trajectory data from a mobile device of the pedestrian (FIGS. 1, 3, and 7-8, paragraph[0104] of Chan teaches referring back to FIG. 8, the sensor module 130 may identify pedestrian A adjacent to the driving road, and the processor 110 may determine that the first recognition value of pedestrian A is 1 depending on the walking direction of pedestrian A; subsequently, the sensor module 130 may identify that pedestrian A is using a cell phone; and the memory 120 may previously store a recognition correction value, e.g., 0.6, corresponding to the behavior of using a cell phone, and the processor 110 may determine that the first recognition value of pedestrian A is 0.6 which is a result of multiplying the first recognition value, 1, by 0.6, by referring to the memory 120, and See also at least ABSTRACT and paragraphs[0012], [0049]-[0050], [0089]-[0099], [0100]-[0103], and [0105]-[0145] of Chan (i.e., Chan teaches a pedestrian wearing a headset and having viewing direction that is opposite to a driving direction of a vehicle, wherein a sensor module of the vehicle is capable of detecting the pedestrian’s behavior and a processor in the vehicle determines a first recognition value based on the behavior, wherein a warning message is transmitted to the headset based on the pedestrian’s behavior via the processor and a communication module of the vehicle, and wherein the sensor module is capable of identifying the pedestrian using their cell phone and the pedestrian’s behavior such as looking at their cell phone)). Regarding claim 11, Schwartz teaches a system, comprising: (100 FIGS. 1-2A, 4, 5A-5D, and 26A-27, paragraph[0097] of Schwartz teaches FIG. 
1 is a block diagram representation of a system 100 consistent with the exemplary disclosed embodiments; system 100 may include various components depending on the requirements of a particular implementation; in some embodiments, system 100 may include a processing unit 110, an image acquisition unit 120, a position sensor 130, one or more memory units 140, 150, a map database 160, a user interface 170, and a wireless transceiver 172; processing unit 110 may include one or more processing devices; in some embodiments, processing unit 110 may include an applications processor 180, an image processor 190, or any other suitable processing device; similarly, image acquisition unit 120 may include any number of image acquisition devices and components depending on the requirements of a particular application; in some embodiments, image acquisition unit 120 may include one or more image capture devices (e.g., cameras, CCDs, or any other type of image sensor), such as image capture device 122, image capture device 124, and image capture device 126; system 100 may also include a data interface 128 communicatively connecting processing unit 110 to image acquisition unit 120; and for example, data interface 128 may include any wired and/or wireless link or links for transmitting image data acquired by image acquisition unit 120 to processing unit 110, and See also at least ABSTRACT and paragraphs[0121], [0450]-[0454], and [0456]-[0461] of Schwartz (i.e., Schwartz teaches a system that is capable of being included in a vehicle)) one or more processors programmed to initiate executable operations, the executable operations including: (110 FIGS. 
1-2A, 4, 5A-5D, and 26A-27, paragraph[0455] of Schwartz teaches at step 2708, processing unit 110 may determine, based on analysis of the at least one of the plurality of images and based on the identification of the eyes of the at least one pedestrian in the at least one of the plurality of images, a looking direction of the at least one pedestrian; for example, based on characteristics of features associated with or near the eye, processing unit 110 may determine a looking direction; for example, processing unit 110 may identify a pupil and determine, based on this identification, that the pedestrian is looking towards the host vehicle; and additionally or alternatively, processing unit 110 may identify the back of a head and determine, based on this identification, that the pedestrian is looking away from the host vehicle, and See also at least ABSTRACT and paragraphs[0097], [0113], [0121], [0450]-[0454], [0456]-[0461], and [0545] of Schwartz (i.e., Schwartz teaches a processing unit that determines that a looking direction of a pedestrian is towards a host vehicle)) detecting a pedestrian in an environment of a vehicle (2620, 2610 FIGS. 1-2A, 4, 5A-5D, and 26A-27, paragraph[0450] of Schwartz teaches FIG. 26A shows an example of a scene that may be captured and analyzed during navigation of a host vehicle 2610 (which may comprise host vehicle 200); specifically, the scene shown in FIG. 
26A is an example of one of the images that may be captured at time t from an environment of the host vehicle 2610; the navigation system may include at least one processing device that is specifically programmed to receive the plurality of images and analyze the images to determine a navigational action in response to the scene; specifically, the at least one processing device may implement pedestrian identification module 2502 to identify pedestrian 2620, and eye identification module 2504 may identify eyes of pedestrian 2620 and determine that pedestrian 2620 is looking towards host vehicle 2610; as explained above, eye identification module 2504 may make this determination by constructing a cone apexed at host vehicle 2610 or at pedestrian 2620 and determining that the looking direction of pedestrian 2620 intersects the cone at an angle less than a threshold or is within the cone, respectively; it will be appreciated that a cone is used here as an example of a geometric shape that can be used to translate the looking direction of the pedestrian into a geometric shape; it will be further appreciated that other 3-D shapes in real space can be used; furthermore, 2-D shapes may be used in image space instead of or in combination with the 3-D shapes in real space; for example, a triangle may be used in some embodiments; navigation action module 2506 may then determine a navigation action for host vehicle, such as deceleration, or braking, or switching lanes; and in some embodiments, the host vehicle may switch lanes and/or may decelerate to a stop, and See also at least ABSTRACT and paragraphs[0097], [0166], [0121], [0451]-[0461], and [0545] of Schwartz (i.e., Schwartz teaches identifying a pedestrian captured in an image that includes an environment of a host vehicle)); acquiring eye trajectory data of the pedestrian (FIGS.
1-2A, 4, 5A-5D, and 26A-27, paragraph[0455] of Schwartz teaches at step 2708, processing unit 110 may determine, based on analysis of the at least one of the plurality of images and based on the identification of the eyes of the at least one pedestrian in the at least one of the plurality of images, a looking direction of the at least one pedestrian; for example, based on characteristics of features associated with or near the eye, processing unit 110 may determine a looking direction; for example, processing unit 110 may identify a pupil and determine, based on this identification, that the pedestrian is looking towards the host vehicle; and additionally or alternatively, processing unit 110 may identify the back of a head and determine, based on this identification, that the pedestrian is looking away from the host vehicle, and See also at least ABSTRACT and paragraphs[0005]-[0006], [0008]-[0009], [0097], [0112], [0121], [0447], [0450]-[0454], [0456]-[0461], and [0545] of Schwartz (i.e., Schwartz teaches a processing unit that determines that a looking direction of a pedestrian is towards a host vehicle)); determining a change in the eye trajectory data of the pedestrian; to determine whether or not the eye trajectory of the pedestrian moves toward the direction that is; the travel direction of the vehicle (2708 FIGS. 
1-2A, 4, 5A-5D, and 26A-27, paragraph[0455] of Schwartz teaches at step 2708, processing unit 110 may determine, based on analysis of the at least one of the plurality of images and based on the identification of the eyes of the at least one pedestrian in the at least one of the plurality of images, a looking direction of the at least one pedestrian; for example, based on characteristics of features associated with or near the eye, processing unit 110 may determine a looking direction; for example, processing unit 110 may identify a pupil and determine, based on this identification, that the pedestrian is looking towards the host vehicle; and additionally or alternatively, processing unit 110 may identify the back of a head and determine, based on this identification, that the pedestrian is looking away from the host vehicle, and See also at least ABSTRACT and paragraphs[0097], [0121], [0450]-[0454], [0456]-[0461], and [0545] of Schwartz (i.e., Schwartz teaches a processing unit that determines that a looking direction of a pedestrian is towards a host vehicle, or determines a looking direction of the pedestrian is away from the host vehicle)); and responsive to determining that the eye trajectory of the pedestrian moves toward the direction that is; the travel direction of the vehicle, causing (2712 FIGS. 
1-2A, 4, 5A-5D, and 26A-27, paragraph[0459] of Schwartz teaches at step 2712, processing unit 110 may cause control of at least one navigational actuator of the host vehicle in accordance with the determined first or second navigational action for the host vehicle; the navigational actuator may include at least one of a steering mechanism, a brake, or an accelerator; and processing unit 110 may cause one or more signals to be transmitted to the navigational actuator to trigger a navigational action, and See also at least ABSTRACT and paragraphs[0450]-[0458], and [0460]-[0461] of Schwartz (i.e., Schwartz teaches a processing unit that determines a looking direction of a pedestrian is towards a host vehicle)); but do not expressly teach comparing the change in the eye trajectory data to a travel direction of the vehicle; opposite; opposite; a perception of the vehicle by the pedestrian to be modified. However, Kitayama teaches comparing the change in the eye trajectory data to a travel direction of the vehicle (FIGS. 13, paragraph[0115] of Kitayama teaches however, the position of a vehicle that is traveling on the roadway may be detected, and whether or not the gaze direction of the target pedestrian is facing the direction of the vehicle may be determined; and then, when the gaze direction of the target pedestrian also moves in accompaniment with the position of the vehicle moving, the target pedestrian may be determined to have performed the safety confirmation regarding vehicles, and See also at least ABSTRACT and paragraphs[0111]-[0114] of Kitayama (i.e., Kitayama teaches analyzing a gaze direction of a pedestrian with respect to a direction of a traveling vehicle)); but the combination of Schwartz and Kitayama do not expressly teach opposite; opposite; a perception of the vehicle by the pedestrian to be modified. However, Chan teaches opposite; opposite; a perception of the vehicle by the pedestrian to be modified (FIGS. 
1, 3, and 7-8, paragraph[0099] of Chan teaches the likelihood for the pedestrian to recognize the vehicle 100 may vary depending on what the pedestrian is currently doing; for example, although the pedestrian’s viewing direction is opposite to the driving direction of the vehicle 100, if the pedestrian is looking straight at her cell phone, the pedestrian may be less likely to recognize the vehicle 100; and as another example, although the pedestrian's viewing direction is identical to the driving direction of the vehicle 100, if the pedestrian tries to hail a cab, the pedestrian may be highly likely to recognize the vehicle 100, and See also at least ABSTRACT and paragraphs[0012], [0049]-[0050], [0089]-[0098], and [0100]-[0146] of Chan (i.e., Chan teaches a pedestrian wearing a headset and having viewing direction that is opposite to a driving direction of a vehicle, wherein a sensor module of the vehicle is capable of detecting the pedestrian’s behavior and a processor in the vehicle determines a first recognition value based on the behavior, and wherein a warning message is transmitted to the headset based on the pedestrian’s behavior via the processor and a communication module of the vehicle)), but the combination of Schwartz and Chan still do not expressly teach comparing the change in the eye trajectory data to a travel direction of the vehicle. Furthermore, Schwartz, Kitayama, and Chan are considered to be analogous art because they are from the same field of endeavor with respect to a vehicle, and involve the same problem of suitably, which includes safely, operating the vehicle based on a pedestrian. 
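Kitayama's safety-confirmation determination cited above (the pedestrian's gaze direction moving in accompaniment with the moving vehicle's position) can be sketched as a per-timestep bearing comparison. The sampling, names, and 15-degree tolerance are illustrative assumptions, not Kitayama's disclosure:

```python
def gaze_tracks_vehicle(gaze_angles_deg, vehicle_bearings_deg, tol_deg=15.0):
    """True if, at every timestep, the pedestrian's gaze stays within
    tol_deg of the bearing from pedestrian to vehicle -- i.e. the gaze
    moves in accompaniment with the vehicle's position."""
    return all(
        abs((g - b + 180.0) % 360.0 - 180.0) <= tol_deg  # wrap-safe diff
        for g, b in zip(gaze_angles_deg, vehicle_bearings_deg)
    )

# Vehicle passes from the pedestrian's left to right; gaze follows it
# (hypothetical bearing samples in degrees).
bearings = [60.0, 40.0, 20.0, 0.0]
gaze = [55.0, 43.0, 18.0, 2.0]
print(gaze_tracks_vehicle(gaze, bearings))  # True: safety confirmation
```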
Therefore, before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to modify the method and system of Schwartz based on Kitayama and Chan for comparing the change in the eye trajectory data to a travel direction of the vehicle to determine whether or not the eye trajectory of the pedestrian moves toward the direction that is opposite the travel direction of the vehicle; responsive to determining that the eye trajectory of the pedestrian moves toward the direction opposite the travel direction of the vehicle, causing a perception of the vehicle by the pedestrian to be modified. One reason for the modification as taught by Kitayama is to suitably detect a pedestrian who is present in the periphery of a vehicle (paragraph[0002] of Kitayama). Another reason for the modification as taught by Chan is to suitably control a vehicle and output a warning signal depending on behaviors of pedestrians near the vehicle (ABSTRACT, paragraphs[0002], and [0009]-[0010] of Chan). The same motivation and rationale to combine for claim 1 mentioned above, in light of corresponding statement of grounds of rejection, applies to all corresponding dependent claims mentioned in the corresponding statement of grounds of rejection. Regarding claim 12, Schwartz, Kitayama, and Chan teach the system of claim 11, wherein the one or more processors are carried on the vehicle (110, 2700, 2708 FIGS. 
1 and 27, paragraph[0455] of Schwartz teaches at step 2708, processing unit 110 may determine, based on analysis of the at least one of the plurality of images and based on the identification of the eyes of the at least one pedestrian in the at least one of the plurality of images, a looking direction of the at least one pedestrian; for example, based on characteristics of features associated with or near the eye, processing unit 110 may determine a looking direction; for example, processing unit 110 may identify a pupil and determine, based on this identification, that the pedestrian is looking towards the host vehicle; and additionally or alternatively, processing unit 110 may identify the back of a head and determine, based on this identification, that the pedestrian is looking away from the host vehicle, and See also at least ABSTRACT and paragraphs[0097], [0113], [0121], [0452]-[0454], and [0456]-[0461] of Schwartz (i.e., Schwartz teaches the processing unit that determines that the looking direction of the pedestrian is towards the host vehicle, wherein the vehicle is equipped with the processing unit)). Regarding claim 13, Schwartz, Kitayama, and Chan teach the system of claim 11, wherein causing a perception of the vehicle by the pedestrian to be modified includes causing a behavior of the vehicle to be modified (FIGS. 
1, 3, and 7-9, paragraph[0118] of Chan teaches if the first recognition value is determined, the vehicle 100 may output a warning signal (S300); and specifically, the processor 110 may control an output device, e.g., the communication module 150, the lamp 160, the horn 170, or the spraying device 180, to output a warning signal, and See also at least ABSTRACT and paragraphs[0012], [0016], [0043], [0049]-[0050], [0089]-[0117], and [0119]-[0145] of Chan (i.e., Chan teaches a pedestrian wearing a headset and having viewing direction that is opposite to a driving direction of a vehicle, wherein a sensor module of the vehicle is capable of detecting the pedestrian’s behavior and a processor in the vehicle determines a first recognition value based on the behavior, wherein a warning message is transmitted to the headset based on the pedestrian’s behavior via the processor and a communication module of the vehicle, and wherein if the first recognition value is determined then the vehicle is capable of controlling an output device (e.g., the communication module, a lamp, a horn, or a spraying device to output a warning signal thereby warning the pedestrian of danger))). Regarding claim 14, Schwartz, Kitayama, and Chan teach the system of claim 13, wherein causing a behavior of the vehicle to be modified includes decreasing a speed of the vehicle (FIGS. 
1, 3, and 7-9, paragraphs[0143]-[0144] of Chan teaches in particular, if the second recognition value is less than the control reference value, the processor 110 may control the driving module 140 to reduce the speed of the vehicle 100; meanwhile, where the vehicle 100 is an AV 100, the processor 110 may control the driving module 140 via an obstacle avoidance algorithm, allowing the vehicle 100 to drive around the pedestrian; regarding speed control, the processor 110 may control the speed of the vehicle 100 to be proportional to the second recognition value; and in other words, the processor 110 may control the speed of the vehicle 100 to be lower as the second recognition value decreases and to be higher as the second recognition value increases, and See also at least ABSTRACT and paragraphs[0012], [0016], [0043], [0049]-[0050], [0071]-[0072], [0089]-[0142], and [0145] of Chan (i.e., Chan teaches reducing speed of the vehicle when the second recognition value is less than a control reference value)). Regarding claim 17, Schwartz, Kitayama, and Chan teach the system of claim 11, further including one or more pedestrian sensors carried on the vehicle, wherein the one or more pedestrian sensors are operatively connected to the one or more processors, and wherein acquiring eye trajectory data of the pedestrian is performed by one or more pedestrian sensors (122, 124, 126 FIGS. 
1-2A, 4, 5A-5D, and 26A-27, paragraph[0112] of Schwartz teaches one or more cameras (e.g., image capture devices 122, 124, and 126) may be part of a sensing block included on a vehicle; various other sensors may be included in the sensing block, and any or all of the sensors may be relied upon to develop a sensed navigational state of the vehicle; in addition to cameras (forward, sideward, rearward, etc.), other sensors such as RADAR, LIDAR, and acoustic sensors may be included in the sensing block; additionally, the sensing block may include one or more components configured to communicate and transmit/receive information relating to the environment of the vehicle; for example, such components may include wireless transceivers (RF, etc.) that may receive from a source remotely located with respect to the host vehicle sensor-based information or any other type of information relating to the environment of the host vehicle; such information may include sensor output information, or related information, received from vehicle systems other than the host vehicle; in some embodiments, such information may include information received from a remote computing device, a centralized server, etc; and furthermore, the cameras may take on many different configurations: single camera units, multiple cameras, camera clusters, long FOV, short FOV, wide angle, fisheye, etc, and See also at least ABSTRACT and paragraphs[0005]-[0006], [0008]-[0009], [0097], [0108]-[0109], [0111], [0121], [0157], [0437]-[0444], [0447], [0450]-[0454], [0456]-[0461], and [0545] of Schwartz (i.e., Schwartz teaches a processing unit that determines that a looking direction of a pedestrian is towards a host vehicle by utilizing image capture devices included on the vehicle, wherein the image capture devices acquire images for input to an image processor of the processing unit)). 
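The recognition-value scheme this ground attributes to Chan can be sketched as below. Only three elements come from the cited paragraphs: the 0.6 correction for cell-phone use (Chan paragraph [0104]), the warning-sound-first priority (Chan paragraph [0134]), and speed proportional to the recognition value (Chan paragraphs [0143]-[0144]). Every function name, data structure, and threshold value is a hypothetical stand-in.

```python
# Hypothetical sketch of Chan's cited recognition-value scheme.
BEHAVIOR_CORRECTIONS = {"using_cell_phone": 0.6}  # 0.6 per Chan [0104]
WARNING_PRIORITY = ["warning_sound", "warning_message", "water", "air"]  # sound first, per Chan [0134]

def recognition_value(base_value, behaviors):
    """Scale the base recognition value by each observed behavior's stored correction."""
    value = base_value
    for behavior in behaviors:
        value *= BEHAVIOR_CORRECTIONS.get(behavior, 1.0)
    return value

def respond(value, warning_reference, control_reference, max_speed):
    """Choose a warning output and a target speed from the recognition value."""
    actions = {}
    if value < warning_reference:
        actions["warning"] = WARNING_PRIORITY[0]  # output the highest-priority signal
    if value < control_reference:
        actions["target_speed"] = max_speed * value  # speed proportional to the value
    return actions

v = recognition_value(1.0, ["using_cell_phone"])  # 1 x 0.6 = 0.6, as in Chan [0104]
print(respond(v, warning_reference=0.8, control_reference=0.7, max_speed=50.0))
```

With these illustrative thresholds, a pedestrian looking at a cell phone falls below both references, so the sketch both warns (horn first) and slows the vehicle in proportion to the value.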
Regarding claim 18, Schwartz, Kitayama, and Chan teach the system of claim 11, wherein acquiring eye trajectory data of the pedestrian includes receiving the eye trajectory data from one or more infrastructure devices, and wherein the one or more infrastructure devices are operatively connected to the one or more processors (122, 124, 126 FIGS. 1-2A, 4, 5A-5D, and 26A-27, paragraph[0112] of Schwartz teaches one or more cameras (e.g., image capture devices 122, 124, and 126) may be part of a sensing block included on a vehicle; various other sensors may be included in the sensing block, and any or all of the sensors may be relied upon to develop a sensed navigational state of the vehicle; in addition to cameras (forward, sideward, rearward, etc.), other sensors such as RADAR, LIDAR, and acoustic sensors may be included in the sensing block; additionally, the sensing block may include one or more components configured to communicate and transmit/receive information relating to the environment of the vehicle; for example, such components may include wireless transceivers (RF, etc.) 
that may receive from a source remotely located with respect to the host vehicle sensor-based information or any other type of information relating to the environment of the host vehicle; such information may include sensor output information, or related information, received from vehicle systems other than the host vehicle; in some embodiments, such information may include information received from a remote computing device, a centralized server, etc; and furthermore, the cameras may take on many different configurations: single camera units, multiple cameras, camera clusters, long FOV, short FOV, wide angle, fisheye, etc, and See also at least ABSTRACT and paragraphs[0005]-[0006], [0008]-[0009], [0097], [0108]-[0109], [0111], [0121], [0157], [0437]-[0444], [0447], [0450]-[0454], [0456]-[0461], and [0545] of Schwartz (i.e., Schwartz teaches a processing unit that determines that a looking direction of a pedestrian is towards a host vehicle by utilizing image capture devices included on the vehicle, wherein the image capture devices acquire images for input to an image processor of the processing unit, and wherein at least one image capture device is connected to the processing unit)). Regarding claim 19, Schwartz, Kitayama, and Chan teach the system of claim 11, wherein acquiring eye trajectory data of the pedestrian includes receiving the eye trajectory data from a mobile device of a pedestrian, and wherein the mobile device is operatively connected to the one or more processors (FIGS. 1, 3, and 7-8, paragraph[0104] of Chan teaches referring back to FIG. 
8, the sensor module 130 may identify pedestrian A adjacent to the driving road, and the processor 110 may determine that the first recognition value of pedestrian A is 1 depending on the walking direction of pedestrian A; subsequently, the sensor module 130 may identify that pedestrian A is using a cell phone; and the memory 120 may previously store a recognition correction value, e.g., 0.6, corresponding to the behavior of using a cell phone, and the processor 110 may determine that the first recognition value of pedestrian A is 0.6 which is a result of multiplying the first recognition value, 1, by 0.6, by referring to the memory 120, and See also at least ABSTRACT and paragraphs[0012], [0016], [0043], [0049]-[0050], [0071]-[0072], and [0085]-[0145] of Chan (i.e., Chan teaches a pedestrian wearing a headset and having a viewing direction that is opposite to a driving direction of a vehicle, wherein a sensor module of the vehicle is capable of detecting the pedestrian’s behavior and a processor in the vehicle determines a first recognition value based on the behavior, wherein a warning message is transmitted to the headset based on the pedestrian’s behavior via the processor and a communication module of the vehicle, wherein the sensor module is capable of identifying the pedestrian using their cell phone and the pedestrian’s behavior such as looking at their cell phone, and wherein vehicle speed is reduced when the second recognition value is less than a control reference value based on a pedestrian’s behavior including the viewing direction while wearing a headset)). Regarding claim 22, Schwartz teaches further including: responsive to determining that the eye trajectory of the pedestrian does not move toward the direction that is opposite the travel direction of the vehicle, (2708 FIGS. 
1-2A, 4, 5A-5D, and 26A-27, paragraph[0455] of Schwartz teaches at step 2708, processing unit 110 may determine, based on analysis of the at least one of the plurality of images and based on the identification of the eyes of the at least one pedestrian in the at least one of the plurality of images, a looking direction of the at least one pedestrian; for example, based on characteristics of features associated with or near the eye, processing unit 110 may determine a looking direction; for example, processing unit 110 may identify a pupil and determine, based on this identification, that the pedestrian is looking towards the host vehicle; and additionally or alternatively, processing unit 110 may identify the back of a head and determine, based on this identification, that the pedestrian is looking away from the host vehicle, and See also at least ABSTRACT and paragraphs[0097], [0121], [0450]-[0454], [0456]-[0461], and [0545] of Schwartz (i.e., Schwartz teaches a processing unit that determines that a looking direction of a pedestrian is towards a host vehicle, or determines a looking direction of the pedestrian is away from the host vehicle)); but does not expressly teach taking no action regarding modifying the perception of the vehicle by the pedestrian. However, Chan teaches taking no action regarding modifying the perception of the vehicle by the pedestrian (FIGS. 
1, 3, and 7-8, paragraph[0099] of Chan teaches the likelihood for the pedestrian to recognize the vehicle 100 may vary depending on what the pedestrian is currently doing; for example, although the pedestrian’s viewing direction is opposite to the driving direction of the vehicle 100, if the pedestrian is looking straight at her cell phone, the pedestrian may be less likely to recognize the vehicle 100; and as another example, although the pedestrian's viewing direction is identical to the driving direction of the vehicle 100, if the pedestrian tries to hail a cab, the pedestrian may be highly likely to recognize the vehicle 100, and See also at least ABSTRACT and paragraphs[0012], [0049]-[0050], [0072]-[0073], [0089]-[0098], and [0100]-[0145] of Chan (i.e., Chan teaches a pedestrian wearing a headset and having viewing direction that is opposite to a driving direction of a vehicle, wherein a sensor module of the vehicle is capable of detecting the pedestrian’s behavior and a processor in the vehicle determines a first recognition value based on the behavior, and wherein, when a recognition value is less than a warning reference value that indicates a likelihood to recognize a vehicle, a warning message is transmitted to the headset based on the pedestrian’s behavior via the processor and a communication module of the vehicle)). Regarding claim 23, Schwartz teaches wherein the executable operations further include: responsive to determining that the eye trajectory of the pedestrian does not move toward the direction that is opposite the travel direction of the vehicle, (2708 FIGS. 
1-2A, 4, 5A-5D, and 26A-27, paragraph[0455] of Schwartz teaches at step 2708, processing unit 110 may determine, based on analysis of the at least one of the plurality of images and based on the identification of the eyes of the at least one pedestrian in the at least one of the plurality of images, a looking direction of the at least one pedestrian; for example, based on characteristics of features associated with or near the eye, processing unit 110 may determine a looking direction; for example, processing unit 110 may identify a pupil and determine, based on this identification, that the pedestrian is looking towards the host vehicle; and additionally or alternatively, processing unit 110 may identify the back of a head and determine, based on this identification, that the pedestrian is looking away from the host vehicle, and See also at least ABSTRACT and paragraphs[0097], [0121], [0450]-[0454], [0456]-[0461], and [0545] of Schwartz (i.e., Schwartz teaches a processing unit that determines that a looking direction of a pedestrian is towards a host vehicle, or determines a looking direction of the pedestrian is away from the host vehicle)); but does not expressly teach taking no action regarding modifying the perception of the vehicle by the pedestrian. However, Chan teaches taking no action regarding modifying the perception of the vehicle by the pedestrian (FIGS. 
1, 3, and 7-8, paragraph[0099] of Chan teaches the likelihood for the pedestrian to recognize the vehicle 100 may vary depending on what the pedestrian is currently doing; for example, although the pedestrian’s viewing direction is opposite to the driving direction of the vehicle 100, if the pedestrian is looking straight at her cell phone, the pedestrian may be less likely to recognize the vehicle 100; and as another example, although the pedestrian's viewing direction is identical to the driving direction of the vehicle 100, if the pedestrian tries to hail a cab, the pedestrian may be highly likely to recognize the vehicle 100, and See also at least ABSTRACT and paragraphs[0012], [0049]-[0050], [0072]-[0073], [0089]-[0098], and [0100]-[0145] of Chan (i.e., Chan teaches a pedestrian wearing a headset and having viewing direction that is opposite to a driving direction of a vehicle, wherein a sensor module of the vehicle is capable of detecting the pedestrian’s behavior and a processor in the vehicle determines a first recognition value based on the behavior, and wherein, when a recognition value is less than a warning reference value that indicates a likelihood to recognize a vehicle, a warning message is transmitted to the headset based on the pedestrian’s behavior via the processor and a communication module of the vehicle)). Claims 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Schwartz, in view of Kitayama, Chan, and Rice et al., U.S. Patent Application Publication 2019/0080602 A1 (hereinafter Rice). Regarding claim 4, Schwartz, Kitayama, and Chan teach the method of claim 1, further including […], causing the perception of the vehicle by the pedestrian to be modified includes causing a degree of modification to be increased (FIGS. 1, 3, and 7-9, paragraph[0134] of Chan teaches referring to FIG. 
9, when the pedestrian's first recognition value is less than the warning reference value, the processor 110 may output a warning sound, warning message, water, or air via the output device; in this case, a priority may be preset on each warning signal, and the warning sound may have the highest priority; and thus, the processor 110 may control the horn 170 to output the warning sound, and See also at least ABSTRACT and paragraphs[0012], [0016], [0043], [0049]-[0050], [0071]-[0072], [0089]-[0133], and [0135]-[0145] of Chan (i.e., Chan teaches at least enhancing the warning signal based on priority via a warning sound, warning message, water or air)); but do not expressly teach determining whether the vehicle is located in a rural environment; and when the vehicle is determined to be located in a rural environment. However, Rice teaches determining whether the vehicle is located in a rural environment; and when the vehicle is determined to be located in a rural environment (FIGS. 1-2, paragraph[0065] of Rice teaches in some implementations, the vehicle computing system 102 can determine a current modification for a system onboard the vehicle 104 based at least in part on the current operating conditions of the vehicle 104; by way of example, the vehicle computing system 102 can determine that the vehicle’s current speed is decreasing and/or below a threshold and that the vehicle 104 is travelling in a geographic area with a low crowd density (e.g., in a rural environment without pedestrians); the vehicle computing system 102 can determine that the vehicle 104 can safely operate below the maximum capability of the sensor system 111 based at least in part on such vehicle parameter(s) 140; accordingly, the vehicle computing system 102 can determine a modification to one or more operating characteristics of the sensor system 111; the operating characteristic(s) of the sensor system 111 can include, for example, data acquisition characteristics (e.g., associated with the 
collection of the sensor data 118 by a sensor 112) and/or data processing characteristics (e.g., associated with the processing of the sensor data 118); for instance, the vehicle computing system 102 can determine (e.g., using the rule(s)-based algorithm, model 202, etc.) that one or more of the sensors 118 (e.g., cameras) should be modified to utilize a lower frame rate and/or acquire sensor data 118 at a lower rate; and this can allow the sensor system 111 to consume less power for image acquisition/processing and, ultimately, to generate less heat, and See also at least ABSTRACT and paragraphs[0021], [0026], [0034], and [0058] of Rice (i.e., Rice teaches a vehicle computing system that is capable of determining a vehicle is traveling in a geographic area with a low crowd density)). Furthermore, Schwartz, Kitayama, Chan, and Rice are considered to be analogous art because they are from the same field of endeavor with respect to a vehicle, and involve the same problem of suitably, which includes safely, operating the vehicle based on a pedestrian. Therefore, before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to modify the method and system of Schwartz based on Kitayama, Chan, and Rice for determining whether the vehicle is located in a rural environment; and when the vehicle is determined to be located in a rural environment. One reason for the modification as taught by Kitayama is to suitably detect a pedestrian who is present in the periphery of a vehicle (paragraph[0002] of Kitayama). Another reason for the modification as taught by Chan is to suitably control a vehicle and output a warning signal depending on behaviors of pedestrians near the vehicle (ABSTRACT, paragraphs[0002], and [0009]-[0010] of Chan). 
Still another reason for the modification as taught by Rice is to suitably determine that a vehicle can safely operate below maximum capability of a sensor system of the vehicle (paragraphs[0026] and [0065] of Rice). The same motivation and rationale to combine for claim 4 mentioned above, in light of corresponding statement of grounds of rejection, applies to all claims mentioned in the corresponding statement of grounds of rejection. Regarding claim 15, Schwartz, Kitayama, and Chan teach the system of claim 11, wherein the executable operations further include […], causing the perception of the vehicle by the pedestrian to be modified includes causing a degree of modification to be increased (FIGS. 1, 3, and 7-9, paragraph[0134] of Chan teaches referring to FIG. 9, when the pedestrian's first recognition value is less than the warning reference value, the processor 110 may output a warning sound, warning message, water, or air via the output device; in this case, a priority may be preset on each warning signal, and the warning sound may have the highest priority; and thus, the processor 110 may control the horn 170 to output the warning sound, and See also at least ABSTRACT and paragraphs[0012], [0016], [0043], [0049]-[0050], [0071]-[0072], [0089]-[0133], and [0135]-[0145] of Chan (i.e., Chan teaches at least enhancing the warning signal based on priority via a warning sound, warning message, water or air)); but do not expressly teach determining whether the vehicle is located in a rural environment; and when the vehicle is determined to be located in a rural environment. However, Rice teaches determining whether the vehicle is located in a rural environment; and when the vehicle is determined to be located in a rural environment (FIGS. 
1-2, paragraph[0065] of Rice teaches in some implementations, the vehicle computing system 102 can determine a current modification for a system onboard the vehicle 104 based at least in part on the current operating conditions of the vehicle 104; by way of example, the vehicle computing system 102 can determine that the vehicle’s current speed is decreasing and/or below a threshold and that the vehicle 104 is travelling in a geographic area with a low crowd density (e.g., in a rural environment without pedestrians); the vehicle computing system 102 can determine that the vehicle 104 can safely operate below the maximum capability of the sensor system 111 based at least in part on such vehicle parameter(s) 140; accordingly, the vehicle computing system 102 can determine a modification to one or more operating characteristics of the sensor system 111; the operating characteristic(s) of the sensor system 111 can include, for example, data acquisition characteristics (e.g., associated with the collection of the sensor data 118 by a sensor 112) and/or data processing characteristics (e.g., associated with the processing of the sensor data 118); for instance, the vehicle computing system 102 can determine (e.g., using the rule(s)-based algorithm, model 202, etc.) that one or more of the sensors 118 (e.g., cameras) should be modified to utilize a lower frame rate and/or acquire sensor data 118 at a lower rate; and this can allow the sensor system 111 to consume less power for image acquisition/processing and, ultimately, to generate less heat, and See also at least ABSTRACT and paragraphs[0021], [0026], [0034], and [0058] of Rice (i.e., Rice teaches a vehicle computing system that is capable of determining a vehicle is traveling in a geographic area with a low crowd density)). Claims 9-10 and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Schwartz, in view of Kitayama, Chan, and Sendai et al., U.S. 
Patent Application Publication 2016/0209916 A1 (hereinafter Sendai). Regarding claim 9, Schwartz, Kitayama, and Chan teach the method of claim 1, wherein causing a perception of the vehicle by the pedestrian to be modified includes causing (FIGS. 1, 3, and 7-8, paragraph[0099] of Chan teaches the likelihood for the pedestrian to recognize the vehicle 100 may vary depending on what the pedestrian is currently doing; for example, although the pedestrian’s viewing direction is opposite to the driving direction of the vehicle 100, if the pedestrian is looking straight at her cell phone, the pedestrian may be less likely to recognize the vehicle 100; and as another example, although the pedestrian's viewing direction is identical to the driving direction of the vehicle 100, if the pedestrian tries to hail a cab, the pedestrian may be highly likely to recognize the vehicle 100, and See also at least ABSTRACT and paragraphs[0012], [0049]-[0050], [0089]-[0098], and [0100]-[0145] of Chan (i.e., Chan teaches a pedestrian wearing a headset and having viewing direction that is opposite to a driving direction of a vehicle, wherein a sensor module of the vehicle is capable of detecting the pedestrian’s behavior and a processor in the vehicle determines a first recognition value based on the behavior, and wherein a warning message is transmitted to the headset based on the pedestrian’s behavior via the processor and a communication module of the vehicle)); but do not expressly teach a size of an extended reality representation of the vehicle to be modified in an extended reality display device carried or worn by the pedestrian. However, Sendai teaches a size of an extended reality representation of the vehicle to be modified in an extended reality display device carried or worn by the pedestrian (FIGS. 
5 and 14, paragraph[0166] of Sendai teaches in an initial state (i.e., immediately after the start of the procedure a42), the guiding section 144 displays the virtual object VO31 having a default size in a default position; thereafter, the guiding section 144 changes at least any one of the position, the size, and the shape of the virtual object VO31 to follow the motion of, for example, the part of the body of the user in the procedure a42; the default position can be set as, for example, a predetermined position (the center, etc.) on a screen or a position overlapping any real object on the screen; in the example shown in FIG. 14, the guiding section 144 changes the position and the size of the virtual object VO31 to follow the motion of the right hand RH and the left hand LH of the user; for example, if a circle formed by the right hand RH and the left hand LH gradually decreases (increases) in size, the guiding section 144 gradually reduces (increases) the diameter of the virtual object VO31; and further, if the right hand RH and the left hand LH move upward (downward), the guiding section 144 moves the virtual object VO31 upward (downward) to follow the movement of the right hand RH and the left hand LH, and See also at least ABSTRACT and paragraphs[0027]-[0028], [0037], [0061]-[0062], [0086], [0122]-[0130], [0165], and [0223] of Sendai (i.e., Sendai teaches changing, on a head mounted display worn by a user, a size (i.e., increase or decrease the size) of a virtual object such as a car)). Furthermore, Schwartz, Kitayama, Chan, and Sendai are considered to be analogous art because they are from the same field of endeavor with respect to a vehicle, and involve the same problem of suitably, which includes safely, operating the vehicle based on a pedestrian. 
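The gesture-following resize this ground cites from Sendai, where the virtual object's diameter tracks the circle formed by the user's hands and its position follows the hands, reduces to a simple update rule. The function name, dictionary representation, and scale factor below are hypothetical choices for this sketch, not details from Sendai.

```python
# Hypothetical sketch of Sendai's cited resize-and-follow behavior for a
# virtual object (e.g., a car) shown on a head-mounted display.

def update_virtual_object(obj, hand_circle_diameter, hands_center, scale=1.0):
    """Return the virtual object resized and repositioned to follow the hands."""
    updated = dict(obj)  # leave the original object state untouched
    updated["diameter"] = hand_circle_diameter * scale  # shrink/grow with the hand circle
    updated["position"] = hands_center                  # move up/down with the hands
    return updated

car = {"name": "car", "diameter": 10.0, "position": (0.0, 0.0)}
print(update_virtual_object(car, hand_circle_diameter=6.0, hands_center=(0.0, 2.0)))
```

Shrinking the hand circle from 10.0 to 6.0 shrinks the displayed car accordingly; the claimed perception modification of claims 9-10 and 20 would instead drive the diameter up to make the vehicle appear larger.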
Therefore, before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to modify the method and system of Schwartz based on Kitayama, Chan, and Sendai to have a size of an extended reality representation of the vehicle to be modified in an extended reality display device carried or worn by the pedestrian. One reason for the modification as taught by Kitayama is to suitably detect a pedestrian who is present in the periphery of a vehicle (paragraph[0002] of Kitayama). Another reason for the modification as taught by Chan is to suitably control a vehicle and output a warning signal depending on behaviors of pedestrians near the vehicle (ABSTRACT, paragraphs[0002], and [0009]-[0010] of Chan). Still another reason for the modification as taught by Sendai is to have a suitable head-mounted display device with which a user can visually recognize a virtual image (ABSTRACT of Sendai). The same motivation and rationale to combine for claim 9 mentioned above, in light of corresponding statement of grounds of rejection, applies to all corresponding dependent claims mentioned in the corresponding statement of grounds of rejection. Regarding claim 10, Schwartz, Kitayama, Chan, and Sendai teach the method of claim 9, wherein causing the size of the extended reality representation of the vehicle to be modified in an extended reality display device carried or worn by the pedestrian includes causing the size of the extended reality representation of the vehicle to be increased in the extended reality display device carried or worn by the pedestrian (FIGS. 
5 and 14, paragraph[0166] of Sendai teaches in an initial state (i.e., immediately after the start of the procedure a42), the guiding section 144 displays the virtual object VO31 having a default size in a default position; thereafter, the guiding section 144 changes at least any one of the position, the size, and the shape of the virtual object VO31 to follow the motion of, for example, the part of the body of the user in the procedure a42; the default position can be set as, for example, a predetermined position (the center, etc.) on a screen or a position overlapping any real object on the screen; in the example shown in FIG. 14, the guiding section 144 changes the position and the size of the virtual object VO31 to follow the motion of the right hand RH and the left hand LH of the user; for example, if a circle formed by the right hand RH and the left hand LH gradually decreases (increases) in size, the guiding section 144 gradually reduces (increases) the diameter of the virtual object VO31; and further, if the right hand RH and the left hand LH move upward (downward), the guiding section 144 moves the virtual object VO31 upward (downward) to follow the movement of the right hand RH and the left hand LH, and See also at least ABSTRACT and paragraphs[0027]-[0028], [0037], [0061]-[0062], [0086], [0122]-[0130], [0165], and [0223] of Sendai (i.e., Sendai teaches changing, on a head mounted display worn by a user, a size (i.e., increase or decrease the size) of a virtual object such as a car)). Regarding claim 20, Schwartz, Kitayama, and Chan teach the system of claim 11, wherein causing a perception of the vehicle by the pedestrian to be modified includes causing (FIGS. 
1, 3, and 7-8, paragraph[0099] of Chan teaches the likelihood for the pedestrian to recognize the vehicle 100 may vary depending on what the pedestrian is currently doing; for example, although the pedestrian’s viewing direction is opposite to the driving direction of the vehicle 100, if the pedestrian is looking straight at her cell phone, the pedestrian may be less likely to recognize the vehicle 100; and as another example, although the pedestrian's viewing direction is identical to the driving direction of the vehicle 100, if the pedestrian tries to hail a cab, the pedestrian may be highly likely to recognize the vehicle 100, and See also at least ABSTRACT and paragraphs[0012], [0049]-[0050], [0089]-[0098], and [0100]-[0145] of Chan (i.e., Chan teaches a pedestrian wearing a headset and having a viewing direction that is opposite to a driving direction of a vehicle, wherein a sensor module of the vehicle is capable of detecting the pedestrian’s behavior and a processor in the vehicle determines a first recognition value based on the behavior, and wherein a warning message is transmitted to the headset based on the pedestrian’s behavior via the processor and a communication module of the vehicle)); but do not expressly teach a size of the vehicle presented in an extended reality device worn or carried by the pedestrian to be modified.

However, Sendai teaches a size of the vehicle presented in an extended reality device worn or carried by the pedestrian to be modified (FIGS.
5 and 14, paragraph[0166] of Sendai teaches in an initial state (i.e., immediately after the start of the procedure a42), the guiding section 144 displays the virtual object VO31 having a default size in a default position; thereafter, the guiding section 144 changes at least any one of the position, the size, and the shape of the virtual object VO31 to follow the motion of, for example, the part of the body of the user in the procedure a42; the default position can be set as, for example, a predetermined position (the center, etc.) on a screen or a position overlapping any real object on the screen; in the example shown in FIG. 14, the guiding section 144 changes the position and the size of the virtual object VO31 to follow the motion of the right hand RH and the left hand LH of the user; for example, if a circle formed by the right hand RH and the left hand LH gradually decreases (increases) in size, the guiding section 144 gradually reduces (increases) the diameter of the virtual object VO31; and further, if the right hand RH and the left hand LH move upward (downward), the guiding section 144 moves the virtual object VO31 upward (downward) to follow the movement of the right hand RH and the left hand LH, and See also at least ABSTRACT and paragraphs[0027]-[0028], [0037], [0061]-[0062], [0086], [0122]-[0130], [0165], and [0223] of Sendai (i.e., Sendai teaches changing, on a head mounted display worn by a user, a size (i.e., increase or decrease the size) of a virtual object such as a car)).

Furthermore, Schwartz, Kitayama, Chan, and Sendai are considered to be analogous art because they are from the same field of endeavor with respect to a vehicle, and involve the same problem of suitably, which includes safely, operating the vehicle based on a pedestrian.
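For readers mapping Sendai's paragraph [0166] onto the claim language, the resize-to-follow-hands behavior the Examiner relies on can be sketched in a few lines. This is an illustrative reconstruction, not Sendai's disclosed implementation; the class and function names are hypothetical.

```python
from dataclasses import dataclass
import math


@dataclass
class VirtualObject:
    """Hypothetical stand-in for Sendai's virtual object VO31."""
    x: float
    y: float
    diameter: float


def follow_hands(obj: VirtualObject, right_hand: tuple, left_hand: tuple) -> VirtualObject:
    """Scale and move the virtual object to follow the user's hands,
    per the behavior quoted from paragraph [0166]: the diameter tracks
    the circle formed by the two hands, and the position tracks their
    midpoint (so the object follows the hands' upward/downward motion).
    """
    rx, ry = right_hand
    lx, ly = left_hand
    # Diameter of the circle the two hands form (their separation)
    obj.diameter = math.hypot(rx - lx, ry - ly)
    # Midpoint of the hands becomes the object's new position
    obj.x = (rx + lx) / 2
    obj.y = (ry + ly) / 2
    return obj
```

Bringing the hands closer shrinks the object's diameter, and raising both hands moves the object upward, matching the "gradually reduces (increases)" and "upward (downward)" behavior quoted above.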
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the method and system of Schwartz based on Kitayama, Chan, and Sendai to have a size of the vehicle presented in an extended reality device worn or carried by the pedestrian to be modified. One reason for the modification as taught by Kitayama is to suitably detect a pedestrian who is present in the periphery of a vehicle (paragraph[0002] of Kitayama). Another reason for the modification as taught by Chan is to suitably control a vehicle and output a warning signal depending on behaviors of pedestrians near the vehicle (ABSTRACT, paragraphs[0002], and [0009]-[0010] of Chan). Still another reason for the modification as taught by Sendai is to have a suitable head-mounted display device with which a user can visually recognize a virtual image (ABSTRACT of Sendai). The same motivation and rationale to combine for claim 20 mentioned above, in light of the corresponding statement of grounds of rejection, applies to all corresponding dependent claims mentioned in the corresponding statement of grounds of rejection.

Regarding claim 21, Schwartz, Kitayama, Chan, and Sendai teach the system of claim 20, wherein causing the size of the vehicle presented in the extended reality device worn or carried by the pedestrian to be modified includes causing the size of the vehicle to be increased in the extended reality device worn or carried by the pedestrian (FIGS.
5 and 14, paragraph[0166] of Sendai teaches in an initial state (i.e., immediately after the start of the procedure a42), the guiding section 144 displays the virtual object VO31 having a default size in a default position; thereafter, the guiding section 144 changes at least any one of the position, the size, and the shape of the virtual object VO31 to follow the motion of, for example, the part of the body of the user in the procedure a42; the default position can be set as, for example, a predetermined position (the center, etc.) on a screen or a position overlapping any real object on the screen; in the example shown in FIG. 14, the guiding section 144 changes the position and the size of the virtual object VO31 to follow the motion of the right hand RH and the left hand LH of the user; for example, if a circle formed by the right hand RH and the left hand LH gradually decreases (increases) in size, the guiding section 144 gradually reduces (increases) the diameter of the virtual object VO31; and further, if the right hand RH and the left hand LH move upward (downward), the guiding section 144 moves the virtual object VO31 upward (downward) to follow the movement of the right hand RH and the left hand LH, and See also at least ABSTRACT and paragraphs[0027]-[0028], [0037], [0061]-[0062], [0086], [0122]-[0130], [0165], and [0223] of Sendai (i.e., Sendai teaches changing, on a head mounted display worn by a user, a size (i.e., increase or decrease the size) of a virtual object such as a car)).

Response to Arguments
Applicant's arguments filed December 24, 2025 have been fully considered but they are not persuasive.
The following is a brief summary of Applicant’s arguments: In regard to currently amended claim 1, Applicant submitted that the combination of prior art of record does not disclose the following limitations: “determining a change in the eye trajectory data of the pedestrian; comparing the change in the eye trajectory data to a travel direction of the vehicle to determine whether or not the eye trajectory of the pedestrian moves toward the direction that is opposite the travel direction of the vehicle”. Examiner respectfully disagrees. In regard to the argument ‘A’ summarized above, paragraph[0455] of Schwartz teaches at step 2708, processing unit 110 may determine, based on analysis of the at least one of the plurality of images and based on the identification of the eyes of the at least one pedestrian in the at least one of the plurality of images, a looking direction of the at least one pedestrian; for example, based on characteristics of features associated with or near the eye, processing unit 110 may determine a looking direction; for example, processing unit 110 may identify a pupil and determine, based on this identification, that the pedestrian is looking towards the host vehicle; and additionally or alternatively, processing unit 110 may identify the back of a head and determine, based on this identification, that the pedestrian is looking away from the host vehicle, and See also at least ABSTRACT and paragraphs[0097], [0121], [0450]-[0454], [0456]-[0461], and [0545] of Schwartz. Thus, Schwartz teaches a processing unit that determines that a looking direction of a pedestrian is towards a host vehicle, or determines that a looking direction of the pedestrian is away from the host vehicle.
In addition, paragraph[0115] of Kitayama teaches that the position of a vehicle that is traveling on the roadway may be detected, and whether or not the gaze direction of the target pedestrian is facing the direction of the vehicle may be determined; and then, when the gaze direction of the target pedestrian also moves in accompaniment with the position of the vehicle moving, the target pedestrian may be determined to have performed the safety confirmation regarding vehicles, and See also at least ABSTRACT and paragraphs[0111]-[0114] of Kitayama. Thus, Kitayama teaches analyzing a gaze direction of a pedestrian with respect to a direction of a traveling vehicle. Furthermore, as mentioned above, Schwartz, Kitayama, and Chan are considered to be analogous art because they are from the same field of endeavor with respect to a vehicle, and involve the same problem of suitably, which includes safely, operating the vehicle based on a pedestrian. Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the method and system of Schwartz based on Kitayama and Chan for comparing the change in the eye trajectory data to a travel direction of the vehicle to determine whether or not the eye trajectory of the pedestrian moves toward the direction that is opposite the travel direction of the vehicle; responsive to determining that the eye trajectory of the pedestrian moves toward the direction opposite the travel direction of the vehicle, causing a perception of the vehicle by the pedestrian to be modified. One reason for the modification as taught by Kitayama is to suitably detect a pedestrian who is present in the periphery of a vehicle (paragraph[0002] of Kitayama). Another reason for the modification as taught by Chan is to suitably control a vehicle and output a warning signal depending on behaviors of pedestrians near the vehicle (ABSTRACT, paragraphs[0002], and [0009]-[0010] of Chan).
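The disputed limitation, comparing a change in eye trajectory against the vehicle's travel direction, can be sketched as a simple geometric test. This is a hedged illustration of the claim logic as the Examiner characterizes it, not code from any cited reference; the function and parameter names are hypothetical.

```python
import math


def _alignment(gaze: tuple, reference: tuple) -> float:
    """Cosine of the angle between a gaze direction and a reference
    direction (both 2D vectors); higher means more closely aligned."""
    gx, gy = gaze
    rx, ry = reference
    gn = math.hypot(gx, gy)
    rn = math.hypot(rx, ry)
    return (gx * rx + gy * ry) / (gn * rn)


def gaze_moves_opposite_travel(gaze_t0: tuple, gaze_t1: tuple, travel_dir: tuple) -> bool:
    """Return True when the change in gaze direction between two samples
    moves toward the direction opposite the vehicle's travel direction,
    i.e. the later gaze is more aligned with -travel_dir than the earlier one."""
    opposite = (-travel_dir[0], -travel_dir[1])
    return _alignment(gaze_t1, opposite) > _alignment(gaze_t0, opposite)
```

For a vehicle traveling in the +x direction, a gaze that rotates from straight ahead (+y) toward -x trips the test, while a gaze rotating toward +x (the travel direction itself) does not.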
The same motivation and rationale to combine for claim 1 mentioned above, in light of the corresponding statement of grounds of rejection, applies to all corresponding dependent claims mentioned in the corresponding statement of grounds of rejection. Also, in regard to independent claim 1, Applicant submitted that similar arguments apply to independent claim 11 and respective dependent claims. Therefore, the Examiner's response in regard to arguments ‘A’, summarized above, also applies to independent claim 11 and respective dependent claims.

Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDUL-SAMAD A ADEDIRAN whose telephone number is (571)272-3128. The examiner can normally be reached Monday through Thursday, 8:00 am to 5:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr Awad, can be reached at 571-272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ABDUL-SAMAD A ADEDIRAN/Primary Examiner, Art Unit 2621

Prosecution Timeline

May 22, 2024
Application Filed
Mar 06, 2025
Non-Final Rejection — §103
May 19, 2025
Interview Requested
May 29, 2025
Examiner Interview Summary
May 29, 2025
Applicant Interview (Telephonic)
May 31, 2025
Response Filed
Sep 20, 2025
Non-Final Rejection — §103
Nov 25, 2025
Interview Requested
Dec 05, 2025
Applicant Interview (Telephonic)
Dec 05, 2025
Examiner Interview Summary
Dec 24, 2025
Response Filed
Mar 30, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604613
DISPLAY DEVICE
2y 5m to grant Granted Apr 14, 2026
Patent 12592188
PIXEL CIRCUITS AND DISPLAY PANELS
2y 5m to grant Granted Mar 31, 2026
Patent 12586527
PIXEL DRIVING CIRCUIT, DISPLAY DEVICE INCLUDING THE SAME, AND METHOD FOR DRIVING THE DISPLAY DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12586496
DISPLAY DEVICE AND METHOD OF DRIVING A DISPLAY DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12572202
Determining IPD By Adjusting The Positions Of Displayed Stimuli
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

4-5
Expected OA Rounds
78%
Grant Probability
92%
With Interview (+13.9%)
2y 1m
Median Time to Grant
High
PTA Risk
Based on 617 resolved cases by this examiner. Grant probability derived from career allow rate.
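The projections above appear to combine the career allow rate (481 granted of 617 resolved, roughly 78%) with the +13.9% interview lift to reach the 92% with-interview figure. A minimal sketch of that arithmetic, assuming the lift is applied as a simple additive percentage-point adjustment:

```python
# Figures from the examiner panel above
granted, resolved = 481, 617
interview_lift = 13.9  # percentage points, per the panel

# Career allow rate in percent: 481 / 617 is about 77.96, rounding to 78
career_allow_rate = round(100 * granted / resolved)

# With-interview estimate, assuming the lift simply adds to the base rate
with_interview = round(career_allow_rate + interview_lift)

print(career_allow_rate, with_interview)  # prints: 78 92
```

Whether the underlying model is truly additive is an assumption here; the panel only states that grant probability is "derived from career allow rate".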
