Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendments
The amendment and response filed on November 20, 2025, to the Non-Final Office Action dated September 12, 2025, has been entered. Claims 1, 11, 14, 15, 18, 19, and 22 are currently amended; Claims 8, 10, 12, and 20-21 are cancelled. Claims 1-7, 9, 11, 13-19, and 22-25 are pending in the present application, including independent claims 1 and 22.
Response to Arguments
Applicant’s arguments and amendments, see pages 11-13, filed November 20, 2025, with respect to the 35 U.S.C. § 103 rejection based on Koichi SUZUKI (US-20220390251-A1), Yasuda et al. (US-20190389368-A1), and Zhang et al. (US-20200174472-A1) have been considered but are not persuasive. The 35 U.S.C. § 103 rejection of claims 1-7, 9, 11, 13-19, and 22-25 is maintained for the reasons explained below.
Independent claims 1 and 22 have been amended to incorporate a limitation of claim 11 to recite that "projecting the to-be-projected pattern in real time during an operating process, and acquiring an obstacle region existing on the road surface during the operating process; and detecting whether there exists an overlap region between the to-be-projected pattern and the obstacle region, adjusting the to-be-projected pattern according to the overlap region when there exists the overlap region between the to-be-projected pattern and the obstacle region, such that there is no overlap region between the to-be-projected pattern and the obstacle region". See Remarks at Page 11.
As noted by Applicant, Remarks at Page 12, the difference between the claimed invention and the applied reference of Suzuki (cited above) lies in the response to an overlap region: in Suzuki, “the projection of the to-be-projected pattern is stopped”; in contrast, in the claimed invention the “projected pattern is adjusted according to the overlap region before being projected, rather than stopping projection.”
An adjustment in the current application, see Para. [0264] of U.S. Patent Publication US-2024-0308058-A1, is defined as a projection where “there is no overlap region between the to-be-projected pattern and the obstacle region.” Because stopping the projection, as taught by Suzuki, would achieve the same result of ensuring that there is no overlap region, the rejection of the claims based on the applied prior art is maintained.
Claim Rejections -- 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 4-7, 9, 11, 13-19, and 22-25 are rejected under 35 U.S.C. 103 as being unpatentable over Koichi SUZUKI (US-20220390251-A1) (“Suzuki”) and Yasuda et al. (US-20190389368-A1) (“Yasuda”).
As per claim 1, Suzuki discloses an interaction method for a mobile robot, the mobile robot being provided with a projection device and an environmental perception sensor (Figure 10), the method comprising:
acquiring map data information of a space where the mobile robot is located (Suzuki at Para. [0030] discloses that map data is used in the course of navigation for the vehicle: “navigation-related information may be road map information, a planned course (path) of the vehicle, or the like. The controller predicts the course (path) of the vehicle (in particular, a change in the direction of travel of the vehicle, e.g., a right turn or left turn of the vehicle at, for example, an intersection) based at least on the navigation-related information.”) and real-time environmental perception data collected by the environmental perception sensor (Suzuki at Para. [0111] discloses a perception sensor to ascertain the presence of a pedestrian: “pedestrian sensor 33 is a sensor that detects pedestrians in the vicinity of the vehicle 1. To be specific, when it detects a person in front of the vehicle 1, it outputs information on the location of the person as sensor data.”);
acquiring target traveling path information of the mobile robot (Suzuki at Para. [0030] discloses using navigation-related data to chart a path: “Navigation-related information is information used to navigate the driver of a vehicle. The navigation-related information may be road map information, a planned course (path) of the vehicle, or the like. The controller predicts the course (path) of the vehicle (in particular, a change in the direction of travel of the vehicle, e.g., a right turn or left turn of the vehicle at, for example, an intersection) based at least on the navigation-related information.”), and determining a ground projection region according to the target traveling path information and the real-time indication information (Suzuki at Figure 3A, projector 20 and region 301, and Para. [0048] disclosing projection and targeting at a specific area to message a pedestrian: “projector 20 is configured to change the angle of irradiation with light, thereby changing the position where the image is projected within a predetermined area. FIG. 3B is a diagram for explaining the area (reference numeral 302) onto which the guide image can be projected. The projector 20 is configured to be able to adjust the pitch angle and yaw angle as the angle of irradiation with light, which enables an image to be projected onto any position on the XY plane.”);
acquiring a to-be-projected pattern, and determining a projection parameter corresponding to the to-be-projected pattern according to the to-be-projected pattern and the ground projection region, wherein the to-be-projected pattern is configured to indicate a traveling intention of the mobile robot (Suzuki at Para. [0049] discloses projecting the travelling intention of the vehicle to message a pedestrian thus mitigating collision:” at the time when the vehicle 1 enters an intersection, the projector 20 determines an arbitrary point in the intersection as a point onto which a guide image is to be projected, and projects the guide image onto the point. FIG. 3C is a schematic view of the positional relationship between the vehicle 1 and the road surface viewed from the vertical direction. The reference numeral 303 indicates the point onto which the guide image showing the course of the vehicle 1 is projected.”); and
controlling the projection device according to the projection parameter to project the to-be-projected pattern onto the ground projection region (Suzuki at Figure 10, process to add a message and project the message on a surface such as a road so it is visible to a pedestrian, and Para. [0053] discloses controlling the projection of a message onto a region: “controller 201 normally operates the projector 20 in the first mode, and upon reception of an instruction to project a guide image from the in-vehicle device 10, switches it to the second mode. In the second mode, the projection is controlled based on the data received from the in-vehicle device 10. Upon reception of an instruction from the in-vehicle device 10 to terminate the projection of the guide image, the controller 201 switches it to the first mode.”).
While Suzuki uses a pedestrian sensor (sensor 33 at Figure 9) and obstacle perception data in the projection controller (Figure 9), Suzuki does not explicitly disclose a mobile robot, or wherein the real-time environmental perception data comprises real-time obstacle information and real-time indication information for indicating a road condition around the mobile robot.
Yasuda, in the same field, discloses projecting an image onto a road surface, according to the behavior of a self-vehicle, for a pedestrian or the like in the proximity. See at least Figure 14.
In particular, Yasuda discloses that message projection is applicable to mobile robots (Yasuda at Para. [0152] discloses the various vehicles to which projection messaging is applicable: “It is needless to say that various changes can be made thereto within the scope not departing from the gist thereof. For example, although the embodiments have been described by taking the vehicle for example, a bicycle, a bike, a robot, a hovercraft, etc. may be adopted as long as they are mobiles each moved along a moving surface.”).
Additionally, Yasuda discloses wherein the real-time environmental perception data comprises real-time obstacle information (Yasuda at Para. [0082] discloses perception of objects around the vehicle:” the message image projecting system 10 may have a range sensor which measures a distance to an object around the vehicle 1. In that case, the movement information may be one including distance information of the object acquired by the range sensor.”) and real-time indication information for indicating a road condition around the mobile robot (Yasuda at Para. [0109] discloses adjusting characteristics of the message based on road conditions:” the semiconductor device 200 may store in advance, an adjustment signal of an image for each typical pattern of the road surface and adjust the message image by the adjustment signal stored in advance. That is, the message image projecting system 20 stores in advance, several patterns of typical adjustment signals in the external memory or the internal memory. The typical adjustment signals are adjustment signals generated considering in advance reflectivity due to road surface conditions of concrete, asphalt or stone pavements, on on-snow, and in fine and rainy weather, etc. for example.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the messaging method applicable to vehicles and robots taught in Yasuda in the information processor in Suzuki with a reasonable expectation of success, because this results in a robot being utilized to convey, to pedestrians walking on an adjacent surface, an operational status message and the future movement of the robot/vehicle, thereby conveying direction and speed to mitigate collisions between pedestrians and vehicles at intersections (see Yasuda Para. [0004]).
projecting the to-be-projected pattern in real time during an operating process, and acquiring an obstacle region existing on the road surface during the operating process (Suzuki at Para. [0122] discloses tracking a pedestrian while projecting a message:” the direction of travel of the pedestrian may also be determined by tracking changes in the pedestrian's location based on sensor data periodically acquired from the pedestrian sensor 33.”);
detecting whether there exists an overlap region between the to-be-projected pattern and the obstacle region, adjusting the to-be-projected pattern according to the overlap region when there exists the overlap region between the to-be-projected pattern and the obstacle region, such that there is no overlap region between the to-be-projected pattern and the obstacle region (Suzuki at Para. [0143] discloses stopping the projection if the image overlaps or is to be projected on an obstacle: “the projection controller 1012 may determine whether there is an obstacle to the projection of the guide image, and stop the projection of the guide image if there is an obstacle to the projection of the guide image.”).
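For purposes of illustrating the disputed limitation only (the adjustment recited in amended claims 1 and 22, not Suzuki's stop-projection response), the following is a minimal sketch of one way an overlap region could be detected and removed from a projected pattern; the mask representation and function names are hypothetical and do not appear in the application or the applied art:

```python
# Illustrative sketch of the claimed overlap-adjustment limitation
# (hypothetical names; not code from Suzuki or the application).
import numpy as np

def adjust_pattern(pattern_mask: np.ndarray, obstacle_mask: np.ndarray) -> np.ndarray:
    """Return a projected-pattern mask with any overlap region removed.

    Both inputs are boolean occupancy grids over the ground projection
    region; True marks lit pixels and obstacle pixels, respectively.
    """
    overlap = pattern_mask & obstacle_mask   # detect the overlap region
    if not overlap.any():
        return pattern_mask                  # nothing to adjust
    # Adjust the pattern so no lit pixel falls inside the obstacle region,
    # rather than stopping projection entirely (the disputed distinction).
    return pattern_mask & ~obstacle_mask

pattern = np.zeros((8, 8), dtype=bool); pattern[2:6, 2:6] = True
obstacle = np.zeros((8, 8), dtype=bool); obstacle[4:8, 4:8] = True
adjusted = adjust_pattern(pattern, obstacle)
assert not (adjusted & obstacle).any()       # no overlap region remains
```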
As per claim 4, Suzuki and Yasuda disclose an interaction method according to claim 1, wherein the acquiring the target traveling path information of the mobile robot based on the real-time obstacle information and the map data information comprises:
determining a real-time position of the mobile robot and a position of the obstacle according to the map data information and the real-time obstacle information (Suzuki at Para. [0099] discloses determining the relative position between vehicle and pedestrian:” where a change in the direction of travel is predicted for a moving vehicle, the projector projects an image visually indicating the change onto a road surface. This makes it possible to efficiently convey information on the course of the vehicle to pedestrians and others located in the vicinity of the vehicle 1, as illustrated in FIG. 3B.”);
acquiring a target final position of the mobile robot, determining shortest path information from the real-time position to the target final position based on the real-time position and the position of the obstacle, and determining the shortest path information as the target traveling path information of the mobile robot (Suzuki at Para. [0103] discloses the use of a navigational system to travel which would require a destination and path such as shortest or longest to control the vehicle:” navigation unit 1013 provides a navigation function to the occupants of the vehicle. To be specific, it searches for and navigates courses to the destination based on the road map data 102B stored in the storage 102 and the positional information acquired by the GPS module 105. The navigation unit 1013 may be configured to be communicable with the GPS module 105. The navigation unit 1013 may also have a unit (such as a communication module) for acquiring traffic information from outside.”).
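As a non-limiting illustration of the claim 4 limitation, the following sketch computes shortest path information from a real-time position to a target final position over an occupancy grid using breadth-first search; neither the claim nor the applied art specifies a particular algorithm, so the grid model and names are assumptions:

```python
# Sketch of the claim-4 limitation: shortest path from the robot's
# real-time position to a target final position around known obstacles.
from collections import deque

def shortest_path(grid, start, goal):
    """grid[r][c] == 1 marks an obstacle cell; returns a list of cells."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct the route
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return []                                 # no route found

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(shortest_path(grid, (0, 0), (2, 0)))    # routes around the obstacle row
```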
As per claim 5, Suzuki and Yasuda disclose an interaction method according to claim 1, wherein the determining the projection parameter corresponding to the to-be-projected pattern according to the to-be-projected pattern and the ground projection region comprises:
for each pixel point in the to-be-projected pattern, determining a projection angle, projection time, and a projection color corresponding to each pixel point according to the ground projection region (Suzuki at Para. [0055] discloses a digital micromirror for projecting a pixel pattern to convey a message to a pedestrian:” DLP 203, controlling the tilt angle of each micro-mirror creates, on pixel basis, portions of the road surface that are irradiated with light and portions that are not.”);
determining the projection angle, the projection time, and the projection color corresponding to each pixel point as projection parameters of the to-be-projected pattern (Suzuki at Para. [0096] discloses determining a projection angle to project a pattern on a road surface:” the projection controller 1012 calculates the angle of irradiation with light based on the positional relationship between the vehicle 1 and the projection point that was determined in Step S14, and transmits the calculated angle of irradiation to the projector 20 (controller 201). The controller 201 controls the projection of light based on the received angle of irradiation.”).
As per claim 6, Suzuki and Yasuda disclose an interaction method according to claim 5, wherein the projection device comprises a galvanometer, a visible laser, and a lens, and the controlling the projection device according to the projection parameters to project the to-be-projected pattern onto the ground projection region (Suzuki at Figure 2, DLP 203, and Para. [0055] discloses a digital micromirror device such as a galvanometer to project a pattern:” digital light processing unit (DLP) 203 is a unit that performs digital light processing. The DLP 203 includes multiple micro-mirror devices (digital mirror devices) arranged in an array.”) comprises:
determining a rotation angle of the galvanometer corresponding to each pixel point according to the projection angle corresponding to each pixel point, determining laser emission information of the visible laser and laser synthesis information of the lens corresponding to each pixel point according to the projection color corresponding to each pixel point (Suzuki at Para. [0055] discloses controlling the rotation which under the broadest interpretation is a tilt of a mirror to create a pattern at desired spot on the road surface:” the DLP 203, controlling the tilt angle of each micro-mirror creates, on pixel basis, portions of the road surface that are irradiated with light and portions that are not. In addition, regarding the DLP 203, controlling the operation time of each mirror by pulse-width modulation (PWM) creates contrast between pixels.”);
determining a projection sequence of each pixel point according to the projection time corresponding to each pixel point (Suzuki at Para. [0050] determining a projection sequence for each pixel point:” calculating the angle of irradiation with light based on the positional relationship between the vehicle 1 and the projection point 304 and dynamically changing the angle of irradiation enables control so that the guide image can be kept being projected onto a predetermined point even if the vehicle 1 moves.”);
in accordance with the projection sequence of each pixel point, adjusting the projection device according to the rotation angle of the galvanometer, the laser emission information, and the laser synthesis information of the lens corresponding to each pixel point, to project the to-be-projected pattern onto the ground projection region (Suzuki at Para. [0053] discloses controlling the mirror and laser in the micromirror to project a message: “controller 201 normally operates the projector 20 in the first mode, and upon reception of an instruction to project a guide image from the in-vehicle device 10, switches it to the second mode. In the second mode, the projection is controlled based on the data received from the in-vehicle device 10. Upon reception of an instruction from the in-vehicle device 10 to terminate the projection of the guide image, the controller 201 switches it to the first mode.”).
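The following hypothetical sketch illustrates the per-pixel projection parameters recited in claims 5-6: a projection angle driving the galvanometer, a projection time ordering the pixels, and a projection color for the visible laser. The geometry and device interface are illustrative assumptions, not Suzuki's DLP implementation:

```python
# Hypothetical per-pixel projection parameters (claims 5-6). Geometry
# and parameter names are assumptions made for illustration only.
import math

def pixel_parameters(pixels, projector_height_m=1.0):
    """pixels: iterable of (x_m, y_m, t_s, (r, g, b)) ground-plane points."""
    params = []
    for x, y, t, color in pixels:
        pan = math.atan2(x, y)                                    # yaw toward pixel
        tilt = math.atan2(projector_height_m, math.hypot(x, y))   # downward pitch
        params.append({"pan": pan, "tilt": tilt, "time": t, "color": color})
    # Project in time order, emulating a scanning galvanometer sequence.
    return sorted(params, key=lambda p: p["time"])

pts = [(0.5, 2.0, 0.002, (0, 255, 0)), (0.4, 2.0, 0.001, (0, 255, 0))]
for p in pixel_parameters(pts):
    print(f"pan={p['pan']:.3f} rad, tilt={p['tilt']:.3f} rad, color={p['color']}")
```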
As per claim 7, Suzuki and Yasuda disclose an interaction method according to claim 1, further comprising:
before determining the ground projection region according to the target traveling path information and the real-time indication information, determining whether a preset projection condition is satisfied according to the target traveling path information and the real-time environmental perception data (Suzuki at Para. [0064] discloses that the controller determines when a vehicle is approaching a critical point like an intersection to ascertain whether a message needs to be projected:” the prediction unit 1011 uses the position information received from a GPS module 105 and the road map data recorded in a road map data 102B and determines, for example, that the vehicle is approaching an intersection. Furthermore, it predicts that the vehicle will make a right or left turn at the intersection based on the sensor data acquired from a blinker sensor 31 and a speed sensor 32 which will be described below.”);
wherein, correspondingly, the determining the ground projection region according to the target traveling path information (Suzuki at Para. [0067] discloses the area to project a message onto: “when it is predicted that the vehicle will make a right or left turn at an intersection or the like, the projection controller 1012 extracts a guide image suitable for that direction from the image data 102A and projects the guide image through the projector 20.”) comprises:
when a determination result indicates that the preset projection condition is satisfied, determining the ground projection region according to the target traveling path information (Suzuki at Figure 5 and Para. [0093] determining a projection point:” in Step S14, the projection controller 1012 determines the point onto which the guide image is to be projected (hereinafter referred to as “projection point”).”);
wherein the preset projection condition comprises at least one of (Suzuki at Figure 5, see conditions S12 and S16):
a traveling direction of the mobile robot changes within a preset time period in the future, a traveling state of the mobile robot is a paused state, there exists a pedestrian around the mobile robot, or the mobile robot is currently in an operating state (Suzuki at Para. [0099] discloses using a change in direction such as turning left or right:” vehicle system according to the first embodiment, in the case where a change in the direction of travel is predicted for a moving vehicle, the projector projects an image visually indicating the change onto a road surface.”).
As per claim 9, Suzuki and Yasuda disclose an interaction method according to claim 7, wherein when the preset projection condition is that the mobile robot is currently in the operating state, the acquiring the to-be-projected pattern comprises:
determining whether the pattern currently projected by the mobile robot is capable of reflecting the traveling intention of the mobile robot according to the target traveling path information (Suzuki at Para. [0097] discloses making adjustments to the projection based on changes: “the vehicle 1 starts operating for a right or left turn within an intersection, the direction of the guide image may change along with the direction of the vehicle body as illustrated in FIG. 7A. To prevent this, the projection controller 1012 may detect the direction of the vehicle body and make corrections by rotating the guide image based on the results of the detection.”);
when the pattern currently projected by the mobile robot is capable of reflecting the traveling intention of the mobile robot, determining the pattern currently projected by the mobile robot as the to-be-projected pattern (Suzuki at Para. [0141] projecting a traveling intention to pedestrians and other vehicles:” when the vehicle 1 is crossing an intersection, the vehicle 1 can alert vehicles traveling into the intersection by projecting its course (reference numeral 1301) on the road surface. In this case, depending on the operational status of the blinker of the vehicle 1, the guide images corresponding to “turning left,” “going straight,” and “turning right” may be projected on the road surface.”);
when the pattern currently projected by the mobile robot is incapable of reflecting the traveling intention of the mobile robot, generating the to-be-projected pattern according to the traveling intention of the mobile robot (Suzuki at Para. [0125] discloses projecting the intention of the vehicle so as to be recognizable by the pedestrian: “FIG. 11B illustrates an example case where a message image stating that the vehicle 1 will pause is added to the guide image being projected. The message is oriented parallel to the direction of travel of the pedestrian. This configuration makes it possible for pedestrians to recognize the message regardless of their directions of travel (directions in which they cross).”; see also Suzuki at Para. [0031], disclosing projection of a message based on the direction of travel: “controller also projects a first image related to the predicted course onto the road surface located in front of the vehicle. The first image is typically an image that visually indicates the course (direction of travel) of the vehicle, such as an arrow. The image may also include text and icons.”);
wherein the determining whether the pattern currently projected by the mobile robot is capable of reflecting the traveling intention of the mobile robot according to the target traveling path information comprises (Suzuki at Para. [0088] discloses predicting the travel path to project the correct message:” Although whether or not the vehicle 1 will make a right or left turn is predicted in Step S11 in this example, the target to be predicted is not limited to right or left turns as long as it involves a course change.”):
when the real-time obstacle information indicates that the obstacle around the mobile robot is a movable obstacle, determining whether the pattern currently projected by the mobile robot is capable of reflecting the traveling intention of the mobile robot according to the target traveling path information (Suzuki at Para. [0119] discloses tailoring the projected message using the movement of the pedestrian (obstacle):” pedestrian sensor 33 has detected a crossing pedestrian. [0120] For example, if the detected pedestrian is located in the roadway or is traveling from the sidewalk toward the roadway, the pedestrian can be determined as a crossing pedestrian.”).
As per claim 11, Suzuki and Yasuda disclose an interaction method according to claim 1, wherein the acquiring the obstacle region existing on the road surface during the operating process (Suzuki at Para. [0122] discloses using pedestrian sensor 33 for tracking: “tracking changes in the pedestrian's location based on sensor data periodically acquired from the pedestrian sensor 33.”) comprises:
collecting obstacle information in real time during the operating process, and mapping pixel information corresponding to the obstacle information into a preset projection pattern (Suzuki at Para. [0131] discloses collecting information in real time:” although the sensor data acquired from the blinker sensor 31 and the pedestrian sensor 33 are used to detect pedestrians crossing the course of the vehicle 1 in this embodiment, the presence or absence of pedestrians crossing the course of the vehicle 1 may be detected also by using other sensors.”);
determining a region with a minimum area comprising all the pixel information from the projection region, and recording the region with the minimum area as the obstacle region (Suzuki at Para. [0048] discloses an area where the image is projected onto:” projector 20 is configured to change the angle of irradiation with light, thereby changing the position where the image is projected within a predetermined area. FIG. 3B is a diagram for explaining the area (reference numeral 302) onto which the guide image can be projected.”).
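As one illustrative reading of the claim 11 limitation, the sketch below maps obstacle pixel information into the projection pattern and records the smallest region containing all of it; the claim does not fix the region's shape, so the axis-aligned bounding-box choice here is an assumption:

```python
# Sketch of the claim-11 limitation: record the smallest region of the
# projection pattern that contains every obstacle pixel. A bounding box
# is one simple reading of "region with a minimum area".
def obstacle_region(obstacle_pixels):
    """obstacle_pixels: iterable of (row, col) indices mapped into the
    projection pattern; returns (row_min, col_min, row_max, col_max)."""
    rows = [r for r, _ in obstacle_pixels]
    cols = [c for _, c in obstacle_pixels]
    return min(rows), min(cols), max(rows), max(cols)

print(obstacle_region([(3, 4), (5, 9), (4, 6)]))  # -> (3, 4, 5, 9)
```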
As per claim 13, Suzuki and Yasuda disclose an interaction method according to claim 11, wherein the to-be-projected pattern comprises an initial to-be-projected pattern and different magnified to-be-projected patterns generated at different moments and at different magnification scales, and the projecting the to-be-projected pattern in real time during the operating process (Suzuki at Figure 11B, message image and guide image, and Para. [0125] discloses projecting different patterns with individual scales to inform a pedestrian:” FIG. 11B illustrates an example case where a message image stating that the vehicle 1 will pause is added to the guide image being projected. The message is oriented parallel to the direction of travel of the pedestrian. This configuration makes it possible for pedestrians to recognize the message regardless of their directions of travel (directions in which they cross).”) comprises:
projecting the initial to-be-projected pattern, and arranging and projecting the magnified to-be-projected patterns generated and the initial to-be-projected pattern at different moments (Suzuki at Paras. [0108]-[0109] discloses the projection of the guide and message images at different moments:” when the vehicle 1 makes a right or left turn, the guide image is projected onto a road surface. In contrast, the second embodiment detects the presence or absence of a pedestrian crossing the course of the vehicle 1 and outputs an image containing a message for the pedestrian (hereinafter referred to as “message image” and corresponding to the second image in the present disclosure) simultaneously with the guide image. [0109] The message image is an image for conveying the intentions of the driver of the vehicle 1, such as “the vehicle is pausing,” “giving way to pedestrians,” or “giving way” to pedestrians and others. The message image does not necessarily have to contain text, as long as it can convey the driver's intention.”).
As per claim 14, Suzuki and Yasuda disclose an interaction method according to claim 11, wherein the to-be-projected pattern comprises at least one of the initial to-be-projected pattern and the magnified to-be-projected patterns, the magnified to-be-projected patterns are obtained by magnifying the initial to-be-projected pattern according to a preset magnification scale, and the projecting the to-be-projected pattern in real time during the operating process (Suzuki at Paras. [0108]-[0109]) comprises:
projecting the at least one of the initial to-be-projected pattern and the magnified to-be-projected patterns in the operating process (Suzuki at Para. [0037] discloses projecting messages, see Figure 11B, on a surface: “Digital light processing is technology for irradiating with light on pixel basis by controlling multiple micro-mirrors. The projector 20 functions as a front light of the vehicle 1 and also has the function of projecting any image onto a road surface. The projector 20 is also called “adaptive headlight unit”.”).
As per claim 15, Suzuki and Yasuda disclose an interaction method according to claim 13, wherein the adjusting the to-be-projected pattern according to the overlap region comprises:
determining two curve intersection points of an overlap to-be-projected pattern and the obstacle region in the overlap region, wherein the overlap to-be-projected pattern refers to the initial to-be-projected pattern or the magnified to-be-projected pattern (Suzuki at Figure 11A, curved arrow in front of obstacle/pedestrian crossing, and Para. [0124] disclosing how the images are curved to ensure that the pedestrian sees the embedded message: “FIG. 11A illustrates an example case where a message image to encourage the pedestrian to cross is added to the guide image being projected. The message image is oriented to face the pedestrian. With this configuration, pedestrians trying to cross can easily recognize the message.”);
removing a line segment between the two curve intersection points in the overlap to-be-projected pattern, and obtaining two remaining curve segments in the overlap to-be-projected pattern after the removing of the line segment (Suzuki at Figure 11A, guide and message images shown with line segments; additionally, in Figure 11A the segments are removed and the images are combined to form one unitary message);
determining a mid-perpendicular intersection point corresponding to a connecting line between the two curve intersection points (Suzuki at Figures 3C-3D, and Para. [0050] discloses using the angles to combine the lights to form a pattern:” calculating the angle of irradiation with light based on the positional relationship between the vehicle 1 and the projection point 304 and dynamically changing the angle of irradiation enables control so that the guide image can be kept being projected onto a predetermined point even if the vehicle 1 moves.”);
detecting a vertical distance between the mid-perpendicular intersection point and a boundary intersection point, and comparing the vertical distance to a preset distance threshold value, wherein the boundary intersection point refers to an intersection point of the mid-perpendicular and an edge of the obstacle region, and the boundary intersection point is located within the curve overlap region (Suzuki at Para. [0050] disclosing that the pattern is maintained at a fixed point while the vehicle moves: “if the projector 20 is capable of projecting an image up to 30 meters away, it can start projecting the guide image at the time when the vehicle 1 approaches up to 30 meters in front of the intersection, and continue projecting the guide image onto the same point until the vehicle 1 passes.”);
when the vertical distance is less than or equal to the preset distance threshold value, adjusting the to-be-projected pattern according to the two remaining curve segments, the curve intersection points, and the boundary intersection point, to obtain the adjusted to-be-projected pattern, wherein there is no overlap region between the adjusted to-be-projected pattern and the obstacle region (Suzuki at Para. [0049] discloses dynamically changing the angle of irradiation to maintain the pattern at a fixed position:” at the time when the vehicle 1 enters an intersection, the projector 20 determines an arbitrary point in the intersection as a point onto which a guide image is to be projected, and projects the guide image onto the point. FIG. 3C is a schematic view of the positional relationship between the vehicle 1 and the road surface viewed from the vertical direction. The reference numeral 303 indicates the point onto which the guide image showing the course of the vehicle 1 is projected.”).
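To illustrate the geometry recited in claim 15 (two curve intersection points, the mid-perpendicular intersection point, and the vertical-distance comparison), the following sketch is offered under the assumption of straight-segment reconnection; the coordinates and threshold are invented for illustration and are not drawn from the application or the applied art:

```python
# Geometric sketch of the claim-15 adjustment: take the two points where
# the pattern's curve crosses the obstacle boundary, drop the segment
# between them, and, if the obstacle protrudes less than a threshold,
# reroute the curve through the boundary intersection point on the
# perpendicular bisector. Requires Python 3.8+ for math.dist.
import math

def adjust_curve(p1, p2, boundary_pt, threshold_m=0.3):
    mid = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)   # mid-perpendicular foot
    vertical = math.dist(mid, boundary_pt)              # protrusion depth
    if vertical > threshold_m:
        return None          # per claim 18: stop projecting instead
    # Connect both intersection points to the boundary point (a preset
    # connection mode of straight segments), detouring around the obstacle.
    return [p1, boundary_pt, p2]

print(adjust_curve((0.0, 0.0), (1.0, 0.0), (0.5, 0.2)))   # detour kept
print(adjust_curve((0.0, 0.0), (1.0, 0.0), (0.5, 0.9)))   # None: stop
```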
As per claim 16, Suzuki and Yasuda disclose an interaction method according to claim 14, wherein the adjusting the to-be-projected pattern according to the overlap region comprises:
recording the initial to-be-projected pattern or the magnified to-be-projected pattern having the overlap region with the obstacle region as the overlap to-be-projected pattern, wherein the overlap to-be-projected pattern comprises the overlap region overlapping with the obstacle region and a remaining region which does not overlap with the obstacle region (Suzuki at Figure 4, stored patterns, and Para. [0067] disclosing that a pattern is extracted based on the vehicle maneuver indicating that the patterns are recorded/stored for later use:” when it is predicted that the vehicle will make a right or left turn at an intersection or the like, the projection controller 1012 extracts a guide image suitable for that direction from the image data 102A and projects the guide image through the projector 20. The specific processing will be explained below.”);
removing the overlap region of the overlap to-be-projected pattern, or reducing the overlap to-be-projected pattern according to a preset scale to allow the overlap to-be-projected pattern to be tangent to the edge of the obstacle region, to obtain the adjusted to-be-projected pattern (Suzuki at Figure 3C, pattern 303 is tangent to the crossing pedestrian, and Para. [0055] discloses adjustment of each mirror to project a pattern that is tangent and perpendicular to the path of the pedestrian: “Regarding the DLP 203, controlling the tilt angle of each micro-mirror creates, on pixel basis, portions of the road surface that are irradiated with light and portions that are not. In addition, regarding the DLP 203, controlling the operation time of each mirror by pulse-width modulation (PWM) creates contrast between pixels. In other words, the DLP 203 functions as a display device that modulates light to produce images.”).
As per claim 17, Suzuki and Yasuda disclose an interaction method according to claim 15, wherein the adjusting the to-be-projected pattern according to the two remaining curve segments, the curve intersection points, and the boundary intersection point to obtain the adjusted to-be-projected pattern comprises:
connecting the two curve intersection points to the boundary intersection point through a preset connection mode to obtain a connecting line segment (Suzuki at Figures 12A-12B and Para. [0037] discloses that controlling the micro-mirrors can cause a pattern to be displayed that can be straight or curved to convey the actions of the vehicle:” Digital light processing is technology for irradiating with light on pixel basis by controlling multiple micro-mirrors. The projector 20 functions as a front light of the vehicle 1 and also has the function of projecting any image onto a road surface. The projector 20 is also called “adaptive headlight unit”.”);
recording the to-be-projected pattern formed by connecting the two remaining curve segments with the connecting line segment as the adjusted to-be-projected pattern (Suzuki at Figure 4, stored patterns, and Para. [0067] disclosing that a pattern is extracted based on the vehicle maneuver indicating that the patterns are recorded/stored for later use:” when it is predicted that the vehicle will make a right or left turn at an intersection or the like, the projection controller 1012 extracts a guide image suitable for that direction from the image data 102A and projects the guide image through the projector 20. The specific processing will be explained below.”).
As per claim 18, Suzuki and Yasuda disclose an interaction method according to claim 15, further comprising:
after comparing the vertical distance to the preset distance threshold value, stopping the projection of the to-be-projected pattern when the vertical distance is greater than the preset distance threshold value (Suzuki at Para. [0050] discloses a preset distance (30 meters), with the projection maintained within that distance and terminated once the vehicle passes: “if the projector 20 is capable of projecting an image up to 30 meters away, it can start projecting the guide image at the time when the vehicle 1 approaches up to 30 meters in front of the intersection, and continue projecting the guide image onto the same point until the vehicle 1 passes.”).
As per claim 19, Suzuki and Yasuda disclose an interaction method according to claim 11, further comprising:
after adjusting the to-be-projected pattern according to the overlap region, acquiring current position information of the robot, and determining a position distance between the robot and the obstacle region according to the current position information (Suzuki at Para. [0111] discloses acquiring information as to the position of a pedestrian relative to the vehicle: “pedestrian sensor 33 is a sensor that detects pedestrians in the vicinity of the vehicle 1. To be specific, when it detects a person in front of the vehicle 1, it outputs information on the location of the person as sensor data. The pedestrian sensor 33 may be an image sensor, a stereo camera, or the like, for example. The objects to be detected by the pedestrian sensor 33 may include light vehicles such as bicycles.”);
determining a color parameter of the adjusted to-be-projected pattern according to the position distance, and projecting the adjusted to-be-projected pattern according to the color parameter (Suzuki at Para. [0070] discloses that various images are contemplated, including color images: “multiple images that differ depending on the course taken by the vehicle 1, such as “turning right,” “turning left,” “going in a right diagonal direction,” “going in a left diagonal direction,” or the like, are stored. The images may be binary images, grayscale images, color images, and the like.”).
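A minimal sketch of the claim 19 limitation follows, assuming invented distance bands and colors; the claim recites only that a color parameter is determined according to the position distance, so the specific mapping below is hypothetical:

```python
# Hypothetical claim-19 mapping from robot-to-obstacle distance to the
# color parameter of the adjusted to-be-projected pattern. The bands
# and RGB values are invented for illustration only.
def color_for_distance(distance_m: float) -> tuple:
    if distance_m < 1.0:
        return (255, 0, 0)      # near: red, strongest warning
    if distance_m < 3.0:
        return (255, 255, 0)    # mid: yellow
    return (0, 255, 0)          # far: green

print(color_for_distance(0.8), color_for_distance(2.0), color_for_distance(5.0))
```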
As per claim 22, Suzuki discloses a mobile robot, comprising a projection device (Figure 1, projector 20), an environmental perception sensor, and a processor (Figure 2, controller 1011),
the processor is configured to acquire map data information of a space where the mobile robot is located (Suzuki at Para. [0030] discloses that map data is used in the course of navigation for the vehicle: “navigation-related information may be road map information, a planned course (path) of the vehicle, or the like. The controller predicts the course (path) of the vehicle (in particular, a change in the direction of travel of the vehicle, e.g., a right turn or left turn of the vehicle at, for example, an intersection) based at least on the navigation-related information.”),
acquire target traveling path information of the mobile robot (Suzuki at Para. [0030] discloses using navigation-related data to chart a path: “Navigation-related information is information used to navigate the driver of a vehicle. The navigation-related information may be road map information, a planned course (path) of the vehicle, or the like. The controller predicts the course (path) of the vehicle (in particular, a change in the direction of travel of the vehicle, e.g., a right turn or left turn of the vehicle at, for example, an intersection) based at least on the navigation-related information.”), determine a ground projection region according to the target traveling path information and the real-time indication information (Suzuki at Figure 3A, projector 20 and region 301, and Para. [0048] disclosing projection and targeting at a specific area to message a pedestrian: “projector 20 is configured to change the angle of irradiation with light, thereby changing the position where the image is projected within a predetermined area. FIG. 3B is a diagram for explaining the area (reference numeral 302) onto which the guide image can be projected. The projector 20 is configured to be able to adjust the pitch angle and yaw angle as the angle of irradiation with light, which enables an image to be projected onto any position on the XY plane.”),
acquire a to-be-projected pattern (Suzuki at Figure 4 showing examples of possible patterns.), and
determine a projection parameter corresponding to the to-be-projected pattern according to the to-be-projected pattern and the ground projection region, the to-be-projected pattern being configured to indicate a traveling intention of the mobile robot (Suzuki at Para. [0049] discloses projecting the travelling intention of the vehicle to message a pedestrian thus mitigating collision:” at the time when the vehicle 1 enters an intersection, the projector 20 determines an arbitrary point in the intersection as a point onto which a guide image is to be projected, and projects the guide image onto the point. FIG. 3C is a schematic view of the positional relationship between the vehicle 1 and the road surface viewed from the vertical direction. The reference numeral 303 indicates the point onto which the guide image showing the course of the vehicle 1 is projected.”), and
control the projection device according to the projection parameter to project the to-be-projected pattern onto the ground projection region (Suzuki at Figure 10, process to add a message and project the message on a surface such as a road so is visible to a pedestrian, and Para. [0053] discloses controlling the projection of a message onto a region:” controller 201 normally operates the projector 20 in the first mode, and upon reception of an instruction to project a guide image from the in-vehicle device 10, switches it to the second mode. In the second mode, the projection is controlled based on the data received from the in-vehicle device 10. Upon reception of an instruction from the in-vehicle device 10 to terminate the projection of the guide image, the controller 201 switches it to the first mode.”);
the projection device is configured to project the to-be-projected pattern onto the ground projection region (Suzuki at Para. [0049] discloses projecting the travelling intention of the vehicle to message a pedestrian thus mitigating collision:” at the time when the vehicle 1 enters an intersection, the projector 20 determines an arbitrary point in the intersection as a point onto which a guide image is to be projected, and projects the guide image onto the point. FIG. 3C is a schematic view of the positional relationship between the vehicle 1 and the road surface viewed from the vertical direction. The reference numeral 303 indicates the point onto which the guide image showing the course of the vehicle 1 is projected.”).
While Suzuki uses a pedestrian sensor (sensor 33 at Figure 9) and obstacle perception data in the projection controller (Figure 9), Suzuki does not explicitly disclose a mobile robot, or wherein the real-time environmental perception data comprises real-time obstacle information and real-time indication information for indicating a road condition around the mobile robot.
Yasuda, in the same field, discloses projecting an image onto a road surface, according to the behavior of a self-vehicle, for a pedestrian or the like in the proximity. See at least Figure 14.
In particular, Yasuda discloses that message projection is applicable to mobile robots (Yasuda at Para. [0152] discloses the various vehicles to which projection messaging is applicable: “It is needless to say that various changes can be made thereto within the scope not departing from the gist thereof. For example, although the embodiments have been described by taking the vehicle for example, a bicycle, a bike, a robot, a hovercraft, etc. may be adopted as long as they are mobiles each moved along a moving surface.”).
Additionally, Yasuda discloses wherein the real-time environmental perception data comprises real-time obstacle information (Yasuda at Para. [0082] discloses perception of objects around the vehicle:” the message image projecting system 10 may have a range sensor which measures a distance to an object around the vehicle 1. In that case, the movement information may be one including distance information of the object acquired by the range sensor.”) and real-time indication information for indicating a road condition around the mobile robot (Yasuda at Para. [0109] discloses adjusting characteristics of the message based on road conditions:” the semiconductor device 200 may store in advance, an adjustment signal of an image for each typical pattern of the road surface and adjust the message image by the adjustment signal stored in advance. That is, the message image projecting system 20 stores in advance, several patterns of typical adjustment signals in the external memory or the internal memory. The typical adjustment signals are adjustment signals generated considering in advance reflectivity due to road surface conditions of concrete, asphalt or stone pavements, on on-snow, and in fine and rainy weather, etc. for example.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the messaging method applicable to vehicles and robots taught in Yasuda in the information processor in Suzuki with a reasonable expectation of success, because this results in a robot being utilized to convey, to pedestrians walking on an adjacent surface, an operational status message and the future movement of the robot/vehicle, thereby conveying direction and speed to mitigate collisions between pedestrians and vehicles at intersections (see Yasuda Para. [0004]).
the processor is further configured to project the to-be-projected pattern in real time during an operating process, acquire an obstacle region existing on the road surface during the operating process (Suzuki at Para. [0122] discloses tracking a pedestrian while projecting a message:” the direction of travel of the pedestrian may also be determined by tracking changes in the pedestrian's location based on sensor data periodically acquired from the pedestrian sensor 33.”), detect whether there exists an overlap region between the to-be-projected pattern and the obstacle region, and adjust the to-be-projected pattern according to the overlap region when there exists the overlap region between the to-be-projected pattern and the obstacle region, such that there is no overlap region between the to-be-projected pattern and the obstacle region (Suzuki at Para. [0143] discloses stopping the projection if overlaps or is to be projected on an obstacle:” the projection controller 1012 may determine whether there is an obstacle to the projection of the guide image, and stop the projection of the guide image if there is an obstacle to the projection of the guide image.”).
As per claim 23, Suzuki and Yasuda disclose a mobile robot according to claim 22, wherein the processor is further configured to:
determine whether a preset projection condition is satisfied according to the target traveling path information and the real-time environmental perception data, wherein the preset projection condition at least comprises one (Suzuki at Para. [0082] discloses a change in direction as a preset condition:” the prediction unit 1011 predicts whether or not the course of the vehicle will change within a predetermined period of time. In this embodiment, a right or left turn is illustrated as a course change.”) of:
a traveling direction of the mobile robot changes within the preset time period in the future, a traveling state of the mobile robot is a paused state, there exists a pedestrian around the mobile robot, and the mobile robot is currently in an operating state (Suzuki at Figures 11A-12B showing various conditions for generating a pattern and Para. [0114] discloses:” the projection controller 1012 determines the presence of a pedestrian crossing the course of the vehicle based on the sensor data acquired from the pedestrian sensor 33. In this step, a positive determination is made when all of the following conditions are met.”);
determine the ground projection region according to the target traveling path information when a determination result indicates that the preset projection condition is satisfied (Suzuki at Figure 4 and Para. [0067] disclosing that a pattern is extracted based on the vehicle maneuver indicating that the patterns are recorded/stored for later use:” when it is predicted that the vehicle will make a right or left turn at an intersection or the like, the projection controller 1012 extracts a guide image suitable for that direction from the image data 102A and projects the guide image through the projector 20. The specific processing will be explained below.”).
As per claim 24, Suzuki and Yasuda disclose a mobile robot according to claim 23, wherein the processor is further configured to:
determine whether a pattern currently projected by the mobile robot is capable of reflecting the traveling intention of the mobile robot according to the target traveling path information when the preset projection condition is that the mobile robot is currently in the operating state (Suzuki at Para. [0077] disclosing the operating state as a basis for projecting a pattern: “blinker sensor 31 is a sensor that outputs the operational status (e.g., “left,” “right,” or “off”) of the blinkers of the vehicle 1.”);
determine the pattern currently projected by the mobile robot as the to-be-projected pattern when the pattern currently projected by the mobile robot is capable of reflecting the traveling intention of the mobile robot (Suzuki at Para. [0141] projecting a traveling intention to pedestrians and other vehicles:” when the vehicle 1 is crossing an intersection, the vehicle 1 can alert vehicles traveling into the intersection by projecting its course (reference numeral 1301) on the road surface. In this case, depending on the operational status of the blinker of the vehicle 1, the guide images corresponding to “turning left,” “going straight,” and “turning right” may be projected on the road surface.”);
generate the to-be-projected pattern according to the traveling intention of the mobile robot when the pattern currently projected by the mobile robot is incapable of reflecting the traveling intention of the mobile robot (Suzuki at Para. [0143] when another vehicle is in front of the projection unit it makes it incapable of conveying the traveling intent so the pattern is not projected even after it is generated:” the projection controller 1012 may determine whether there is an obstacle to the projection of the guide image, and stop the projection of the guide image if there is an obstacle to the projection of the guide image. For example, if there is a vehicle in front of the vehicle 1 and the guide image cannot be projected, the projection of the guide image may be temporarily stopped.”).
As per claim 25, Suzuki and Yasuda disclose a computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, causes the processor to implement the steps of the interaction method of claim 1 (See above rejection of claim 1.).
Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Suzuki and Yasuda as applied to claim 1 above, and further in view of Zhang et al. (US-20200174472-A1) (“Zhang”).
As per claim 2, Suzuki and Yasuda disclose an interaction method according to claim 1. Suzuki and Yasuda do not explicitly disclose incorporating historical perception data to create a map of the environment.
Zhang, in the field of controlling autonomous vehicles, discloses a system to predict the behavior of environmental objects using machine learning trained with historical environment perception data. See Abstract and Figure 3A.
In particular, Zhang discloses, before acquiring the map data information of the space where the mobile robot is located and the real-time environmental perception data collected by the environmental perception sensor, acquiring historical environmental perception data collected by the environmental perception sensor when an environment of the space where the mobile robot is located satisfies a preset environmental condition (Zhang at Para. [0065] discloses acquiring historical data about objects: “Historical features of objects 602 in the perception area detected by the ADV are fed into a first neural network 604. The objects may comprise automobiles, bicycles, pedestrians, etc. The historical features of objects may comprise but are not limited to: a location (e.g., coordinates), a speed (magnitude and direction), an acceleration (magnitude and direction), etc. in a number of previous planning cycles (e.g., 10 previous planning cycles).”);
Further, Zhang discloses determining spatial coordinate information of the space where the mobile robot is located according to the historical environmental perception data, and creating a map of the space according to the spatial coordinate information (Zhang at Para. [0069] discloses using historical data to augment spatial information such as position of an object:” encoded data 612 can be concatenated with historical features of the ADV 614 (e.g., a position, a speed, an acceleration) from a number of previous planning cycles (e.g., 10 previous planning cycles), before being fed into a third neural network 616.”);
Zhang, in particular, further discloses determining data information of the map as the map data information (Zhang at Para. [0068] discloses using historical data to augment map information:” the object components 606 and the map information components 608 may be labeled based on the grid subdivision. In other words, the extracted object historical features and the map information (lanes, traffic signals, static objects, etc.) may be labeled with the blocks with which they are associated. Thus, individual components of the input to the second neural network 610, which comprise object components 606 and map information components 608, as described above, may be visualized as stacked layers that are aligned with each other based on the grid.”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the information processor as taught by Suzuki, as modified by Yasuda, with the neural network trained with historical data in autonomous driving as taught by Zhang, with a reasonable expectation of success, in order to include historical perception data in one or more of the process steps. The teaching, suggestion, or motivation to combine is that by including historical data, system responsiveness can be improved, as taught by Zhang in Paras. [0087]-[0090].
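As a non-limiting illustration of the claim 2 limitations (acquiring historical environmental perception data, determining spatial coordinate information, and creating a map of the space), the sketch below accumulates historical sensor points into an occupancy grid; the grid parameters and threshold are assumptions for illustration, not Zhang's neural-network approach:

```python
# Sketch of map creation from historical environmental perception data.
# An occupancy count grid is a common, simple formulation; all names
# and parameters here are illustrative assumptions.
import numpy as np

def build_map(historical_scans, size=(100, 100), resolution_m=0.1):
    """historical_scans: iterable of lists of (x_m, y_m) obstacle points
    collected by the perception sensor over past sessions."""
    grid = np.zeros(size, dtype=np.int32)
    for scan in historical_scans:
        for x, y in scan:
            r, c = int(y / resolution_m), int(x / resolution_m)
            if 0 <= r < size[0] and 0 <= c < size[1]:
                grid[r, c] += 1              # accumulate occupancy evidence
    return grid >= 2                         # keep cells observed repeatedly

scans = [[(1.0, 2.0)], [(1.0, 2.0), (3.0, 4.0)]]
occupancy = build_map(scans)
print(occupancy[20, 10])   # True: (1.0 m, 2.0 m) observed in both scans
```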
As per claim 3, Suzuki, Yasuda, and Zhang disclose an interaction method according to claim 2, wherein the environmental perception sensor comprises a radar device and a camera device, and the acquiring the real-time environmental perception data collected by the environmental perception sensor (Zhang at Para. [0028] discloses that perception systems generally include cameras, radar, and the like: “sensor system 115 includes, but it is not limited to, one or more cameras 211, global positioning system (GPS) unit 212, inertial measurement unit (IMU) 213, radar unit 214, and a light detection and range (LIDAR) unit 215.”) comprises:
acquiring real-time distance information between an obstacle and the mobile robot collected by the radar device (Zhang at Para. [0046] discloses acquiring distance information:” Based on a decision for each of the objects perceived, planning module 305 plans a path or route for the autonomous vehicle, as well as driving parameters (e.g., distance, speed, and/or turning angle), using a reference line provided by routing module 307 as a basis.”);
acquiring real-time obstacle identification information, road surface shape information of the road surface around the mobile robot, and real-time obstacle distribution information of the road surface around the mobile robot collected by the camera device (Zhang at Para. [0077] discloses the use of a grid pattern to locate objects with distribution of objects within the grid:” the extracted historical features of the one or more objects and the map information are labeled with associated block information based on a grid subdivision of a rectangular perception area of the ADV, the grid subdivision comprising subdividing the rectangular perception area of the ADV into a plurality of uniformly sized rectangular blocks based on a grid.”);
determining the real-time obstacle identification information and the real-time distance information as the real-time obstacle information, and determining the road surface shape information and the real-time obstacle distribution information as the real-time indication information (Zhang at Para. [0041] road shape and distribution objects are used to locate objects within a frame:” perception can include the lane configuration, traffic light signals, a relative position of another vehicle, a pedestrian, a building, crosswalk, or other traffic related signs (e.g., stop signs, yield signs), etc., for example, in a form of an object. The lane configuration includes information describing a lane or lanes, such as, for example, a shape of the lane (e.g., straight or curvature), a width of the lane, how many lanes in a road, one-way or two-way lane, merging or splitting lanes, exiting lane, etc.”).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ELLIS B. RAMIREZ whose telephone number is (571) 272-8920. The examiner can normally be reached 7:30 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramon Mercado can be reached at 571-270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ELLIS B. RAMIREZ/Examiner, Art Unit 3658