DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8 are rejected under 35 U.S.C. 103 as being unpatentable over UENO et al. (WO 2020235518 A1), referred to herein as UENO (from IDS; US 20220262236 A1 is cited as the English translation of UENO), in view of KUBOTA et al. (US 20200012097 A1), referred to herein as KUBOTA.
Regarding Claim 1, UENO in view of KUBOTA teaches an assist information providing method in an information processing device, the information processing device communicating with a display terminal displaying an image on a user's field of view to provide assist information to be displayed on the display terminal, the method comprising (UENO Abst: The present invention relates to a pedestrian device and a traffic safety assistance method which can effectively and properly support pedestrian's safety confirmation by utilizing vehicle-to-pedestrian communications and an AR device; [0031] a field-of-view camera for capturing the user's field of view):
acquiring an image obtained by performing imaging in a moving direction of a user (UENO [0033] acquire position data of a point of regard of the user from an image captured by the line-of-sight camera; [0088] the pedestrian terminal 1 may be configured to allow a user to operate the screen of the virtual smartphone through the movement of the user's eyes, based on the position data of the user's point of regard acquired from images captured by the line-of-sight camera 25 in the point of regard detection operation P8);
UENO discloses detecting that the user has gotten into a vehicle as a driver (see [0041]), but does not explicitly teach detecting an object. However, KUBOTA teaches
detecting an object included in the image (KUBOTA [0040] when alert target detection sensor 21 is a camera, a target object (display item) is recognized by subjecting a foreground image of the vehicle that is a sensed result to image processing such as pattern matching);
UENO in view of KUBOTA further teaches
determining a position of the object in the user's field of view (KUBOTA [0040] alert target detection sensor 21 may specify position of each target object (display item) by relative position with respect to vehicle 300, or may specify the position by absolute position using positioning information obtained by global positioning system (GPS)) as a superimposition position of assist information regarding the object (UENO [0071] The AR display 26 implements an AR (Augmented Reality) by overlaying a virtual object on a real space in a user's real field of view; [0113] the pedestrian terminal 1 displays a mark image 52 (arrow mark image) representing the current position of a pedestrian having a risk of collision); and
transmitting information about the superimposition position to the display terminal to cause the display terminal to superimpose and display the assist information on the user's field of view at timing corresponding to the superimposition position (UENO [0091] When pedestrian information should be transmitted to an in-vehicle terminal (Yes in ST102), the ITS communication device 21 transmit a message containing the pedestrian information (such as pedestrian's ID and position data) to the in-vehicle terminal through pedestrian-to-vehicle communications (ST103); [0094] When determining that the user is looking at the virtual smartphone (Yes in ST114), the processor 32 controls the pedestrian terminal 1 to perform a predetermined alert operation to the user (ST115). Specifically, the pedestrian terminal 1 displays an alert screen on the virtual smartphone as the alert operation).
KUBOTA discloses a HUD device that allows an occupant of a moving body to view a virtual image by projecting an image on a display medium, and is therefore analogous art.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified UENO to incorporate the teachings of KUBOTA by applying KUBOTA's object detection and position determination method to the pedestrian device and traffic safety assistance method of UENO.
Doing so would provide information to an occupant or the like on a moving body such as a vehicle even when positional information related to a display item temporarily fails to be acquired.
Regarding Claim 2, UENO in view of KUBOTA teaches the assist information providing method according to claim 1, and further teaches wherein, when the superimposition position is located in a first visual field region being an outer peripheral portion of the user's field of view (UENO [0113] the pedestrian terminal 1 displays a mark image 52 (arrow mark image) representing the current position of a pedestrian having a risk of collision. As a result, even when a user cannot quickly recognize the pedestrian having the risk of collision because of an out-of-sight condition such as an intersection, the pedestrian terminal 1 can guide the line of sight of the user (driver) to the pedestrian having the risk of collision, enabling the user to quickly recognize the pedestrian having the risk of collision), the assist information is superimposed and displayed on the user's field of view at timing earlier than timing when the superimposition position is a second visual field region being a central part of the user's field of view (KUBOTA [0025] This makes the HUD device specify display position of display item on the basis of display item information acquired last (e.g., previous time, etc.) even when information for display item (display item information) temporarily fails to be acquired, which can lead to adequate display for display item; [0051] FIG. 8A and FIG. 8B can be shown at the same time. By the display, from point of view E1 of the driver on vehicle 300 at which a reflection image (virtual image) of an image projected on predetermined region D1 is viewed, right turn mark 51, lane marks 52, 53, pedestrian warning mark 54, speed meter information 61, rotation speed meter information 62, speed limit mark 63, and preceding vehicle approaching alarm mark 64 are viewed as if they existed on HUD screen I1 at respective positions illustrated in FIG. 8A and FIG. 8B; [0052] FIG. 9A and FIG. 9B show a position of virtual HUD screen I1 in front of vehicle 300. HUD screen I1 is recognized by the driver to appear at, for example, a position that is a focal point of visual line during driving (e.g., 2 m to 3 m front); FIG. 9B: 54a).
Regarding Claim 3, UENO in view of KUBOTA teaches the assist information providing method according to claim 1, and further teaches wherein the assist information is superimposed and displayed on the user's field of view at earlier timing as the superimposition position deviates from the center of the user's field of view (UENO [0124] In the AR display control operation P5, the processor 32 displays the mark image 51 of the point of regard in AR based on the rendering data of the point of regard acquired in the point of regard rendering operation P34. The processor 32 also displays the mark image 52 (arrow image) pointing to the current position of the pedestrian in AR based on the rendering data of the current position of the pedestrian acquired in the current position rendering operation P35).
Regarding Claim 4, UENO in view of KUBOTA teaches the assist information providing method according to claim 1, and further teaches the method further comprising:
acquiring a line-of-sight direction of the user and position information about the user (UENO [0084] In the point of regard detection operation P8, the processor 32 detects the user's point of regard based on images captured by the line-of-sight camera 25, and acquires the position data of the user's point of regard; that is, the coordinate values of the point of regard in a coordinate system of the user's field of view); and
determining a position of the object in the user's field of view on the basis of the line-of-sight direction, the position information, and map information (KUBOTA [0038] Predetermined region D1 is a region corresponding to a constant angular field (constant viewing angle) in eyesight of occupant (driver) looking front side of vehicle 300; [0041] Navigation device 22 includes a GPS receiver, and has a vehicle navigation function based on positioning information obtained by GPS and map data. Navigation device 22 may include, for example, a memory, a storage device such as a hard disc device, and a transmitting and receiving device, or the like for acquiring map data from outside by communication to store it. Navigation device 22 can measure present position of vehicle 300 using GPS and calculate traveling direction of the vehicle using the present position and position of vehicle 300 measured in the past. Furthermore, navigation device 22 recognizes a target object (display item) within 100 m in front of vehicle 300 in the traveling direction on the basis of map data, and outputs information such as content and position of each display item as recognition result).
Regarding Claim 5, UENO in view of KUBOTA teaches the assist information providing method according to claim 1, and further teaches wherein the assist information is superimposed and displayed on the user's field of view at timing corresponding to characteristic information about the user (UENO [0132] the pedestrian terminal 1 can estimate the viewer-to-target distance based on the characteristics that an increase in the viewer-to-target distance decreases the convergence angle at which the directions of lines of sight from the left and right eyes intersect, and a decrease in the viewer-to-target distance increases the convergence angle; FIG. 14(B), ST134: alert control (display point of regard and current position in AR)).
Regarding Claim 6, UENO in view of KUBOTA teaches the assist information providing method according to claim 1, and further teaches further comprising selecting the assist information to be displayed in accordance with the detected object (UENO [0113] In the present embodiment, the pedestrian terminal 1 displays a mark image 52 (arrow mark image) representing the current position of a pedestrian having a risk of collision; [0114] In the example shown in FIG. 10, the pedestrian terminal 1 displays the mark image 41 representing the collision point in AR).
Regarding Claim 7, UENO in view of KUBOTA teaches the assist information providing method according to claim 1, and further teaches the method further comprising:
determining timing at which the assist information is displayed in the user's field of view, the timing corresponding to the superimposition position (UENO [0106] In the collision scene prediction operation P23, the processor 32 predicts a collision scene … the processor 32 acquires position data (three-dimensional data) of the pedestrian and that of the vehicle each time at unit intervals of time (e.g., one second) during the time period from the present time to the collision is predicted to occur. In this processing operation, the processor 32 calculates the position of the pedestrian at each time based on the current position and moving speed of the pedestrian, and also calculates the position of the vehicle at each time based on the current position and moving speed of the vehicle); and
transmitting, to the display terminal, information about the timing corresponding to the superimposition position together with the information about the superimposition position (UENO [0108] In the AR display control operation P5, the processor 32 displays the mark image 41 of the collision point in AR based on the 3D rendering data of the collision point acquired in the collision point rendering operation P22. The processor 32 also displays the simulated image 43 representing the vehicle in AR based on the 3D rendering data acquired in the collision scene rendering operation P24; that is, the 3D rendering data of the pedestrian and the vehicle acquired at each time in the collision scene).
Regarding Claim 8, UENO in view of KUBOTA teaches a non-transitory computer-readable recording medium on which programmed instructions are recorded, the instructions causing a computer to execute processing, the computer being included in a display terminal communicating with an information processing device providing assist information, the display terminal superimposing and displaying the assist information on a user's field of view, the processing to be executed by the computer comprising (UENO Abst: The present invention relates to a pedestrian device and a traffic safety assistance method which can effectively and properly support pedestrian's safety confirmation by utilizing vehicle-to-pedestrian communications and an AR device; [0031] a field-of-view camera for capturing the user's field of view; [0075] The memory 31 stores programs executable by the processor 32, and other information):
The metes and bounds of the claim substantially correspond to the limitations set forth in claim 1; thus, claim 8 is rejected on grounds and rationale similar to those applied to the corresponding limitations of claim 1.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samantha (Yuehan) Wang whose telephone number is (571)270-5011. The examiner can normally be reached Monday-Friday, 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon can be reached on (571)272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Samantha (YUEHAN) WANG/
Primary Examiner
Art Unit 2617