Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12-11-2025 has been entered.
The Office action below is a non-final first action on an RCE.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 09-03-2025 and 01-13-2026 have been considered by the examiner.
Response to Amendment
Claims 1-15 have been canceled.
Claims 16-28 are new.
Claims 16-28 are currently pending.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 16-21 and 25-27 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 20190180485 A1) (hereinafter Kim-485) in view of Yamada (US 20170200048 A1) and further in view of Kim (US 20180066956 A1) (hereinafter Kim-956).
REGARDING CLAIM 16, Kim-485 discloses, a communication unit (Kim-485: [0078] The user interface apparatus 200 may receive a user input and provide information generated in the vehicle 100 to the user; [0081] Data collected in the input unit 120 may be analyzed by the processor 270 and processed as a user's control command; [FIG. 7(440), 8(800), 9(S920)]; [0076] As illustrated in FIG. 7, the vehicle 100 may include a user interface apparatus 200; [0156-0161]) configured to communicate with a system (Kim-485: [0257] The communication unit 810 receives information relating to a destination from the external apparatus. Pieces of information relating to the destination can include an image that is obtained by image-capturing the destination, a location of the destination, a type of the destination, information relating to a building (for example, a structure of the building and information on shops on each floor of the building) when the destination is within the building, and information relating to a parking lot in or corresponding to the destination. [0258] In addition, the communication unit 810 may include various pieces of information, such as information relating to a building that is present within a fixed distance from the vehicle, information relating to a vacant lot, and information relating to a parking lot, from the external apparatus; [0310]) for providing a digital signage platform (Kim-485: [0310-0312] ... the information relating to the destination is stored in a memory, or is received from an external apparatus (for example, a mobile terminal, another vehicle, the Internet, a server, or the like) through the communication unit 810 ... Based on the preset destination being identified from the image, the processor 870 outputs a graphic object to the display unit 830 so the graphic object is superimposed on the destination); a display configured to display a driving image of the vehicle acquired through a vision sensor (Kim-485: [0310-0312] ... 
the information relating to the destination is stored in a memory, or is received from an external apparatus (for example, a mobile terminal, another vehicle, the Internet, a server, or the like) through the communication unit 810 ... Based on the preset destination being identified from the image, the processor 870 outputs a graphic object to the display unit 830 so the graphic object is superimposed on the destination); and a control unit (Kim-485: [FIG. 7(170)]) configured to: transmit to the system, via the communication unit, a request for point of interest (POI) information associated with sensing data of the vehicle (Kim-485: [0078] The user interface apparatus 200 may receive a user input and provide information generated in the vehicle 100 to the user; [0081] Data collected in the input unit 120 may be analyzed by the processor 270 and processed as a user's control command; [0330] Pieces of specific information 1000 include GPS information, vehicle-to-everything (V2X) information, navigation information, big data, map data, information to a vehicle that is measured in an inertial measurement unit (IMU), information that is provided by an advanced driver assist system (ADAS), and information from the third party. [0331] Using the specific information 1000, the processor 870 acquires a current location (GPS) of the vehicle, a visual traveling record (a visual odometry), a location of a destination (a point-of-interest (POI) GPS), a camera-captured image, a preference level of a POI in the vicinity of the vehicle, and meta data on a destination or the POI; [0430] For example, as illustrated in FIG. 23A, when a preset destination 2400a is identified from the image that is received through the camera, the processor 870 outputs a plurality of graphic objects, graphic objects 2400a, 2400c, and 2400d, which indicate a plurality of destinations, respectively, that are included in the same category as the preset destination 2400a, to the display unit 830. 
[0431] At this time, as illustrated in FIG. 23B, the processor 870 outputs the plurality of graphic objects, the graphic objects 2400b, 2400c, and 2400d, which indicate the plurality of destinations, respectively, that are included in the same category as the preset destination 2400a, to the display unit 830, so the graphic objects 2400b, 2400c, and 2400d are superimposed on the plurality of destinations, respectively (or is superimposed on a building including the plurality of destinations); [FIG. 23ABC]); recognize a spatial location of a building area (Kim-485: [0024] In the vehicle control device according to an embodiment, the processor can output a graphic object relating to the parking lot in a first way to the first display unit, when the distance between the building including the destination and the vehicle is a first distance, and may output the graphic object relating to the parking lot, in a second way that is different from the first way, to the display unit, when the distance between the building including the destination and the vehicle is a second distance that is shorter than the first distance; [0258]; [0335-0338]) including at least a plurality of pieces of POI information detected by the system (Kim-485: [0026] In the vehicle control device according to an embodiment, when the vehicle travels into an area that is at a fixed distance from the building including the destination, the processor can output a first graphic object relating to a parking lot in the building including the destination and a second object relating to another parking lot that is present within a fixed distance from the parking lot in the building, to the display unit, so the first and second graphic objects are superimposed on a road on which the vehicle is traveling; [0258]; [0335-0338]); display augmented reality (AR) image (Kim-485: [0037] output in an augmented reality head up display AR-HUD way) corresponding to the plurality of pieces of POI information (Kim-485: 
[0019] when the destination is a vacant lot or a parking lot, the processor can output a wall-shaped graphic object so the wall-shaped graphic object is superimposed on the vacant lot, in order to identify a border of the vacant lot; [FIG. 12A, 13A, 13B]) on a display area corresponding to the spatial location of the building area (Kim-485: [0024] In the vehicle control device according to an embodiment, the processor can output a graphic object relating to the parking lot in a first way to the first display unit, when the distance between the building including the destination and the vehicle is a first distance, and may output the graphic object relating to the parking lot, in a second way that is different from the first way, to the display unit, when the distance between the building including the destination and the vehicle is a second distance that is shorter than the first distance; [0258]; [0335-0338]) in an order corresponding to a floor-by-floor basis of the building area (Kim-485: [0257] information relating to the destination can include an image that is obtained by image-capturing the destination, a location of the destination, a type of the destination, information relating to a building (for example, a structure of the building and information on shops on each floor of the building) when the destination is within the building, and information relating to a parking lot in or corresponding to the destination; [FIG. 12A, 13A, 13B]; [0373] In addition, as illustrated in FIG. 13A, when the entire building including the destination is identified from the image that is received through the camera, the processor 870 outputs the information 1330a (the destination information) relating to the destination to the display unit 830 so the information 1330a is superimposed on the vicinity of the building or only a portion of the building ... [0375] In addition, as illustrated in FIG. 
13B, when only a portion of the building including the destination is identified from the image, the processor 870 outputs information 1330b relating to the destination, to a position that corresponds to a place in the building, where the destination is positioned); recognize an object located near the vehicle based on sensing data of the vehicle (Kim-485: [0325] when the vehicle travels near the destination 1100, a size of the destination 1100 is enlarged when viewed from the driver (or the camera) … the area to which the driver's gaze is fixed when the driver takes a look at the destination is also broadened; [0400] when the distance between the vehicle and the destination is the first distance, the processor 870 outputs the specific-type graphic object 1610a to the display unit 830. When the distance between the vehicle and the destination is the second distance that is shorter than the first distance (that is, when the vehicle travels near the destination), the processor 870 outputs the graphic object 1610b that is formed to correspond to the peripheral edge, to the display unit 830, so the graphic object 1610b is superimposed on the destination); and change a manner in which the AR image is displayed (Kim-485: [0325] when the vehicle travels near the destination 1100, a size of the destination 1100 is enlarged when viewed from the driver (or the camera) … the area to which the driver's gaze is fixed when the driver takes a look at the destination is also broadened; [0400] when the distance between the vehicle and the destination is the first distance, the processor 870 outputs the specific-type graphic object 1610a to the display unit 830. 
When the distance between the vehicle and the destination is the second distance that is shorter than the first distance (that is, when the vehicle travels near the destination), the processor 870 outputs the graphic object 1610b that is formed to correspond to the peripheral edge, to the display unit 830, so the graphic object 1610b is superimposed on the destination).
The examiner respectfully submits that, to the examiner's best understanding, Kim-485 discloses, display augmented reality (AR) image corresponding to the plurality of pieces of POI information on a display area corresponding to the spatial location of the building area in an order corresponding to a floor-by-floor basis of the building area (Kim-485: [0019], [0037], [0024], [0258], [0335-0338], [0257], [0373-0375]).
However, should it be found that Kim-485 alone fails to disclose, display augmented reality (AR) image corresponding to the plurality of pieces of POI information on a display area corresponding to the spatial location of the building area in an order corresponding to a floor-by-floor basis of the building area, in the same field of endeavor, Yamada discloses, display augmented reality (AR) image corresponding to the plurality of pieces of POI information on a display area corresponding to the spatial location of the building area in an order corresponding to a floor-by-floor basis of the building area (Yamada: [0007] virtual display information corresponding to a captured image of a building can be superimposed on the captured image, in a building which has a plurality of stories (or floors) and in which different stores (or establishments) exist on each story), for the benefit of a user easily ascertaining that virtual display information is of a floor on which an establishment exists.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the display disclosed by Kim-485 to include the floor-by-floor displaying taught by Yamada. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to easily ascertain that virtual display information is of a floor on which an establishment exists.
Kim-485, as modified, discloses changing a manner in which the AR image is displayed (Kim-485: [0325]; [0400]). Kim-485, as modified, does not explicitly disclose, change a manner in which the AR image is displayed in response to determination that the object is determined to interfere with driving of the vehicle based on the sensing data.
However, in the same field of endeavor, Kim-956 discloses, change a manner in which the AR image is displayed in response to determination that the object is determined to interfere with driving of the vehicle based on the sensing data (Kim-956: [0479] At this point, the augmented reality image 1930 is produced with the light projected onto the windshield (WS). In this case, from a point of view of a driver, the augmented reality image may be seen as being displayed not within the display area 1920, but beyond the display area 1920 and outside the vehicle 100. That is, the augmented reality image 1930 may be perceived as a virtual image hovering in the air at a predetermined distance ahead of the vehicle 100. For example, the augmented reality image 1930 may be a graphic object which provides information on the boundary of the object 1901, speeds, collision warning, etc), for the benefit of warning an operator of a possible collision.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the display disclosed by the modified Kim-485 to include the collision warning taught by Kim-956. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to warn an operator of a possible collision.
REGARDING CLAIM 17, Kim-485, as modified, remains as applied above to claim 16. Further, Kim-485 also discloses, transmit a first request (Kim-485: [0251] The vehicle control device 800 and the mobile terminal are wirelessly connected to each other according to a user's request; [0303] Thus, based on the preset condition being satisfied, the processor 870 activates (or turns on) the camera, and receives an image (for example, a preview image or a real-time image) through the camera. [0304] Subsequently, according to the present disclosure, based on the preset destination being identified from the image, a step of outputting a graphic object to the display unit 830 so the graphic object is superimposed on the destination proceeds (S920). [0305] Specifically, when a destination is set according to the user's request) for generating augmented reality (AR) digital signage (Kim-485: [0304] the graphic object is superimposed on the destination) corresponding to the plurality of pieces of POI information (Kim-485: [0019] when the destination is a vacant lot or a parking lot, the processor can output a wall-shaped graphic object so the wall-shaped graphic object is superimposed on the vacant lot, in order to identify a border of the vacant lot; [FIG. 12A, 13A, 13B]) on a floor-by-floor basis of the building area based on the sensing data of the vehicle (Kim-485: [0257] information relating to the destination can include an image that is obtained by image-capturing the destination, a location of the destination, a type of the destination, information relating to a building (for example, a structure of the building and information on shops on each floor of the building) when the destination is within the building, and information relating to a parking lot in or corresponding to the destination; [FIG. 12A, 13A, 13B]; [0373] In addition, as illustrated in FIG. 
13A, when the entire building including the destination is identified from the image that is received through the camera, the processor 870 outputs the information 1330a (the destination information) relating to the destination to the display unit 830 so the information 1330a is superimposed on the vicinity of the building or only a portion of the building ... [0375] In addition, as illustrated in FIG. 13B, when only a portion of the building including the destination is identified from the image, the processor 870 outputs information 1330b relating to the destination, to a position that corresponds to a place in the building, where the destination is positioned), the spatial location of the building area (Kim-485: [0024] In the vehicle control device according to an embodiment, the processor can output a graphic object relating to the parking lot in a first way to the first display unit, when the distance between the building including the destination and the vehicle is a first distance, and may output the graphic object relating to the parking lot, in a second way that is different from the first way, to the display unit, when the distance between the building including the destination and the vehicle is a second distance that is shorter than the first distance; [0258]; [0335-0338]), and floor number information for each piece of POI information (Kim-485: [0257] information relating to the destination can include an image that is obtained by image-capturing the destination, a location of the destination, a type of the destination, information relating to a building (for example, a structure of the building and information on shops on each floor of the building) when the destination is within the building, and information relating to a parking lot in or corresponding to the destination); display AR digital signage (Kim-485: [FIG. 12(AB)]; [FIG. 13 (AB)]; [FIG. 
14 (ABC)]) corresponding to a result of the first request to (Kim-485: [0251] The vehicle control device 800 and the mobile terminal are wirelessly connected to each other according to a user's request; [0303] Thus, based on the preset condition being satisfied, the processor 870 activates (or turns on) the camera, and receives an image (for example, a preview image or a real-time image) through the camera. [0304] Subsequently, according to the present disclosure, based on the preset destination being identified from the image, a step of outputting a graphic object to the display unit 830 so the graphic object is superimposed on the destination proceeds (S920). [0305] Specifically, when a destination is set according to the user's request) be mapped to the building area (Kim-485: [FIG. 12(AB)]; [FIG. 13 (AB)]; [FIG. 14 (ABC)]; [0257] information relating to the destination can include an image that is obtained by image-capturing the destination, a location of the destination, a type of the destination, information relating to a building (for example, a structure of the building and information on shops on each floor of the building) when the destination is within the building, and information relating to a parking lot in or corresponding to the destination) wherein each piece of POI information of the AR digital signage is displayed to be mapped to a corresponding floor of the building area (Kim-485: [FIG. 12(AB)]; [FIG. 13 (AB)]; [FIG. 
14 (ABC)]; [0257] information relating to the destination can include an image that is obtained by image-capturing the destination, a location of the destination, a type of the destination, information relating to a building (for example, a structure of the building and information on shops on each floor of the building) when the destination is within the building, and information relating to a parking lot in or corresponding to the destination) based on a location (Kim-485: examiner: see at least [0024-0026] based on "distance") and a driving direction of the vehicle (Kim-485: [0325] when the vehicle travels near the destination 1100, a size of the destination 1100 is enlarged when viewed from the driver (or the camera). Accordingly, the area to which the driver's gaze is fixed when the driver takes a look at the destination is also broadened, and thus the area of the display unit 830 (the windshield), through which the driver's gaze passes, is also broadened; [0400] In addition, when the distance between the vehicle and the destination is the first distance, the processor 870 outputs the specific-type graphic object 1610a to the display unit 830. When the distance between the vehicle and the destination is the second distance that is shorter than the first distance (that is, when the vehicle travels near the destination), the processor 870 outputs the graphic object 1610b that is formed to correspond to the peripheral edge, to the display unit 830, so the graphic object 1610b is superimposed on the destination), transmit a second request for generating an AR carpet (Kim-485: [0353] when the distance between the vehicle and the preset destination 1100 is a second distance (for example, 100 m to 200 m) that is shorter than the first distance ... displays a second-type graphic object (for example, as illustrated in FIG. 
11B(b), a graphic carpet 1150 expressing a path for the vehicle to travel on up to the destination) that is different from the first-type graphic object 1120, or turn-by-turn navigation information 1160 on the display unit; [0382] The first and second graphic carpets 1432a and 1432b include information relating to the parking lots, respectively, and are output to the display unit 830 so the first and second graphic carpets 1432a and 1432b are superimposed on a road on which the vehicle is to travel; [0424] a graphic object 2340 that guides the vehicle to travel along a path to another parking lot, to the display unit 830 so the graphic objects 2330 and 2340 are superimposed on the road (examiner: carpet) ... [0425] when priority levels of parking lots are set based on a driver preference (for example, the time to the destination, a toll, or the like), the processor 870 further outputs pieces of information relating to high priority-level parking lots other than the parking lot that is currently identified from the image (examiner: carpet arrangement based upon a driver preference)) for display on the driving image for guiding entry into the building area (Kim-485: [0353] when the distance between the vehicle and the preset destination 1100 is a second distance (for example, 100 m to 200 m) that is shorter than the first distance ... displays a second-type graphic object (for example, as illustrated in FIG.
11B(b), a graphic carpet 1150 expressing a path for the vehicle to travel on up to the destination) that is different from the first-type graphic object 1120, or turn-by-turn navigation information 1160 on the display unit; [0382] The first and second graphic carpets 1432a and 1432b include information relating to the parking lots, respectively, and are output to the display unit 830 so the first and second graphic carpets 1432a and 1432b are superimposed on a road on which the vehicle is to travel; [0424] a graphic object 2340 that guides the vehicle to travel along a path to another parking lot, to the display unit 830 so the graphic objects 2330 and 2340 are superimposed on the road (examiner: carpet) ... [0425] when priority levels of parking lots are set based on a driver preference (for example, the time to the destination, a toll, or the like), the processor 870 further outputs pieces of information relating to high priority-level parking lots other than the parking lot that is currently identified from the image (examiner: carpet arrangement based upon a driver preference)) based on driving-related information included in the sensing data of the vehicle (Kim-485: [0325] when the vehicle travels near the destination 1100, a size of the destination 1100 is enlarged when viewed from the driver (or the camera). Accordingly, the area to which the driver's gaze is fixed when the driver takes a look at the destination is also broadened, and thus the area of the display unit 830 (the windshield), through which the driver's gaze passes, is also broadened; [0400] In addition, when the distance between the vehicle and the destination is the first distance, the processor 870 outputs the specific-type graphic object 1610a to the display unit 830.
When the distance between the vehicle and the destination is the second distance that is shorter than the first distance (that is, when the vehicle travels near the destination), the processor 870 outputs the graphic object 1610b that is formed to correspond to the peripheral edge, to the display unit 830, so the graphic object 1610b is superimposed on the destination; [0444] The processor 870 further includes at least one of map information 2610 that guides the vehicle to drive into the parking lot in the building including the destination, information relating to the parking lot in the building including the destination and information relating to a nearby parking lot, in the image that results from image-capturing the destination); display an AR carpet corresponding to a result of the second request (Kim-485: [0353] when the distance between the vehicle and the preset destination 1100 is a second distance (for example, 100 m to 200 m) that is shorter than the first distance ... displays a second-type graphic object (for example, as illustrated in FIG. 11B(b), a graphic carpet 1150 expressing a path for the vehicle to travel on up to the destination) that is different from the first-type graphic object 1120, or turn-by-turn navigation information 1160 on the display unit; [0382] The first and second graphic carpets 1432a and 1432b include information relating to the parking lots, respectively, and are output to the display unit 830 so the first and second graphic carpets 1432a and 1432b are superimposed on a road on which the vehicle is to travel; [0424] a graphic object 2340 that guides the vehicle to travel along a path to another parking lot, to the display unit 830 so the graphic objects 2330 and 2340 are superimposed on the road (examiner: carpet) ...
[0425] when priority levels of parking lots are set based on a driver preference (for example, the time to the destination, a toll, or the like), the processor 870 further outputs pieces of information relating to high priority-level parking lots other than the parking lot that is currently identified from the image (examiner: carpet arrangement based upon a driver preference)) on a road in the driving image to be provided as a display area (Kim-485: [0353] when the distance between the vehicle and the preset destination 1100 is a second distance (for example, 100 m to 200 m) that is shorter than the first distance ... displays a second-type graphic object (for example, as illustrated in FIG. 11B(b), a graphic carpet 1150 expressing a path for the vehicle to travel on up to the destination) that is different from the first-type graphic object 1120, or turn-by-turn navigation information 1160 on the display unit; [0382] The first and second graphic carpets 1432a and 1432b include information relating to the parking lots, respectively, and are output to the display unit 830 so the first and second graphic carpets 1432a and 1432b are superimposed on a road on which the vehicle is to travel; [0424] a graphic object 2340 that guides the vehicle to travel along a path to another parking lot, to the display unit 830 so the graphic objects 2330 and 2340 are superimposed on the road (examiner: carpet) ... [0425] when priority levels of parking lots are set based on a driver preference (for example, the time to the destination, a toll, or the like), the processor 870 further outputs pieces of information relating to high priority-level parking lots other than the parking lot that is currently identified from the image (examiner: carpet arrangement based upon a driver preference)) and change from mapping the AR digital signage on the building area to mapping the AR digital signage on the display area of the AR carpet (Kim-485: see [FIG. 11b(a)] and [FIG.
11b(b)] for the change from building to carpet; [0352] At this time, when a distance between the vehicle and the preset destination 1100 is a first distance (for example, 200 m to 500 m), the processor 870 further displays a first-type graphic object 1120 (for example, as illustrated in FIG. 11B(a), map information that includes information on a path for the vehicle to travel on from a current location of the vehicle to the destination) on the display unit 830. [0353] In addition, when the distance between the vehicle and the preset destination 1100 is a second distance (for example, 100 m to 200 m) that is shorter than the first distance, the processor 870 further displays a second-type graphic object (for example, as illustrated in FIG. 11B(b), a graphic carpet 1150 expressing a path for the vehicle to travel on up to the destination) that is different from the first-type graphic object 1120, or turn-by-turn navigation information 1160 on the display unit 830) based on the vehicle satisfying conditions for determining that the vehicle is to enter the building area (Kim-485: [0325] when the vehicle travels near the destination 1100, a size of the destination 1100 is enlarged when viewed from the driver (or the camera). Accordingly, the area to which the driver's gaze is fixed when the driver takes a look at the destination is also broadened, and thus the area of the display unit 830 (the windshield), through which the driver's gaze passes, is also broadened; [0400] In addition, when the distance between the vehicle and the destination is the first distance, the processor 870 outputs the specific-type graphic object 1610a to the display unit 830.
When the distance between the vehicle and the destination is the second distance that is shorter than the first distance (that is, when the vehicle travels near the destination), the processor 870 outputs the graphic object 1610b that is formed to correspond to the peripheral edge, to the display unit 830, so the graphic object 1610b is superimposed on the destination; [0444] The processor 870 further includes at least one of map information 2610 that guides the vehicle to drive into the parking lot in the building including the destination, information relating to the parking lot in the building including the destination and information relating to a nearby parking lot, in the image that results from image-capturing the destination), wherein the POI information of the AR digital signage is mapped to the display area of the AR carpet (Kim-485: [0353] when the distance between the vehicle and the preset destination 1100 is a second distance (for example, 100 m to 200 m) that is shorter than the first distance ... displays a second-type graphic object (for example, as illustrated in FIG. 11B(b), a graphic carpet 1150 expressing a path for the vehicle to travel on up to the destination) that is different from the first-type graphic object 1120, or turn-by-turn navigation information 1160 on the display unit; [0382] The first and second graphic carpets 1432a and 1432b include information relating to the parking lots, respectively, and are output to the display unit 830 so the first and second graphic carpets 1432a and 1432b are superimposed on a road on which the vehicle is to travel; [0424] a graphic object 2340 that guides the vehicle to travel along a path to another parking lot, to the display unit 830 so the graphic objects 2330 and 2340 are superimposed on the road (examiner: carpet) ...
[0425] when priority levels of parking lots are set based on a driver preference (for example, the time to the destination, a toll, or the like), the processor 870 further outputs pieces of information relating to high priority-level parking lots other than the parking lot that is currently identified from the image (examiner: carpet arrangement based upon a driver preference)) in an order corresponding to the floor-by-floor basis of the building area using the results of the first request (Kim-485: [0257] information relating to a building (for example, a structure of the building and information on shops on each floor of the building) when the destination is within the building, and information relating to a parking lot in or corresponding to the destination; [0342] for example, a floor on which the destination is positioned, opening hours, a preference level, a grade average, an evaluation report, a destination mark, and parking-lot information; [0385] that includes information indicating an available parking space on each floor of the parking lot, to the display; for superimposing information floor-by-floor near an entrance, see [FIG. 14A-C]; for carpet display based on price and distance, see [FIG. 22]), wherein whether the vehicle satisfies the conditions is determined based on at least a current location of the vehicle, driving-related information of the vehicle, characteristics of the building area, a correlation between the building area and a destination, or user preference (Kim-485: examiner: see at least [0024-0026] based on "distance" and a plurality of thresholds, which implies the distance is shortening and the vehicle's heading is toward the destination/parking; [0325] when the vehicle travels near the destination 1100, a size of the destination 1100 is enlarged when viewed from the driver (or the camera). 
Accordingly, the area to which the driver's gaze is fixed when the driver takes a look at the destination is also broadened, and thus the area of the display unit 830 (the windshield), through which the driver's gaze passes, is also broadened; [0400] In addition, when the distance between the vehicle and the destination is the first distance, the processor 870 outputs the specific-type graphic object 1610a to the display unit 830. When the distance between the vehicle and the destination is the second distance that is shorter than the first distance (that is, when the vehicle travels near the destination), the processor 870 outputs the graphic object 1610b that is formed to correspond to the peripheral edge, to the display unit 830, so the graphic object 1610b is superimposed on the destination; [0444] The processor 870 further includes at least one of map information 2610 that guides the vehicle to drive into the parking lot in the building including the destination, information relating to the parking lot in the building including the destination and information relating to a nearby parking lot, in the image that results from image-capturing the destination).
The examiner respectfully submits that Kim-485 discloses, transmit a first request for generating augmented reality (AR) digital signage corresponding to the plurality of pieces of POI information on a floor-by-floor basis of the building area based on the sensing data of the vehicle (Kim-485: see above).
However, should it be found that Kim-485 alone fails to disclose transmit a first request for generating augmented reality (AR) digital signage corresponding to the plurality of pieces of POI information on a floor-by-floor basis of the building area based on the sensing data of the vehicle, in the same field of endeavor, Yamada discloses, transmit a first request (Yamada: [0029] a search condition based on a search request from the user terminal 200; [0076] a transmission request for detailed store information is received from the user terminal 200) for generating augmented reality (AR) digital signage (Yamada: [0030] generate AR information for displaying information relating to the restaurant on the captured image in a superimposed manner on the user terminals 200, and can transmit the AR information to the user terminal 200) corresponding to the plurality of pieces of POI information (Yamada: [0007] virtual display information corresponding to a captured image of a building can be superimposed on the captured image, in a building which has a plurality of stories (or floors) and in which different stores (or establishments) exist on each story) on a floor-by-floor basis of the building area based on the sensing data of the vehicle (Yamada: [0007] virtual display information corresponding to a captured image of a building can be superimposed on the captured image, in a building which has a plurality of stories (or floors) and in which different stores (or establishments) exist on each story), for the benefit of a user easily ascertaining the floor on which an establishment associated with the virtual display information exists.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the display disclosed by Kim-485 to include generating AR digital signage on a floor-by-floor basis as taught by Yamada. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to allow a user to easily ascertain the floor on which an establishment exists.
REGARDING CLAIM 18, Kim-485, as modified, remains as applied above to claim 17. Further, Kim-485 also discloses, the AR carpet is displayed in the driving image (Kim-485: [0430]; [FIG. 11A]) to include guidance route information in response to the vehicle approaching within a certain range from the building area (Kim-485: [0430]; [FIG. 11A]) with a driving speed less than or equal to a threshold range (Kim-485: [0028] In the vehicle control device according to an embodiment, the processor can output the graphic object in different shapes to the display unit based on a current traveling speed of the vehicle; [0344] based on the location and the shape of the destination at which the driver takes a look being changed due to the vehicle's traveling, the output position and the output shape of the destination information are also caused to be variable).
REGARDING CLAIM 19, Kim-485, as modified, remains as applied above to claim 18. Further, Kim-485 also discloses, determine that the vehicle is about to enter the building area based on the driving-related information of the vehicle (Kim-485: [0368] For example, as illustrated in FIGS. 12A and 12B, when the distance between the vehicle and the destination is short and thus where only a portion of the building is included in the image, the processor 870 outputs a third graphic object 1210 to the display unit 830 so the third graphic object 1210 is superimposed on an entrance 1200 to a parking lot that is included in the building, or outputs a third graphic object 1230 to the display 830 so the third graphic object 1230 is superimposed on the destination 1220 which is included in the building).
REGARDING CLAIM 20, Kim-485, as modified, remains as applied above to claim 19. Further, Kim-485 also discloses, gradually reduce a number of images of the AR digital signage displayed on the building in the driving image on a floor-by-floor basis (Kim-485: [FIG. 13AB] example of number of images reduced as vehicle approaches building; [FIG. 14ABC] example of number of images reduced as vehicle approaches building) in response to the vehicle approaching the building area (Kim-485: [FIG. 13AB] example of number of images reduced as vehicle approaches building; [FIG. 14ABC] example of number of images reduced as vehicle approaches building; [0325] As an example, when the vehicle travels near the destination 1100, a size of the destination 1100 is enlarged when viewed from the driver (or the camera). Accordingly, the area to which the driver's gaze is fixed when the driver takes a look at the destination is also broadened, and thus the area of the display unit 830 (the windshield), through which the driver's gaze passes, is also broadened; [0400] In addition, when the distance between the vehicle and the destination is the first distance, the processor 870 outputs the specific-type graphic object 1610a to the display unit 830. When the distance between the vehicle and the destination is the second distance that is shorter than the first distance (that is, when the vehicle travels near the destination), the processor 870 outputs the graphic object 1610b that is formed to correspond to the peripheral edge, to the display unit 830, so the graphic object 1610b is superimposed on the destination; [0023] based on a distance between a vehicle and the destination, the processor can output a graphic object relating to a parking lot in the destination, in a preset way to the display unit); and transmit a rendering request to map the AR digital signage to the AR carpet instead at a certain time after the AR carpet is displayed (Kim-485: [FIG. 
13AB] example of number of images reduced as vehicle approaches building; [FIG. 14ABC] example of number of images reduced as vehicle approaches building).
REGARDING CLAIM 21, Kim-485, as modified, remains as applied above to claim 17. Further, Kim-485 also discloses, AR digital signage of the floor corresponding to the destination and AR digital signage of other floors are displayed such that they are visually distinct from each other (Kim-485: [0257] a structure of the building and information on shops on each floor of the building; [0341-0342] Subsequently, based on at least one of the following: the camera-captured image, the coordinates of the graphic object for the destination in the camera coordinate system, and the information relating to the destination, the processor 870 (or the AR graphic rendering engine) outputs destination (POI) information so the destination (POI) information is superimposed on the destination (outputs the destination (POI) information to the vicinity of the destination) (S1080). [0342] The destination information is the information relating to the destination, and includes a name of the destination and various pieces of information (for example, a floor on which the destination is positioned, opening hours, a preference level, a grade average, an evaluation report, a destination mark, and parking-lot information) relating to the destination; [FIG. 12A, 13AB, 23A-D]; [0385] At this time, the processor 870 further outputs a graphic object (or a graphic object that is output in a third way) that includes information indicating an available parking space on each floor of the parking lot, to the display unit 830. [0386] In addition, when the parking in the destination is full (that is, there is no parking space available), the processor 870 outputs a graphic object of a first color (for example, a red-based color) so the graphic object is superimposed on the building including the destination. 
[0387] In addition, if the parking lot in the destination is not full, the processor 870 may output a graphic object of a second color (for example, a blue-based color) that is different from the first color, so the graphic object is superimposed on a building (or a parking lot) in which parking is possible and which is positioned within a fixed distance from the destination).
REGARDING CLAIM 25, Kim-485, as modified, remains as applied above to claim 16. Further, Kim-485 also discloses, remap and display the AR image to an original display area when no object is detected around the vehicle (Kim-485: [0023] based on a distance between a vehicle and the destination, the processor can output a graphic object relating to a parking lot in the destination, in a preset way to the display unit. [0024] In the vehicle control device according to an embodiment, the processor can output a graphic object relating to the parking lot in a first way to the first display unit, when the distance between the building including the destination and the vehicle is a first distance, and may output the graphic object relating to the parking lot, in a second way that is different from the first way, to the display unit, when the distance between the building including the destination and the vehicle is a second distance that is shorter than the first distance; [0325] As an example, when the vehicle travels near the destination 1100, a size of the destination 1100 is enlarged when viewed from the driver (or the camera). Accordingly, the area to which the driver's gaze is fixed when the driver takes a look at the destination is also broadened, and thus the area of the display unit 830 (the windshield), through which the driver's gaze passes, is also broadened; [0400] In addition, when the distance between the vehicle and the destination is the first distance, the processor 870 outputs the specific-type graphic object 1610a to the display unit 830. When the distance between the vehicle and the destination is the second distance that is shorter than the first distance (that is, when the vehicle travels near the destination), the processor 870 outputs the graphic object 1610b that is formed to correspond to the peripheral edge, to the display unit 830, so the graphic object 1610b is superimposed on the destination).
REGARDING CLAIM 26, Kim-485, as modified, remains as applied above to claim 16. Further, Kim-485 also discloses, the AR image is at least one of an AR digital signage and an AR carpet (Kim-485: [0353] In addition, when the distance between the vehicle and the preset destination 1100 is a second distance (for example, 100 m to 200 m) that is shorter than the first distance, the processor 870 further displays a second-type graphic object (for example, as illustrated in FIG. 11B(b), a graphic carpet 1150 expressing a path for the vehicle to travel on up to the destination) that is different from the first-type graphic object 1120; [0382] The first and second graphic carpets 1432a and 1432b include information relating to the parking lots, respectively, and are output to the display unit 830 so the first and second graphic carpets 1432a and 1432b are superimposed on a road on which the vehicle is to travel. [0383] That is, when the vehicle travels into an area that is a fixed distance (for example, the second distance) from the building including the destination, the processor 870 outputs the first graphic object (or the first graphic carpet) relating to the parking in the building including the destination and the second graphic object (or the second graphic carpet) relating to another parking lot that is present within a fixed distance from the parking lot in the building, to the display unit 830, so the first and second objects are superimposed on a road on which the vehicle is to travel).
REGARDING CLAIM 27, Kim-485, as modified, remains as applied above to claim 16. Further, Kim-485 also discloses, the display area is one of a front side or a lateral side of the building area, or an AR carpet mapped on a road (Kim-485: [0353] In addition, when the distance between the vehicle and the preset destination 1100 is a second distance (for example, 100 m to 200 m) that is shorter than the first distance, the processor 870 further displays a second-type graphic object (for example, as illustrated in FIG. 11B(b), a graphic carpet 1150 expressing a path for the vehicle to travel on up to the destination) that is different from the first-type graphic object 1120; [0382] The first and second graphic carpets 1432a and 1432b include information relating to the parking lots, respectively, and are output to the display unit 830 so the first and second graphic carpets 1432a and 1432b are superimposed on a road on which the vehicle is to travel. [0383] That is, when the vehicle travels into an area that is a fixed distance (for example, the second distance) from the building including the destination, the processor 870 outputs the first graphic object (or the first graphic carpet) relating to the parking in the building including the destination and the second graphic object (or the second graphic carpet) relating to another parking lot that is present within a fixed distance from the parking lot in the building, to the display unit 830, so the first and second objects are superimposed on a road on which the vehicle is to travel).
Claims 22-24 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 20190180485 A1) (hereinafter Kim-485) in view of Yamada (US 20170200048 A1) and further in view of Kim (US 20180066956 A1) (hereinafter Kim-956) as applied to claim 16 above, and further in view of Kim (US 20120092369 A1) (hereinafter Kim-369).
REGARDING CLAIM 22, Kim-485, as modified, remains as applied above to claim 16. Further, Kim-485 also discloses, vary a transparency of the AR image displayed on a floor-by-floor basis so that the detected object can be visually recognized (Kim-485: [0104] The transparent display may have adjustable transparency).
However, should it be found that Kim-485 alone fails to disclose, vary a transparency of the AR image displayed on a floor-by-floor basis so that the detected object can be visually recognized, in the same field of endeavor, Kim-369 discloses, vary a transparency of the AR image displayed on a floor-by-floor basis so that the detected object can be visually recognized (Kim-369: [0055] the additional data processing unit 116 may differently display an object in the complex area corresponding to the object selected in the list from other objects. The differently displayed selected object may be more prominently displayed with respect to the non-selected objects, for example, by displaying the selected object as a color or transparency different from the non-selected objects), for the benefit of improving visibility of each object by differently displaying each object from the background when providing an augmented reality (AR) service.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the display disclosed by a modified Kim-485 to include varying transparency taught by Kim-369. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to improve visibility of each object by differently displaying each object from the background when providing an augmented reality (AR) service.
REGARDING CLAIM 23, Kim-485, as modified, remains as applied above to claim 16. Further, Kim-485 also discloses, map and display the AR image to another display area of the driving image so that the detected object can be visually recognized (Kim-485: [0335] Subsequently, the processor 870 determines whether or not a distance between the vehicle and the destination (the POI) is equal to or shorter than a fixed distance x (S1030). [0336] When the distance between the vehicle and the destination (POI) is equal to or shorter than the fixed distance x, based on the object that corresponds to the destination (the POI), an image of which can be currently captured by a camera (or which is currently inside of camera view), the processor 870 selects a graphic object (a reference image) that is to be displayed so the graphic object is superimposed on the destination, from the database (S1040). [0337] Subsequently, using the image (a current camera frame) that is currently received through the camera and the selected graphic object (the reference image), the processor 870 matches the graphic object to an area (a POI area) that corresponds to the destination. In addition, the matching of the graphic object to the area that corresponds to the destination, as illustrated in FIG. 11A, means determining an output area on which the graphic object is displayed so the graphic object is superimposed on the destination when the driver takes a look at the destination. [0338] Further, when the distance between the vehicle and the destination (the POI) is longer than the fixed distance x, the processor 870 determines a location of the destination (or displays the location of the destination on the display unit 830), by performing interpolation of GPS information (information on the location of the vehicle and information on the location of the destination) and data (information) of the visual odometry and coordinate conversion of the camera image. 
The location of the determined destination includes coordinates of the graphic object for the destination (the POI) in a camera coordinate system. [0339] Subsequently, the coordinates of the graphic object for the destination (the POI) in the camera coordinate system and the information (meta data) relating to the destination are transmitted to an AR graphic rendering engine that is included in the processor 870 (or the vehicle control device 800) (S1070). Using the coordinates of the graphic object for the destination in the camera coordinate system and the information relating to the destination, the AR graphic rendering engine performs AR graphic rendering. [0340] The AR graphic rendering means outputting the graphic object on the windshield and the window so the graphic object is superimposed on the destination when the driver takes a look at the destination. [0341] Subsequently, based on at least one of the following: the camera-captured image, the coordinates of the graphic object for the destination in the camera coordinate system, and the information relating to the destination, the processor 870 (or the AR graphic rendering engine) outputs destination (POI) information so the destination (POI) information is superimposed on the destination (outputs the destination (POI) information to the vicinity of the destination) (S1080)).
However, should it be found that Kim-485 fails to disclose, map and display the AR image to another display area of the driving image so that the detected object can be visually recognized, in the same field of endeavor, Kim-369 discloses, map and display the AR image to another display area of the driving image so that the detected object can be visually recognized (Kim-369: [0055] the additional data processing unit 116 may differently display an object in the complex area corresponding to the object selected in the list from other objects. The differently displayed selected object may be more prominently displayed with respect to the non-selected objects, for example, by displaying the selected object as a color or transparency different from the non-selected objects), for the benefit of improving visibility of each object by differently displaying each object from the background when providing an augmented reality (AR) service.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the display disclosed by a modified Kim-485 to include mapping the AR image to another display area as taught by Kim-369. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to improve visibility of each object by differently displaying each object from the background when providing an augmented reality (AR) service.
REGARDING CLAIM 24, Kim-485, as modified, remains as applied above to claim 23. Further, Kim-485 also discloses, the another display area is an adjacent location off a road (Kim-485: [FIG. 21AB, 23AB]).
However, should it be found that Kim-485 fails to disclose, the another display area is an adjacent location off a road, in the same field of endeavor, Kim-369 discloses, the another display area is an adjacent location off a road (Kim-369: see at least [FIG. 9-21B]), for the benefit of improving visibility of each object by differently displaying each object from the background when providing an augmented reality (AR) service.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the display disclosed by a modified Kim-485 to include displaying the AR image in an adjacent location off a road as taught by Kim-369. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to improve visibility of each object by differently displaying each object from the background when providing an augmented reality (AR) service.
Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 20190180485 A1) (Kim-485) in view of Kim (US 20180066956 A1) (Kim-956).
REGARDING CLAIM 28, Kim-485 discloses, providing a digital signage platform (Kim-485: [0335] the processor 870 determines whether or not a distance between the vehicle and the destination (the POI) is equal to or shorter than a fixed distance x (S1030). [0336] When the distance between the vehicle and the destination (POI) is equal to or shorter than the fixed distance x, based on the object that corresponds to the destination (the POI), an image of which can be currently captured by a camera (or which is currently inside of camera view), the processor 870 selects a graphic object (a reference image) that is to be displayed so the graphic object is superimposed on the destination, from the database (S1040). [0337] Subsequently, using the image (a current camera frame) that is currently received through the camera and the selected graphic object (the reference image), the processor 870 matches the graphic object to an area (a POI area) that corresponds to the destination. In addition, the matching of the graphic object to the area that corresponds to the destination, as illustrated in FIG. 11A, means determining an output area on which the graphic object is displayed so the graphic object is superimposed on the destination when the driver takes a look at the destination. [0338] Further, when the distance between the vehicle and the destination (the POI) is longer than the fixed distance x, the processor 870 determines a location of the destination (or displays the location of the destination on the display unit 830), by performing interpolation of GPS information (information on the location of the vehicle and information on the location of the destination) and data (information) of the visual odometry and coordinate conversion of the camera image. 
The location of the determined destination includes coordinates of the graphic object for the destination (the POI) in a camera coordinate system) in response to a user input (Kim-485: [0012] providing a user interface that allows a destination that is set by a user); transmitting a request to the system for point of interest (POI) information associated with sensing data of the vehicle (Kim-485: [0335] the processor 870 determines whether or not a distance between the vehicle and the destination (the POI) is equal to or shorter than a fixed distance x (S1030). [0336] When the distance between the vehicle and the destination (POI) is equal to or shorter than the fixed distance x, based on the object that corresponds to the destination (the POI), an image of which can be currently captured by a camera (or which is currently inside of camera view), the processor 870 selects a graphic object (a reference image) that is to be displayed so the graphic object is superimposed on the destination, from the database (S1040). [0337] Subsequently, using the image (a current camera frame) that is currently received through the camera and the selected graphic object (the reference image), the processor 870 matches the graphic object to an area (a POI area) that corresponds to the destination. In addition, the matching of the graphic object to the area that corresponds to the destination, as illustrated in FIG. 11A, means determining an output area on which the graphic object is displayed so the graphic object is superimposed on the destination when the driver takes a look at the destination. 
[0338] Further, when the distance between the vehicle and the destination (the POI) is longer than the fixed distance x, the processor 870 determines a location of the destination (or displays the location of the destination on the display unit 830), by performing interpolation of GPS information (information on the location of the vehicle and information on the location of the destination) and data (information) of the visual odometry and coordinate conversion of the camera image. The location of the determined destination includes coordinates of the graphic object for the destination (the POI) in a camera coordinate system); recognizing a spatial location of a building area including a plurality of pieces of POI information detected by the system (Kim-485: [0335] the processor 870 determines whether or not a distance between the vehicle and the destination (the POI) is equal to or shorter than a fixed distance x (S1030). [0336] When the distance between the vehicle and the destination (POI) is equal to or shorter than the fixed distance x, based on the object that corresponds to the destination (the POI), an image of which can be currently captured by a camera (or which is currently inside of camera view), the processor 870 selects a graphic object (a reference image) that is to be displayed so the graphic object is superimposed on the destination, from the database (S1040). [0337] Subsequently, using the image (a current camera frame) that is currently received through the camera and the selected graphic object (the reference image), the processor 870 matches the graphic object to an area (a POI area) that corresponds to the destination. In addition, the matching of the graphic object to the area that corresponds to the destination, as illustrated in FIG. 11A, means determining an output area on which the graphic object is displayed so the graphic object is superimposed on the destination when the driver takes a look at the destination. 
[0338] Further, when the distance between the vehicle and the destination (the POI) is longer than the fixed distance x, the processor 870 determines a location of the destination (or displays the location of the destination on the display unit 830), by performing interpolation of GPS information (information on the location of the vehicle and information on the location of the destination) and data (information) of the visual odometry and coordinate conversion of the camera image. The location of the determined destination includes coordinates of the graphic object for the destination (the POI) in a camera coordinate system); displaying an augmented reality (AR) image corresponding to the plurality of pieces of POI information on a display area corresponding to the spatial location of the building area (Kim-485: [0335] the processor 870 determines whether or not a distance between the vehicle and the destination (the POI) is equal to or shorter than a fixed distance x (S1030). [0336] When the distance between the vehicle and the destination (POI) is equal to or shorter than the fixed distance x, based on the object that corresponds to the destination (the POI), an image of which can be currently captured by a camera (or which is currently inside of camera view), the processor 870 selects a graphic object (a reference image) that is to be displayed so the graphic object is superimposed on the destination, from the database (S1040). [0337] Subsequently, using the image (a current camera frame) that is currently received through the camera and the selected graphic object (the reference image), the processor 870 matches the graphic object to an area (a POI area) that corresponds to the destination. In addition, the matching of the graphic object to the area that corresponds to the destination, as illustrated in FIG. 
11A, means determining an output area on which the graphic object is displayed so the graphic object is superimposed on the destination when the driver takes a look at the destination. [0338] Further, when the distance between the vehicle and the destination (the POI) is longer than the fixed distance x, the processor 870 determines a location of the destination (or displays the location of the destination on the display unit 830), by performing interpolation of GPS information (information on the location of the vehicle and information on the location of the destination) and data (information) of the visual odometry and coordinate conversion of the camera image. The location of the determined destination includes coordinates of the graphic object for the destination (the POI) in a camera coordinate system) in an order corresponding to a floor-by-floor basis of the building area (Kim-485: [0257] The communication unit 810 receives information relating to a destination from the external apparatus. 
Pieces of information relating to the destination can include an image that is obtained by image-capturing the destination, a location of the destination, a type of the destination, information relating to a building (for example, a structure of the building and information on shops on each floor of the building) when the destination is within the building, and information relating to a parking lot in or corresponding to the destination; [0342] The destination information is the information relating to the destination, and includes a name of the destination and various pieces of information (for example, a floor on which the destination is positioned, opening hours, a preference level, a grade average, an evaluation report, a destination mark, and parking-lot information) relating to the destination; [0385] At this time, the processor 870 further outputs a graphic object (or a graphic object that is output in a third way) that includes information indicating an available parking space on each floor of the parking lot, to the display unit 830); recognizing an object located near the vehicle based on sensing data of the vehicle (Kim-485: [0335] the processor 870 determines whether or not a distance between the vehicle and the destination (the POI) is equal to or shorter than a fixed distance x (S1030). [0336] When the distance between the vehicle and the destination (POI) is equal to or shorter than the fixed distance x, based on the object that corresponds to the destination (the POI), an image of which can be currently captured by a camera (or which is currently inside of camera view), the processor 870 selects a graphic object (a reference image) that is to be displayed so the graphic object is superimposed on the destination, from the database (S1040). 
[0337] Subsequently, using the image (a current camera frame) that is currently received through the camera and the selected graphic object (the reference image), the processor 870 matches the graphic object to an area (a POI area) that corresponds to the destination. In addition, the matching of the graphic object to the area that corresponds to the destination, as illustrated in FIG. 11A, means determining an output area on which the graphic object is displayed so the graphic object is superimposed on the destination when the driver takes a look at the destination. [0338] Further, when the distance between the vehicle and the destination (the POI) is longer than the fixed distance x, the processor 870 determines a location of the destination (or displays the location of the destination on the display unit 830), by performing interpolation of GPS information (information on the location of the vehicle and information on the location of the destination) and data (information) of the visual odometry and coordinate conversion of the camera image. The location of the determined destination includes coordinates of the graphic object for the destination (the POI) in a camera coordinate system).
Kim-485 discloses changing a manner in which the AR image is displayed (Kim-485: [0325]; [0400]). Kim-485 does not explicitly disclose changing a manner in which the AR image is displayed in response to a determination, based on the sensing data, that the object interferes with driving of the vehicle.
However, in the same field of endeavor, Kim-956 discloses changing a manner in which the AR image is displayed in response to a determination, based on the sensing data, that the object interferes with driving of the vehicle (Kim-956: [0479] At this point, the augmented reality image 1930 is produced with the light projected onto the windshield (WS). In this case, from a point of view of a driver, the augmented reality image may be seen as being displayed not within the display area 1920, but beyond the display area 1920 and outside the vehicle 100. That is, the augmented reality image 1930 may be perceived as a virtual image hovering in the air at a predetermined distance ahead of the vehicle 100. For example, the augmented reality image 1930 may be a graphic object which provides information on the boundary of the object 1901, speeds, collision warning, etc.), for the benefit of warning an operator of a possible collision.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the display disclosed by modified Kim-485 to include the collision warning taught by Kim-956. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to warn an operator of a possible collision.
Response to Arguments
Applicant’s arguments with respect to the rejection of the independent claims under 35 U.S.C. § 103 (obviousness) have been considered but are moot because the new ground of rejection does not rely on the reference combination applied in the prior rejection of record for the matter specifically challenged in the arguments.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARRON SANTOS whose telephone number is (571)272-5288. The examiner can normally be reached Monday - Friday: 8:00am - 4:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANGELA ORTIZ, can be reached at (571) 272-1206. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.S./Examiner, Art Unit 3663
/ANGELA Y ORTIZ/Supervisory Patent Examiner, Art Unit 3663