DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 12, and 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 6, 11-12, 17, 20, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over TANI; Noriyuki et al. (US 20220319192 A1) in view of ITSUKAICHI; Hiroki et al. (US 20230091500 A1), in view of Hanamoto; Takashi et al. (US 20190213791 A1), and in view of Tauchi; Makiko et al. (US 20080186382 A1).
Regarding claim 1, Tani teaches,
A camera view share system (¶52-54 and Fig. 5, “transmission of the captured image” requested from the first vehicle 10A of “the host vehicle to the second vehicle 30A” resulting in transmitting of “captured image from the second vehicle 30A to the first vehicle 10A”) comprising
a road-side unit (¶95 and 54, “the acquisition unit 11 and the composition unit 12 as the processing unit” included in the server device) configured to:
receive a request (¶54, “a request” from the host vehicle for a composite image) for a bird-eye-view map (¶54, “a composite image” such as a “bird’s eye-view image”) from an ego vehicle, (¶54, a request from the “host vehicle”) the bird-eye-view map (¶53-54 and Fig. 5, “composite image 100D of the bird’s-eye view image” as depicted in Fig. 5) including a plurality of vehicles; (¶53-54 and Fig. 5, composite image 100D depicted in Fig. 5 displays “the first vehicle 10A of the host vehicle” and includes “an image of the third vehicle 30B” in front of the “second vehicle 30A”)
transmit the bird-eye-view map (¶53-54 and Fig. 5, “distributes the composite image” such as “bird’s-eye view image”) to the ego vehicle; (¶54 and fig. 5, “composite image is acquired and displayed in the host vehicle” such as a “bird’s-eye view image”)
receive a selection of a target vehicle (¶50-54 and Fig. 5, “image captured by each vehicle is transmitted to a server device” associated with transmitted request for “the captured image” by the host vehicle “10A” sent to the “second vehicle 30A of another vehicle”) that is one of the plurality of vehicles from the ego vehicle; (¶52 and 54, “requesting the captured image” by the host vehicle 10A sent to “second vehicle 30A”, where the captured images were captured by “each vehicle” being the images captured by the “wide range of vehicles”) and
transmit a camera feed (¶54, “transfer the captured images between a plurality of vehicles by a relay method”) of the target vehicle (¶54, “acquire the captured images captured by the plurality of vehicles” which is then displayed on the “host vehicle”) to the ego vehicle. (¶54 “the host vehicle”)
But Tani does not explicitly teach,
a road-side unit comprising a controller
receive, from the ego vehicle, a selection of a target vehicle that is one of the plurality of vehicles included in the bird-eye-view map, the selection of the target vehicle provided by a driver of the ego vehicle;
receive a plurality of camera feeds from the target vehicle, each of the plurality camera feeds related to a different view of the target vehicle;
receive, from the ego vehicle, a request for a directional view of the target vehicle; and
transmit, to the ego vehicle, a camera feed among the plurality of camera feeds, wherein the camera feed is matched with the directional view of the target vehicle.
However, Itsukaichi teaches additionally,
a road-side unit (¶43,34 and Fig. 3, “image generation apparatus 20” installed in a surveillance center) comprising a controller (¶43 and Fig. 3, image generation apparatus 20 which includes “a data processing unit 220”) configured to:
receive a selection of a target vehicle (¶36 and Fig. 1, “a predetermined input” to the “image generation apparatus 20” requesting the “sending apparatus 10 for an image”) among the plurality of vehicles; (¶34-36 and Fig. 1, image requested from the sending apparatus being one of a “plurality of sending apparatuses 10” as depicted in Fig. 1)
receive a plurality of camera feeds (¶35-37, fig. 1 and 2, “image generation apparatus 20” sent requested images generated by “image capturing unit 12” of repeatedly photographed “vicinity of the first vehicle 30”) from the target vehicle, (¶35-37, fig. 1 and 2, “image generation apparatus 20 requests the sending apparatus 10 for an image”) each of the plurality camera feeds (¶37, “image capturing unit 12” being a “stereo camera”) related to a different view of the target vehicle; (¶37, stereo camera “image capturing unit 12” photographs the vicinity “in front, by the side, and in back” of the first vehicle 30)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the driving assistance of Tani with the data processing of Itsukaichi, which discloses an image generating apparatus with a processor that can be directed to receive inputs requesting images of a particular target vehicle. This type of selection allows for improvements in the quality of surveillance.
However, Hanamoto teaches additionally,
receive, from the ego vehicle, (¶74,58, fig. 3A and 5, “bird’s eye image display area 300” made use of for operation of a “virtual camera”) a selection of a target vehicle (¶74, fig. 5 and 6a-6c, select one “image” or one “position” on the path displayed on the bird’s eye image display area 300) that is one of the plurality of vehicles (¶74, user selects by a “user touching” one of the “images 603” depicted in fig. 6c) included in the bird-eye-view map, (¶74 and fig. 6c, images or position “displayed on the bird’s eye image display area 300” depicted in fig. 6c) the selection of the target vehicle provided by a driver of the ego vehicle; (¶74 and fig. 6c, “user selects one of the plurality of thumbnail images or one position on the gaze point path displayed on the bird's eye image display area 300” )
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the driving assistance of Tani with the data processing of Itsukaichi with the processing of Hanamoto that processes a user’s selection of a virtual viewpoint. This allows a user to determine the movement path for processing, improving user convenience.
However, Tauchi teaches additionally,
receive, from the ego vehicle, (¶38-39 and fig. 6, generation process of the “self car 12” circumference overlook images) a request for a directional view of the target vehicle; (¶38-39 and fig. 6, map “information of the other car 13 in the surroundings of the self car 12 is generated” based on analyzed “existence direction of the other car 13 relative to the self car 12 as well as the distance information regarding the self car 12 and the other car 13 are acquired”) and
transmit, to the ego vehicle, a camera feed (¶38-39 and fig. 6 and 4, “self car 12” at step S14 acquires “images from the camera 11' on the other car 13 in the surroundings of the self car 12” depicted in fig. 4) among the plurality of camera feeds, (¶39,36,30, fig. 4 and 2, “camera 11' on the other car 13 in the surroundings of the self car 12” including cars 13F, 13B, as depicted in fig. 4, which include “a front right camera 11a, a front left camera 11b, a rear right camera 11c, a rear left camera 11d, a right forward camera 11e, a right backward camera 11f, a left forward camera 11g, a left backward camera 11h” depicted in fig. 2) wherein the camera feed is matched with the directional view of the target vehicle. (¶39,30 fig. 6 and 2, “position of the extracted image of the other car relative to the self car 12 is detected by considering the photography direction and the field of vision of each of the cameras” from the camera group 11 as presented in fig. 2)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the driving assistance of Tani with the data processing of Itsukaichi with the processing of Hanamoto with the image analysis of Tauchi, which analyzes images from camera groups on neighboring vehicles based on the coordinate positions of the other cars relative to a self car. This allows for image acquisition that leads to generating a view sufficiently precise for identification of vehicles distant from the subject vehicle.
Regarding claim 6, Tani with Itsukaichi with Hanamoto with Tauchi teaches the limitations of claim 1,
Itsukaichi teaches additionally,
area of the bird-eye-view map (¶65 and Fig. 7, reconfigured image includes “a bird’s eye view” related to a plurality of directions of the first vehicle 30) is within a coverage area (¶63-65,79,34, Fig. 7 and 1, data processing unit 220 generates “a bird’s eye view as a reconfigured image” related to analysis data of a “plurality of directions of the first vehicle 30” associated with “position for each object by using position information of the first vehicle 30 and a first piece of analysis data” depicted in Fig. 7 surveyed by “a surveillance person” surveying “a road and a vehicle 30” in a surveillance center depicted in Fig. 1) of the road-side unit. (¶62-65,34, and Fig. 7 and 1, “data processing unit 220 of the image generation apparatus 20 requests the sending apparatus 10 for all pieces of analysis data according to needs” which includes data relating to an “outside of the first vehicle 30” selected from input by “a surveillance person” surveying “a road and a vehicle 30” in a surveillance center depicted in Fig. 1)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the driving assistance of Tani with the data processing of Itsukaichi with the processing of Hanamoto with the image analysis of Tauchi, where Itsukaichi discloses an image generating apparatus with a processor that can be directed to receive inputs requesting images of a particular target vehicle. This type of selection allows for improvements in the quality of surveillance.
Regarding claim 11, Tani with Itsukaichi with Hanamoto with Tauchi teaches the limitations of claim 1,
Tani teaches additionally,
ego vehicle (¶23 and Fig. 1, “host vehicle 10” depicted in Fig. 1) comprising a controller (¶23 and Fig. 1, “host vehicle 10” includes a “composition unit 12” as a processing unit) configured to:
display the camera feed on a display device (¶25-26 and Fig. 1, “display unit 13 displays the visible composite image”, where the “composition unit 12 outputs the visible composite image generated by the composition processing to the display unit 13”) of the ego vehicle. (¶23,25-26, and Fig. 1, “host vehicle 10” includes “a display unit 13” depicted in Fig. 1)
Regarding claim 12, it is the method claim of system claim 1. Refer to rejection of claim 1 to teach the limitations of claim 12.
Regarding claim 17, dependent on claim 12, it is the method claim of system claim 6, dependent on claim 1.
Tani teaches additionally,
a road-side unit (¶95 and 54, “the acquisition unit 11 and the composition unit 12 as the processing unit” included in the server device) communicating with the ego vehicle. (¶95 and 54, processing unit distributes images to first vehicle which issued “request from the host vehicle”)
Itsukaichi teaches additionally,
area of the bird-eye-view map (¶65 and Fig. 7, reconfigured image includes “a bird’s eye view” related to a plurality of directions of the first vehicle 30) is within a coverage area (¶63-65,79,34, Fig. 7 and 1, data processing unit 220 generates “a bird’s eye view as a reconfigured image” related to analysis data of a “plurality of directions of the first vehicle 30” associated with position for each object “using position information of the first vehicle 30 and a first piece of analysis data” depicted in Fig. 7 which is chosen by “a surveillance person” for surveying “a road and a vehicle 30” in a surveillance center with the “image generation apparatus 20” depicted in Fig. 1) of a road-side unit. (¶62-65,34, and Fig. 7 and 1, “data processing unit 220 of the image generation apparatus 20 requests the sending apparatus 10 for all pieces of analysis data according to needs” which includes data relating to an “outside of the first vehicle 30” selected from input by “a surveillance person” surveying “a road and a vehicle 30” in a surveillance center depicted in Fig. 1)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the driving assistance of Tani with the data processing of Itsukaichi with the processing of Hanamoto with the image analysis of Tauchi, where Itsukaichi discloses an image generating apparatus with a processor that can be directed to receive inputs requesting images of a particular target vehicle. This type of selection allows for improvements in the quality of surveillance.
Regarding claim 20, it is the non-transitory computer-readable medium storing programs claim of system claim 1.
Itsukaichi teaches additionally,
A non-transitory computer-readable medium (¶56 and Fig. 4, “storage device 1040”) storing programs that, (¶56 and Fig. 4, storage device 1040 “stores a program module that achieves a function”) when executed by a controller, (¶53,56, and Fig. 4, “processor 1020 achieves each function associated with each program module” stored in the storage device 1040) cause the controller (¶49-56 and Fig. 3-4, “image generation apparatus 20” achieves a function such as that of “data processing unit 220”)
Refer to rejection of claim 1 to teach the limitations of claim 20.
Regarding claim 22, Tani with Itsukaichi with Hanamoto with Tauchi teaches the limitations of claim 1,
Tauchi teaches additionally,
receive a selection of another target vehicle (¶36-39 and fig. 4, acquiring “images from the camera 11’ on the other car 13 in the surroundings of the self car 12” such as other cars 13F, 13B based on mapping information of the other cars 13 depicted in fig. 4) among the plurality of vehicles (¶36-39, fig. 6 and 4, “other car 13 (including 13F, 13B)” traveling around the self car 12 depicted in fig. 4) from the ego vehicle; (¶36-39, fig. 6 and 4, “self car 12” depicted in fig. 4) and
transmit a camera feed (¶39,36, and fig. 6, “in-vehicle camera 11’” on the other car 13 (including 13F, 13B)) related to the another target vehicle to the ego vehicle. (¶39,36, and fig. 6, acquiring the images from the “camera 11' on the other car 13 in the surroundings of the self car 12” included in the other cars 13 such as “13F, 13B”)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the driving assistance of Tani with the data processing of Itsukaichi with the processing of Hanamoto with the image analysis of Tauchi which has a car requesting video from a neighboring car. This allows for a cooperative system that can enhance effectiveness of event data recording.
Claim(s) 2 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over TANI; Noriyuki et al. (US 20220319192 A1) in view of ITSUKAICHI; Hiroki et al. (US 20230091500 A1), in view of Hanamoto; Takashi et al. (US 20190213791 A1), in view of Tauchi; Makiko et al. (US 20080186382 A1), and in view of Barfield, JR.; James Ronald et al. (US 20160093212 A1).
Regarding claim 2, Tani with Itsukaichi with Hanamoto with Tauchi teaches the limitations of claim 1,
But the combination does not explicitly teach the additional limitations of claim 2,
However, Barfield teaches additionally,
receive a plurality of requests (¶35, “receiving, from a telematics device of the vehicle, an alert” associated with a “request for an aerial image”) from a plurality of vehicles; (¶35 and 40, “received image requests” including vehicle request for an aerial image received by an “image processing and analysis component 230”) and
determine priorities among the plurality of requests based on information about the requests; (¶40, processing and analysis component 230 may “rank or prioritize received image requests based on the time sensitivity of the images”) and
select the request (¶40, “prioritize image requests”) from the ego vehicle (¶40 and 35, “prioritize image requests” based on “information identifying the type of alert” such as “image requests relating to vehicle collisions” that are “prioritized ahead”) based on the determined priorities. (¶40, “image requests relating to vehicle collisions may generally be prioritized ahead of image requests that are used to observe road conditions”)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the driving assistance of Tani with the data processing of Itsukaichi with the processing of Hanamoto with the image analysis of Tauchi with the request prioritization of Barfield, which prioritizes based on time sensitivity. This allows the system to assess the severity of each request and determine the usefulness of the images being requested.
Regarding claim 13, dependent on claim 12, it is the method claim of system claim 2, dependent on claim 1. Refer to rejection of claim 2 to teach the limitations of claim 13.
Claim(s) 8, 9, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over TANI; Noriyuki et al. (US 20220319192 A1) in view of ITSUKAICHI; Hiroki et al. (US 20230091500 A1), in view of Hanamoto; Takashi et al. (US 20190213791 A1), in view of Tauchi; Makiko et al. (US 20080186382 A1), and in view of Signell; Klas Roland Persson et al. (US 20240146882 A1).
Regarding claim 8, Tani with Itsukaichi with Hanamoto with Tauchi teaches the limitations of claim 1,
Tani teaches additionally,
receive a plurality of camera feeds from the plurality of vehicles; (¶23,27,36, “acquires captured images (self-produced images) captured by the plurality of cameras 21A, 21B, 21C, and 21D of the host vehicle and captured images (other-produced images) captured by other image capturing units” where there are “one or more” cameras 31 mounted on another vehicle 30 of the “plurality of other vehicles”)
store the plurality of camera feeds in the road-side unit; (¶54, “image captured by each vehicle is transmitted to a server device” accumulated in the server device)
but does not explicitly teach,
send the plurality of camera feeds from the road-side unit to a server.
However, Signell teaches additionally,
receive a plurality of camera feeds from the plurality of vehicles; (¶60 and Fig. 10, “requesting and receiving an image/video from one or more of the identified other vehicles”)
send the plurality of camera feeds from the road-side unit to a server. (¶60 and Fig. 10, “image/video” from “the infrastructure devices disposed in proximity to the vehicle” includes requesting “image video be sent to a cloud server”)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the driving assistance of Tani with the data processing of Itsukaichi with the processing of Hanamoto with the image analysis of Tauchi with the directional requesting of Signell which sends collected image video to a cloud server. This allows for gathering the most relevant data for the vehicle in a vehicle incident.
Regarding claim 9, Tani with Itsukaichi with Hanamoto with Tauchi with Signell teaches the limitations of claim 8,
Tani teaches additionally,
generate the bird-eye-view map (¶95, composite image of “the bird’s-eye view image obtained by compositing a large number of captured images”) based on the plurality of camera feeds (¶95, “composition unit 12 as the processing unit of the server device may composite the self-produced image captured by the image capturing unit (cameras 21A, 21B, 21C, and 21D) included in the first vehicle and the other-produced image acquired from the second vehicle”)
Signell teaches additionally,
server (¶43,11,60, and Fig. 7, server 200 in operation where “processor 202 is configured to execute software stored within the memory 210” such as to “alerting a driver or occupant” of the “image/video has been received”) comprising a controller (¶43,11, and 60, “processor 202” configured to execute instructions for “alerting a driver or occupant of the vehicle that the image/video has been received”) configured to:
generate based on the plurality of camera feeds in the server. (¶60, “alerting a driver or occupant” based on “image/video from the one or more of the identified other vehicles, the mobile devices, and/or the infrastructure devices disposed in proximity to the vehicle” that are received “at a cloud server”)
Tani discloses generating a bird’s-eye view image by compositing a large number of images captured from multiple vehicles. Signell explicitly discloses that the multiple image sources are stored on a cloud server and that an alert is generated based on the stored image data. It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the driving assistance of Tani with the data processing of Itsukaichi with the processing of Hanamoto with the image analysis of Tauchi with the directional requesting of Signell, which alerts a driver or occupant of collected image video on a cloud server. This allows for gathering the most relevant data for the vehicle in a vehicle incident.
Regarding claim 19, dependent on claim 12, it is the method claim of system claim 8, dependent on claim 1. Refer to rejection of claim 8 to teach the limitations of claim 19.
Claim(s) 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over TANI; Noriyuki et al. (US 20220319192 A1) in view of ITSUKAICHI; Hiroki et al. (US 20230091500 A1), in view of Hanamoto; Takashi et al. (US 20190213791 A1), in view of Tauchi; Makiko et al. (US 20080186382 A1), and in view of Satomi; Tsuneo et al. (US 20200042805 A1).
Regarding claim 4, Tani with Itsukaichi with Hanamoto with Tauchi teaches the limitations of claim 1,
But the combination does not explicitly teach the additional limitations of claim 4,
However, Satomi teaches additionally,
identify an area where an average speed of vehicles (¶140,153-155, Fig. 17 and 19, “obtains the velocity information of the vehicle within the parking operation range”) is less than a threshold (¶153-155, Fig. 17 and 19, determines whether or not the “average velocity at the time of the entry into the parking slot is equal to or smaller than the threshold value Vt”) in response to receiving the request for the bird-eye-view map; (¶132,140,153-155, Fig. 17 and 19, determines that the maximum velocity until the vehicle is parked is equal to or smaller than the predetermined threshold value Vt at the “time of the entry into the parking slot” which is within the parking operation range) and
obtain the bird-eye-view map for the identified area. (¶155, Fig. 17 and 19, “when the determining unit 43 determines that the average velocity at the time of the entry into the parking slot is equal to or smaller than the threshold value Vt, then the bird's-eye view image 100 is displayed”)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the driving assistance of Tani with the data processing of Itsukaichi with the processing of Hanamoto with the image analysis of Tauchi with the display control of Satomi, which generates a bird’s-eye view according to the velocity of the vehicle. By monitoring for slow velocity, the system can identify a situation in which the parking operation is not easy and provide a view that lets the driver appropriately check the surroundings of the vehicle.
Regarding claim 15, dependent on claim 12, it is the method claim of system claim 4, dependent on claim 1. Refer to rejection of claim 4 to teach the limitations of claim 15.
Claim(s) 5 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over TANI; Noriyuki et al. (US 20220319192 A1) in view of ITSUKAICHI; Hiroki et al. (US 20230091500 A1), in view of Hanamoto; Takashi et al. (US 20190213791 A1), in view of Tauchi; Makiko et al. (US 20080186382 A1), and in view of Shimizu; Yoshiyuki et al. (US 20190275942 A1).
Regarding claim 5, Tani with Itsukaichi with Hanamoto with Tauchi teaches the limitations of claim 1,
But the combination does not explicitly teach the additional limitations of claim 5,
However, Shimizu teaches additionally,
identify a location of an incident (¶106, comparing unit 45 determines “the number of adjacent vehicles has increased”) in response to receiving the request for the bird-eye-view map; (¶106, comparing unit 45 determines the number of adjacent vehicles has increased when the “vehicle V1 exits the parked state”) and
obtain the bird-eye-view map (¶105-106 and Fig. 8, “synthesis processor 443 generates the bird's-eye view video 100 that displays the notification icon 120”) for an area including the location of the incident. (¶105-106 and Fig. 8, the bird's-eye view video 100 that “displays the notification icon 120 indicating the direction in which the added adjacent vehicle is present” as depicted in Fig. 8, when the “number of adjacent vehicles has increased”)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the driving assistance of Tani with the data processing of Itsukaichi with the processing of Hanamoto with the image analysis of Tauchi with the view generation of Shimizu, which monitors for incident directions. This allows for displaying a notification when additional objects come within the area of the generated bird’s-eye view.
Regarding claim 16, dependent on claim 12, it is the method claim of system claim 5, dependent on claim 1. Refer to rejection of claim 5 to teach the limitations of claim 16.
Claim(s) 7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over TANI; Noriyuki et al. (US 20220319192 A1) in view of ITSUKAICHI; Hiroki et al. (US 20230091500 A1), in view of Hanamoto; Takashi et al. (US 20190213791 A1), in view of Tauchi; Makiko et al. (US 20080186382 A1), and in view of Hongo; Hitoshi (US 20110001826 A1).
Regarding claim 7, Tani with Itsukaichi with Hanamoto with Tauchi teaches the limitations of claim 1,
Tani teaches additionally,
transmit, to a server, (¶54, “server device”) a request (¶54, server device receiving “request from the host vehicle” to request that the server device distribute “the composite image”) for generating the bird-eye-view map (¶54, “a composite image such as a bird’s-eye view image” is generated and accumulated in the server device) in response to determining that the bird-eye-view map is not available. (¶54, a composite image such as a bird’s-eye view image “acquired from the server device” on the condition that “composition with the self-produced image is not essential”)
but does not explicitly teach,
determine whether the bird-eye-view map is available; and
However, Hongo teaches additionally,
determine whether the bird-eye-view map is available; (¶85 and Fig. 11, “checks whether or not there is (at least partial) image loss in the augmented bird's-eye-view image” indicating “an image-missing region is present” depicted in fig. 11) and
request for generating the bird-eye-view map (¶84-85,96, and Fig. 11 step S4 “image transformation portion 13 performs augmented bird's-eye transformation on the camera image” executed according to adjusted “augmented transformation parameters” depicted in fig. 11) in response to determining that the bird-eye-view map is not available. (¶84-85,96, and Fig. 11 performs augmented bird’s-eye transformation according to adjusted “augmented transform parameters” at step S6 if “it is judged that there is image loss in the augmented bird's-eye-view image” at step S5 depicted in fig. 11)
Tani discloses generating and requesting of bird’s-eye views generated by a server and a condition where self-produced images are not essential. The relaying of images from a server to a host vehicle display occurs under a condition where the composition image is not produced by the host vehicle. Hongo discloses a situation of determining the loss of a bird’s-eye view and requesting a bird’s-eye view transformation according to the determined loss, which is similar to the determination and request claimed. It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the driving assistance of Tani with the data processing of Itsukaichi with the processing of Hanamoto with the image analysis of Tauchi with the bird’s-eye view checking of Hongo, which evaluates whether the bird’s-eye view has loss or not. This allows for a correction that makes image loss less likely to occur.
Regarding claim 18, dependent on claim 12, it is the method claim of system claim 7, dependent on claim 1. Refer to rejection of claim 7 to teach the limitations of claim 18.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over TANI; Noriyuki et al. (US 20220319192 A1) in view of ITSUKAICHI; Hiroki et al. (US 20230091500 A1), in view of Hanamoto; Takashi et al. (US 20190213791 A1), in view of Tauchi; Makiko et al. (US 20080186382 A1), in view of Signell; Klas Roland Persson et al. (US 20240146882 A1), and in view of SASAKI; Hitoshi et al. (US 20230033706 A1).
Regarding claim 10, Tani with Itsukaichi with Hanamoto with Tauchi with Signell teaches the limitations of claim 9,
Tani teaches additionally,
send the bird-eye-view map (¶54, “server device distributes the composite image” in response to a request from the host vehicle) from the server (¶54, “composite image such as a bird’s-eye view image”)
Signell teaches additionally,
controller of the server (¶43,11, and 60, “processor 202” configured to execute instructions for “alerting a driver or occupant of the vehicle that the image/video has been received”)
but does not explicitly teach,
sending from the server to the road-side unit.
However, Sasaki teaches additionally,
sending bird’s-eye-view map from the server (¶84-85, “work support server 10 selects bird’s-eye image data” as map information) to the road-side unit. (¶84-85 and Fig. 1, bird’s-eye image data is “outputted to the image output device 221” of the remote operation apparatus 20 as depicted in Fig. 1)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the driving assistance of Tani with the data processing of Itsukaichi with the processing of Hanamoto with the image analysis of Tauchi with the directional requesting of Signell with the server of Sasaki which selects a bird’s-eye image to send to a remote operation location that is not in the working machine. This allows for remote operation of a vehicle while still enabling recognition of relative positional relationships to perform work stably and reliably.
Claim(s) 21 is rejected under 35 U.S.C. 103 as being unpatentable over TANI; Noriyuki et al. (US 20220319192 A1) in view of ITSUKAICHI; Hiroki et al. (US 20230091500 A1) in view of Hanamoto; Takashi et al. (US 20190213791 A1) in view of Tauchi; Makiko et al. (US 20080186382 A1) in view of Kim; Ju Won et al. (US 20200196126 A1)
Regarding claim 21, Tani with Itsukaichi with Hanamoto with Tauchi teaches the limitations of claim 1,
Tani teaches additionally,
bird-eye-view map (¶54, composite image such as “a bird’s-eye view image of a wide range”) is displayed on a display device of the ego vehicle; (¶54, “composite image is acquired and displayed in the host vehicle”)
but does not explicitly teach the additional limitations of claim 21,
However, Kim teaches additionally,
the selection of the target vehicle is made by a driver (¶68, “user to select a vehicle, to which a message will be transmitted”) of the ego vehicle (¶68, user to select a vehicle from neighboring vehicles “around the subject vehicle”) touching on the display device. (¶68, 60, 64, and Fig. 1, user to select a vehicle based on generated “user interface for allowing a user to select” displayed on a “circular touchscreen 330” depicted in Fig. 1)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the driving assistance of Tani with the data processing of Itsukaichi with the processing of Hanamoto with the image analysis of Tauchi with the input of Kim which allows the user to select a neighboring vehicle. This helps a user to determine if a current situation is an emergency condition.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIMMY S LEE whose telephone number is (571)270-7322. The examiner can normally be reached Monday thru Friday 10AM-8PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joseph G. Ustaris can be reached at (571) 272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSEPH G USTARIS/Supervisory Patent Examiner, Art Unit 2483
/JIMMY S LEE/Examiner, Art Unit 2483