DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 9-13, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bruns (U.S. Patent Application Publication No. 20200098164) in view of Montgomery (U.S. Patent Application Publication No. 20100156906), and further in view of Watanabe (U.S. Patent Application Publication No. 20230043075).
Regarding claim 9, Bruns discloses an electronic device, comprising:
a storage device (memory) storing at least one instruction; and at least one processor (processor), when the at least one instruction is executed by the at least one processor (paragraph [0054]: a software program which may be stored in a memory and is executable by a processor), the at least one processor is caused to:
construct a virtual environment system, which comprises at least one preset scene (paragraph [0060]: video from multiple cameras to synthesize a virtual rendered video of an environment as if captured by a floating virtual camera; paragraph [0065]: if the plurality of input images include four input images taken from four different first locations on a vehicle (e.g., as illustrated in FIG. 2A), the second location may be located above the vehicle or at another location outside the vehicle; paragraph [0099]: View generation functions may be performed one time for each view specification, which may include a virtual camera position and orientation relative to the car. The output of these functions may be a set of parameters from which a defined view may be repeatedly generated to form a video output using image samples from one to four (or more) cameras; preset scene includes front, rear, right, left and road condition);
construct a plurality of virtual camera devices corresponding to the surround view monitor in the virtual environment system (paragraph [0099]: View generation functions may be performed one time for each view specification, which may include a virtual camera position and orientation relative to the car. The output of these functions may be a set of parameters from which a defined view may be repeatedly generated to form a video output using image samples from one to four (or more) cameras; paragraph [0013]: FIG. 2A is an illustration of raw camera data taken from four cameras attached to a vehicle; paragraph [0074]: At 402, a geometric layout of the environment is determined. For example, the geometric layout of an environment surrounding the first locations associated with the plurality of input images may be determined; paragraph [0076]: At 404, a projection surface is determined. The projection surface may include a basis-spline surface approximating the environment surrounding the first locations associated with the plurality of input images);
obtain a plurality of simulated images by sampling the at least one preset scene using the virtual camera devices; generate a surround view image corresponding to the plurality of simulated images using the surround view monitor (paragraph [0122]: Creating a new view for some virtual camera position using image samples from a source image may involve two projective transformations; paragraph [0099]: View generation functions may be performed one time for each view specification, which may include a virtual camera position and orientation relative to the car. The output of these functions may be a set of parameters from which a defined view may be repeatedly generated to form a video output using image samples from one to four (or more) cameras; paragraph [0081]: each input image of the plurality of input images may be projected onto the projection surface to produce a rendered environment, and the output image may be rendered by determining how the rendered environment on the projection surface would appear if viewed from the perspective described by the view specification); and
score the surround view monitor in a plurality of dimensions (paragraph [0013]: FIG. 2A is an illustration of raw camera data taken from four cameras attached to a vehicle; paragraph [0065]: if the plurality of input images include four input images taken from four different first locations on a vehicle (e.g., as illustrated in FIG. 2A), the second location may be located above the vehicle or at another location outside the vehicle).
Bruns discloses all the features with respect to claim 9 as outlined above. However, Bruns fails to explicitly disclose a plurality of virtual camera devices, and scoring the monitor based on the image.
Montgomery discloses a plurality of virtual camera devices (paragraph [0042]: Once the virtual environment of a real-world course has been created 405, virtual cameras can then be placed in relation to the virtual environment 410. Each virtual camera can present a field of view 415, the field of view capturing a portion of the virtual environment and displaying it to the user).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Bruns to use a plurality of virtual cameras as taught by Montgomery, to enable users to pre-visualize a physical environment by creating a 3D virtual environment of a physical environment.
Bruns as modified by Montgomery discloses all the features with respect to claim 9 as outlined above. However, Bruns as modified by Montgomery fails to disclose scoring the monitor based on the image.
Watanabe discloses scoring the monitor based on the image (paragraph [0070]: a rank value of the visual range of the camera image, a rank value of the brightness of the camera image and the like may be mentioned as examples of the visual range information).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Bruns as modified by Montgomery to score camera images as taught by Watanabe, to improve image quality while reducing the labor required for maintaining and managing the hardware resources.
Regarding claim 10, Bruns as modified by Montgomery and Watanabe discloses the electronic device according to claim 9, wherein the at least one preset scene comprises a first scene and a second scene; the first scene comprises a preset number of plane images each of which has a preset pattern, and the second scene comprises a plurality of virtual obstacles that are three-dimensional (Bruns’ paragraph [0013]: FIG. 2A is an illustration of raw camera data taken from four cameras attached to a vehicle; paragraph [0065]: if the plurality of input images include four input images taken from four different first locations on a vehicle (e.g., as illustrated in FIG. 2A), the second location may be located above the vehicle or at another location outside the vehicle; preset scene includes front, rear, right, left and road condition; paragraph [0075]: if a particular input image shows a road, another vehicle, and/or a tree or other object, the ranging apparatus may estimate distances from the first location of the particular input image to various points on the road, vehicle, tree, or other object; paragraph [0078]: if it is determined through a ranging procedure or another mechanism that an object such as a second vehicle is in close proximity to the first vehicle, the elliptically cylindrical projection surface may be modified to include an approximation of the location and shape of the second vehicle; Montgomery’s paragraph [0005]: creating a 3D virtual environment of a physical environment).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Bruns to use a plurality of virtual cameras as taught by Montgomery, to enable users to pre-visualize a physical environment by creating a 3D virtual environment of a physical environment; and to further modify Bruns as modified by Montgomery to score camera images as taught by Watanabe, to improve image quality while reducing the labor required for maintaining and managing the hardware resources.
Regarding claim 11, Bruns as modified by Montgomery and Watanabe discloses the electronic device according to claim 9, wherein the surround view monitor is applied to a vehicle, and the surround view monitor comprises a plurality of camera devices installed in the vehicle and a surround view model (Bruns’ paragraph [0013]: FIG. 2A is an illustration of raw camera data taken from four cameras attached to a vehicle; paragraph [0065]: if the plurality of input images include four input images taken from four different first locations on a vehicle (e.g., as illustrated in FIG. 2A), the second location may be located above the vehicle or at another location outside the vehicle).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Bruns to use a plurality of virtual cameras as taught by Montgomery, to enable users to pre-visualize a physical environment by creating a 3D virtual environment of a physical environment; and to further modify Bruns as modified by Montgomery to score camera images as taught by Watanabe, to improve image quality while reducing the labor required for maintaining and managing the hardware resources.
Regarding claim 12, Bruns as modified by Montgomery and Watanabe discloses the electronic device according to claim 11, wherein the at least one processor constructs the plurality of virtual camera devices corresponding to the surround view monitor in the virtual environment system by:
constructing a virtual vehicle corresponding to the vehicle in the virtual environment system according to a size of the vehicle (Bruns’ paragraph [0078]: if it is determined through a ranging procedure or another mechanism that an object such as a second vehicle is in close proximity to the first vehicle, the elliptically cylindrical projection surface may be modified to include an approximation of the location and shape of the second vehicle; paragraph [0079]: as a vehicle approaches closer to the vehicle that has mounted thereon the plurality of cameras that produce the input images, the size of the horizontal portion may be reduced to match the distance to the approaching vehicle; paragraph [0015]: FIG. 2C is an illustration of an output image from a viewpoint above and behind the vehicle produced from the raw camera data shown in FIG. 2A; paragraph [0215]: FIG. 22 is an example of a rendered view. The missing blocks in the center of the image are caused by the car itself blocking the view of the ground from the four cameras. The rest of the image may be composed of data from three cameras with blended transitions between them; shape and size are an obvious matter of design choice); and
acquiring system parameters of the surround view monitor, and constructing the plurality of virtual camera devices installed in the virtual vehicle in the virtual environment system based on the system parameters (Bruns’ paragraph [0099]: View generation functions may be performed one time for each view specification, which may include a virtual camera position and orientation relative to the car. The output of these functions may be a set of parameters from which a defined view may be repeatedly generated to form a video output using image samples from one to four (or more) cameras; Montgomery’s paragraph [0042]: Once the virtual environment of a real-world course has been created 405, virtual cameras can then be placed in relation to the virtual environment 410. Each virtual camera can present a field of view 415, the field of view capturing a portion of the virtual environment and displaying it to the user);
wherein the system parameters comprise camera parameters of each of the plurality of camera devices in the vehicle, and installation parameters of each camera device relative to the vehicle (Bruns’ paragraph [0178]: The programs in the model software application suite may utilize many parameters to describe information about the source cameras, the desired views, and each 16×16 block or tile in the rendered view image; paragraph [0120]: These parameters may be used to generate any number of views around the car using the same set of source cameras as long as the relative positions and orientations of the cameras do not change).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Bruns to use a plurality of virtual cameras as taught by Montgomery, to enable users to pre-visualize a physical environment by creating a 3D virtual environment of a physical environment; and to further modify Bruns as modified by Montgomery to score camera images as taught by Watanabe, to improve image quality while reducing the labor required for maintaining and managing the hardware resources.
Regarding claim 13, Bruns as modified by Montgomery and Watanabe discloses the electronic device according to claim 9, wherein the at least one processor generates the surround view image corresponding to the plurality of simulated images using the surround view monitor by:
inputting the plurality of simulated images into the surround view monitor, and generating the surround view image corresponding to the plurality of simulated images using a surround view model in the surround view monitor (Watanabe’s paragraph [0004]: A model that maps an inputted low-resolution image to a high-resolution image is obtained by machine learning; paragraph [0048]: super-resolution technology that uses an image quality improving model is utilized for the image quality improvement processing... Super-resolution technology can map an input low-resolution image to a high-resolution image. Note that, various techniques which utilize various image quality improving models such as resolution enhancement, darkness improvement, backlight improvement, rain removal, fog removal, and blur prevention have been proposed as techniques of the super-resolution technology).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Bruns to use a plurality of virtual cameras as taught by Montgomery, to enable users to pre-visualize a physical environment by creating a 3D virtual environment of a physical environment; and to further modify Bruns as modified by Montgomery to score camera images as taught by Watanabe, to improve image quality while reducing the labor required for maintaining and managing the hardware resources.
Claim 1 recites the functions of the apparatus recited in claim 9 as method steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 9 applies to the method steps of claim 1.
Claim 2 recites the functions of the apparatus recited in claim 10 as method steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 10 applies to the method steps of claim 2.
Claim 3 recites the functions of the apparatus recited in claim 11 as method steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 11 applies to the method steps of claim 3.
Claim 4 recites the functions of the apparatus recited in claim 12 as method steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 12 applies to the method steps of claim 4.
Claim 5 recites the functions of the apparatus recited in claim 13 as method steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 13 applies to the method steps of claim 5.
Claim 17 recites the functions of the apparatus recited in claim 9 as limitations of a storage medium. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 9 applies to the medium limitations of claim 17.
Claim 18 recites the functions of the apparatus recited in claim 10 as limitations of a storage medium. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 10 applies to the medium limitations of claim 18.
Claim 19 recites the functions of the apparatus recited in claim 11 as limitations of a storage medium. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 11 applies to the medium limitations of claim 19.
Claim 20 recites the functions of the apparatus recited in claim 12 as limitations of a storage medium. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 12 applies to the medium limitations of claim 20.
Allowable Subject Matter
Claims 6-7 and 14-16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Claims 6 and 14 recite determining an optimal visual range based on a gyration radius of the vehicle to which the surround view monitor is applied and a size of the vehicle; determining a real visual range of the surround view monitor based on the surround view image; and determining a visual range score of the surround view monitor based on a ratio between the real visual range and the optimal visual range.
Bruns (20200098164), Montgomery (20100156906), and Watanabe (20230043075), alone or in combination, do not teach or suggest these features. These limitations, when read in light of the remaining limitations of the claims and the claims from which they depend, render the claims allowable subject matter.
Claims 7 and 15 recite determining an image area corresponding to each plane image in the surround view image according to the preset pattern;
determining a corner position error according to corner positions of each plane image and corner positions of the corresponding image area; determining a number of image areas each of which has the corner position error, and determining a plane distortion proportion of the surround view monitor based on the determined number and a total number of plane images in the surround view image; and determining a score of the degree of plane distortion of the surround view monitor based on the corner position error corresponding to each of all the plane images and the plane distortion proportion.
Bruns (20200098164), Montgomery (20100156906), Watanabe (20230043075), and Sabo (20240355041), alone or in combination, do not teach or suggest these features. These limitations, when read in light of the remaining limitations of the claims and the claims from which they depend, render the claims allowable subject matter.
Claims 8 and 16 recite performing an obstacle recognition on the surround view image using a preset obstacle recognition model and obtaining a predicted category of each predicted obstacle in the surround view image; determining a misjudgment score of each predicted obstacle based on a comparison result between the predicted category of each predicted obstacle and an actual category of the virtual obstacle;
determining a position weight of each predicted obstacle according to a position range to which a position of each predicted obstacle in the surround view image belongs; determining a risk score of each predicted obstacle based on the misjudgment score and the position weight of each predicted obstacle; and determining a safety coefficient score of the surround view monitor based on the risk score of each of all predicted obstacles.
Bruns (20200098164), Montgomery (20100156906), Watanabe (20230043075), and Nix (20190164430), alone or in combination, do not teach or suggest these features. These limitations, when read in light of the remaining limitations of the claims and the claims from which they depend, render the claims allowable subject matter.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Yi Yang whose telephone number is (571)272-9589. The examiner can normally be reached on Monday-Friday 9:00 AM-6:00 PM EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/YI YANG/
Primary Examiner, Art Unit 2616