DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged that this application is a National Stage application of PCT/US23/63962. Priority to PCT/US23/63962, with a priority date of 03/08/2023, is acknowledged under 35 USC 119(e) and 37 CFR 1.78.
Information Disclosure Statement
The IDSs dated 07/08/2025, 08/11/2025, and 10/10/2025 have been considered and placed in the application file.
Response to Arguments
The rejection under 35 U.S.C. 112(b) has been withdrawn in light of the Applicant’s amended claims.
Applicant's arguments with respect to the art rejections of claims 1 and 11 have been fully considered but they are not persuasive. The Applicant amended claim 1 to recite “generate a local map comprising a ground projection of image data onto a ground plane, the image data generated using one or more cameras of an ego-machine”. As stated by the Applicant on page 12 of Remarks, Stervik displays a captured image taken earlier by a front camera, which effectively becomes the under-vehicle image after a few seconds of travel by the vehicle. The Applicant states that this cannot be considered a ground projection, and the examiner respectfully disagrees. Under the broadest reasonable interpretation, a ground projection can be anything that constitutes a representation of the ground. Stervik’s under-vehicle display can be interpreted as a ground projection, as it captures the road in front of the vehicle and represents it as an under-vehicle display. The examiner recommends amending the ground projection limitation of claim 1 to something more detailed, akin to what the Applicant explained during the Applicant-initiated interview.
The Applicant states on page 13 of Remarks, regarding claim 11, that the examiner is equating the claimed local map with a stored image and that “Stervik’s captured image cannot establish both the claimed local map and the representation of the ground plane into which it is merged.” The examiner would like to clarify that claim 11 was rejected with respect to claim 1, as it shares similar elements with that claim. Using the left, right, rear, and other cameras, Stervik stitches images together to create a bird’s eye view around the vehicle. A bird’s eye view is a view from above; the stitched images are thus “merged” to create a representation of a ground plane. Afterwards, Stervik virtually reconstructs an area of the ground plane underneath the vehicle using the front camera (which is part of the local map representing the ground plane), as explained in the rejection of claim 1 and in the response to the argument above.
Applicant’s arguments with respect to claim(s) 3-4 and 13-14 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f), is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f), is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“The method of claim 1, wherein the method is performed by at least one of:
a control system for an autonomous or semi-autonomous machine;
a perception system for an autonomous or semi-autonomous machine;
a system for performing simulation operations;
a system for performing digital twin operations;
a system for performing light transport simulation;
a system for performing collaborative content creation for 3D assets;
a system for performing deep learning operations;
a system for performing conversational AI operations;
a system for generating synthetic data; ” in claim 10;
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 5-12, and 15-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Stervik (US 20170372147 A1).
Regarding claim 1, Stervik discloses a method comprising: generating a local map comprising a ground projection of image data onto a ground plane, the image data generated using one or more cameras of an ego-machine (Stervik, paragraph [0051], Fig. 3 below, "The image may thus be stitched with the live stream taken by the cameras of the surrounding view, giving the driver a bird's eye view of the vehicle 10 including an under vehicle view when moving over the specific ground area,"),
[Stervik Fig. 3 (media_image1.png)]
updating a representation of the ground plane based at least on the local map (Stervik, paragraph [0055], "The image may thus be visualized as moving in correlation with how the vehicle moves," as the vehicle continues moving, the images of the surrounding view are continuously updated),
and virtually reconstructing an area of the ground plane under the ego-machine based at least on retrieving a corresponding portion of the representation of the ground plane (Stervik, paragraph [0048], Fig. 5A/5B below, "As can be gleaned, as the vehicle 10 travels forward, the vehicle 10 travels over ground area 21″. The image taken on the ground area 21″ effectively becomes an under vehicle image to the vehicle 10,").
[Stervik Figs. 5A/5B (media_image2.png)]
Regarding claim 2, Stervik discloses the method of claim 1, wherein the generating of the local map includes orienting the ground projection in a direction corresponding to a direction of ego-motion of the ego-machine (Stervik, paragraph [0051], Fig. 3 below, "FIG. 3 shows the vehicle 10 with the front, rear, left and right cameras 21, 22, 23, 24 field of view illustrated 21′, 22′, 23′, 24′. The field of view further illustrates an around view image. When displayed to the driver in the vehicle 10 the around view image is usually illustrated with a top view of the vehicle 10 as shown in FIG. 3," the map is oriented toward the direction in which the vehicle is heading).
[Stervik Fig. 3 (media_image1.png)]
Regarding claim 5, Stervik discloses the method of claim 1, wherein the generating of the local map includes texturing the local map with color values of pixels of the image data that project onto the ground plane (Stervik, paragraph [0064], "Colors maybe adjusted in the images,").
Regarding claim 6, Stervik discloses the method of claim 1, wherein the representation of the ground plane is a composite map, and the updating of the composite map comprises merging the local map corresponding to a current time slice into a composite representation of local maps corresponding to one or more previous time slices (Stervik, paragraph [0048], Fig. 6A/6B below, "FIGS. 6A-6B show the vehicle 10 after travelling straight forward a limited distance. As can be seen the front camera 21 imaging an area represented by the field of view 21′, is still imaging in front the vehicle 10 to live stream that ground area to the driver. The image taken earlier by the front camera of the front field of view 21′, is imaging a ground area. The imaged ground area is referred to as ground area 21″. The ground area 21″ is fixed with respect to the vehicle 10 position at a specific time. As can be gleaned, as the vehicle 10 travels forward, the vehicle 10 travels over ground area 21″. The image taken on the ground area 21″ effectively becomes an under vehicle image to the vehicle 10. A driver viewing the display unit, displaying the vehicle 10 as shown in FIGS. 5A, 6A, 7A for example, will thus perceive the taken image on the ground area 21″ as the ground under the vehicle 10, at the time when passing the ground area 21″. The actual image was however taken earlier and before passing over the ground area 21″. Hence no under vehicle cameras are necessary," as shown in Fig. 6A, the front camera takes a photo of the ground before the vehicle reaches that particular spot. The other cameras are unchanged and continue performing their designated functions. Thus, there is a merge of images between a previous time interval stream and a current time interval stream.).
[Stervik Figs. 6A/6B (media_image3.png)]
Regarding claim 7, Stervik discloses the method of claim 1, wherein the representation of the ground plane is a composite map, and the updating of the composite map limits the composite map to representing local maps generated during a designated number of time slices (Stervik, paragraph [0048], "FIGS. 6A-6B show the vehicle 10 after travelling straight forward a limited distance," when a vehicle is set to move a certain distance, the cameras naturally generate images until the vehicle arrives at its stopping point).
Regarding claim 8, Stervik discloses the method of claim 1, wherein the retrieving of the corresponding portion of the representation of the ground plane texturizes the area under the ego-machine based at least on assigning a color value to at least one cell of one or more cells in a grid in the area under the ego-object, the color value being retrieved from a corresponding pixel of the representation of the ground plane (Stervik, paragraph [0064], “Defects due to exposure differences between images, camera response and chromatic aberrations, vignetting, and distortions may be reduced or removed. Image blending may thereafter be performed. When blending, the calibration step is implemented and usually involves rearranging the images to form an output projection. The purpose is to provide images with no seams, or to minimize the seams between the images. Colors maybe adjusted in the images”*, when the image of the area underneath the vehicle is shown on the display, the image naturally has pixel values retrieved from the image of the ground taken before the vehicle reached that spot. An image is composed of color pixels arranged on a grid. The colors of this representation of the area under the vehicle can also be adjusted if needed).
*Images refer to the image stitching process as stated in paragraph [0063], “At 250 the stitched camera view is displayed on the display unit and preferably superposed an image representing the vehicle so at to give the driver a good bird's eye view of the vehicle, the surroundings and the ground underneath the vehicle”.
Regarding claim 9, Stervik discloses the method of claim 1, wherein the generating of the local map comprises populating the ground projection using a subset of the image data generated using a camera of the one or more cameras selected based at least on the camera being oriented corresponding to a direction of ego-motion of the ego-machine (Stervik, paragraph [0013], "The around view monitoring system may comprise one or more cameras, such as at least a front directed camera, a rear directed camera, a first and a second side camera. The first and the second cameras are preferably opposing side cameras i.e., left and right side cameras,").
Regarding claim 10, Stervik discloses the method of claim 1, wherein the method is performed by at least one of:
a control system for an autonomous or semi-autonomous machine;
a perception system for an autonomous or semi-autonomous machine;
a system for performing simulation operations;
a system for performing digital twin operations;
a system for performing light transport simulation;
a system for performing collaborative content creation for 3D assets;
a system for performing deep learning operations;
a system implemented using an edge device;
a system implemented using a robot;
a system for performing conversational AI operations;
a system for generating synthetic data;
a system incorporating one or more virtual machines (VMs);
a system implemented at least partially in a data center;
or a system implemented at least partially using cloud computing resources (Stervik, paragraph [0044], "The server system 200 may be a cloud based administrated server, adapted to store or forward data,").
Claim 11 corresponds to claim 1, additionally reciting a processor comprising: one or more circuits (Stervik, paragraph [0066], “As one skilled in the art would understand, the processing unit 100, cameras 110, illumination units 115, sensors 120, navigation unit 130, display unit 140, data input unit 150, communication unit 160, memory unit 180, and any other system, unit, or device described herein may individually, collectively, or in any combination comprise appropriate circuitry, such as one or more appropriately programmed processors (e.g., one or more microprocessors including central processing units (CPU)) and associated memory, which may include stored operating system software and/or application software executable by the processor(s) for controlling operation thereof and for performing the particular algorithms represented by the various functions and/or operations described herein, including interaction between and/or cooperation with each other”),
merge, into a representation of a ground plane representing portions of the ground plane observed by one or more sensors of an ego-object (Stervik, paragraph [0051], "The image may thus be stitched with the live stream taken by the cameras of the surrounding view, giving the driver a bird's eye view of the vehicle 10 including an under vehicle view when moving over the specific ground area,"). Thus, claim 11 is rejected for the same reasons of anticipation as claim 1.
Claims 12 and 15 correspond to claims 2 and 10, respectively, additionally reciting the one or more processors of claim 11, the one or more circuits (Stervik, paragraph [0066], “As one skilled in the art would understand, the processing unit 100, cameras 110, illumination units 115, sensors 120, navigation unit 130, display unit 140, data input unit 150, communication unit 160, memory unit 180, and any other system, unit, or device described herein may individually, collectively, or in any combination comprise appropriate circuitry, such as one or more appropriately programmed processors (e.g., one or more microprocessors including central processing units (CPU)) and associated memory, which may include stored operating system software and/or application software executable by the processor(s) for controlling operation thereof and for performing the particular algorithms represented by the various functions and/or operations described herein, including interaction between and/or cooperation with each other”). Thus, claims 12 and 15 are rejected for the same reasons of anticipation as claims 2 and 10, respectively.
Claims 16-20 correspond to claims 1 and 7-10, respectively, additionally reciting a system comprising: one or more processors (Stervik, paragraph [0066], “As one skilled in the art would understand, the processing unit 100, cameras 110, illumination units 115, sensors 120, navigation unit 130, display unit 140, data input unit 150, communication unit 160, memory unit 180, and any other system, unit, or device described herein may individually, collectively, or in any combination comprise appropriate circuitry, such as one or more appropriately programmed processors (e.g., one or more microprocessors including central processing units (CPU)) and associated memory, which may include stored operating system software and/or application software executable by the processor(s) for controlling operation thereof and for performing the particular algorithms represented by the various functions and/or operations described herein, including interaction between and/or cooperation with each other”). Thus, claims 16-20 are rejected for the same reasons of anticipation as claims 1 and 7-10, respectively.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Stervik (US 20170372147 A1) in view of Kim (US 20140300623 A1).
Regarding claim 3, Stervik discloses the method of claim 1.
Stervik does not teach “wherein the generating of the local map includes determining to omit, from the local map, one or more color values of one or more pixels of the image data that do not belong to a segmented navigable space”.
However, Kim discloses wherein the generating of the local map includes determining to omit, from the local map, one or more color values of one or more pixels of the image data that do not belong to a segmented navigable space (Kim, paragraph [0080], “That is, as illustrated in FIG. 7, when the object information extracted from the numerical map is a road, the control unit 30 extracts the lane components from the road area in the photomap and removes unwanted vehicles from the road by applying the road color to the road area except the lane components, so that the resultant photomap from which the vehicles are removed is displayed as the moving body is moving in block S170.”, this technique removes the color of the unwanted objects and fills in their positions with the color of the area where they reside. It should be noted that the examiner is using the concept of removing unwanted objects from an image in this modification and is not limited to the specific technique of removing unwanted vehicles on a road).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to remove undesired objects from Stervik’s stitched image, as taught by Kim.
The suggestion/motivation for doing so would have been to improve the visibility of the display, allowing the driver to concentrate on what they need to.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Stervik in view of Kim to obtain the invention as specified in claim 3.
Claim 13 corresponds to claim 3, additionally reciting the one or more processors of claim 11, the one or more circuits (Stervik, paragraph [0066], “As one skilled in the art would understand, the processing unit 100, cameras 110, illumination units 115, sensors 120, navigation unit 130, display unit 140, data input unit 150, communication unit 160, memory unit 180, and any other system, unit, or device described herein may individually, collectively, or in any combination comprise appropriate circuitry, such as one or more appropriately programmed processors (e.g., one or more microprocessors including central processing units (CPU)) and associated memory, which may include stored operating system software and/or application software executable by the processor(s) for controlling operation thereof and for performing the particular algorithms represented by the various functions and/or operations described herein, including interaction between and/or cooperation with each other”). Thus, claim 13 is rejected for the same reasons of obviousness as claim 3.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Stervik (US 20170372147 A1) in view of Chen (US 11225193 B2).
Regarding claim 4, Stervik discloses the method of claim 1.
Stervik does not teach “wherein the generating of the local map includes sizing a dimension of the ground projection based at least on a speed of the ego-machine”.
However, Chen teaches wherein the generating of the local map includes sizing a dimension of the ground projection based at least on a speed of the ego-machine (Chen, Col. 6, Lines 51-55, "Through the surround view system 1, a synthetic scene can be generated taking into consideration both the vehicle turning angle and the vehicle speed, and thus the viewport transformation can be optimized to enlarge the area where drivers expect to see clearly").
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the zoom of Stervik’s cameras based on the vehicle speed and turning angle, as taught by Chen.
The suggestion/motivation for doing so would have been to allow the driver to have a better view of where they are turning and heading to, resulting in better safety and caution.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Stervik in view of Chen to obtain the invention as specified in claim 4.
Claim 14 corresponds to claim 4, additionally reciting the one or more processors of claim 11, the one or more circuits (Stervik, paragraph [0066], “As one skilled in the art would understand, the processing unit 100, cameras 110, illumination units 115, sensors 120, navigation unit 130, display unit 140, data input unit 150, communication unit 160, memory unit 180, and any other system, unit, or device described herein may individually, collectively, or in any combination comprise appropriate circuitry, such as one or more appropriately programmed processors (e.g., one or more microprocessors including central processing units (CPU)) and associated memory, which may include stored operating system software and/or application software executable by the processor(s) for controlling operation thereof and for performing the particular algorithms represented by the various functions and/or operations described herein, including interaction between and/or cooperation with each other”). Thus, claim 14 is rejected for the same reasons of obviousness as claim 4.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WAYNE ZHANG whose telephone number is (571) 272-0245. The examiner can normally be reached Monday-Friday 10:00-6:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ms. Sumati Lefkowitz can be reached on (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WAYNE ZHANG/Examiner, Art Unit 2672
/SUMATI LEFKOWITZ/Supervisory Patent Examiner, Art Unit 2672