DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments and amendments filed February 18, 2026 (herein “Amendment”) regarding the rejection of claims 1–15 under 35 U.S.C. 103 have been fully considered, but they are not persuasive. In the Amendment, Applicant has incorporated the subject matter of previously pending claim 4 into the independent claims and argues against the rejection of record as applied to those limitations. Specifically, Applicant asserts on page 12 of the Amendment that cited reference Liu teaches overlapping fields of view of sensor data, but that this is not the same as the claimed “generate the first pseudo visual field image data such that the first pseudo visual field and a second visual field include a portion overlapping each other,” because Liu’s sensor data is not “image data.” However, Liu teaches in ¶63 that the sensor data comprises visual image data, and therefore Applicant’s remarks are not persuasive. On pages 12–13 of the Amendment, Applicant states that Liu does not disclose “a first visual field of the first sensor and the second visual field of the second sensor do not overlap each other.” However, Liu teaches in ¶60 that, prior to alignment, the sensor data from the two sensors (and thus their respective visual fields) do not overlap, and therefore Liu does teach the claimed “a first visual field of the first sensor and the second visual field of the second sensor do not overlap each other.” Accordingly, while all of Applicant’s amendments and arguments have been fully considered, they are not persuasive, and the rejection of the remaining pending claims 1–3 and 5–15 is maintained.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1–3 and 5–15 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al., US Patent Application Publication No. US 2021/0157316 A1 (herein “Liu”), in view of Adams et al., US Patent Application Publication No. US 2019/0049566 A1 (herein “Adams”), and further in view of Osborne, US Patent No. 9,843,723 B2 (herein “Osborne”).
Regarding claim 1, where deficiencies of Liu are noted in square brackets [], Liu teaches an image generation apparatus for generating an image of a subject using image data of the subject acquired by a sensor, the image generation apparatus comprising (Liu ¶¶22–24, and 27, autonomous vehicle with a computing device receiving generated sensor data from sensors such as a depth camera, and an image sensor (camera), capturing an object in the vicinity of the vehicle):
a controller configured to (Liu ¶¶22–25, computing device of an autonomous vehicle):
acquire first image data of the subject acquired by a first sensor (Liu ¶¶23–24, sensor data received (acquired) by a computing device from one of multiple sensors 104, including an image sensor (camera); the sensor data is therefore image data),
acquire second image data of the subject acquired by a second sensor (Liu ¶¶23–24, sensor data received (acquired) by a computing device from one of multiple sensors 104, including another (second) image sensor (camera); the sensor data is therefore image data),
generate a first pseudo visual field of the first sensor based on acquiring image data (Liu ¶¶29–30, 32–33, vehicle generates a map environment representation as a mesh or 3D representation of the environment, including poses (orientations) of objects and their estimated trajectories, from the sensor data received, where the sensor data and perspectives may change per the calibration techniques discussed in U.S. Patent Application Serial No. 15/674,853 (corresponding to Adams)) [by changing a relative position between the subject and the first sensor],
generate, based on the first image data, first pseudo visual field image data acquired by the first sensor in the first pseudo visual field (Liu ¶¶27, 29, 32–33, from the map environment representation as a mesh or 3D representation of the environment, including poses of objects and their estimated trajectories, object data and its “track” are determined, the track comprising a predicted object position, velocity, acceleration, and heading), and
perform alignment between the first pseudo visual field image data and the second image data (Liu ¶¶50, 60, mapping component is downstream of sensors, and the sensor processing stages including the perception component and the planning component that determine the object track and environment representation, and from this data, determines sensor data alignments),
[wherein a first visual field of the first sensor is smaller than the generated first pseudo visual field,
wherein the second sensor is disposed in the first pseudo visual field of the first sensor], and
wherein the controller is configured to generate the first pseudo visual field image data such that the first pseudo visual field and a second visual field include a portion overlapping each other (Liu ¶60, aligning the sensor data from the two sensors (sensor data 312 and 316, thus including second image data) comprises perturbing the pose data (from the environment representation) until the sensor data associated with various objects overlaps and/or is continuous (coincide with each other)) and a first visual field of the first sensor and the second visual field of the second sensor do not overlap each other (Liu ¶60, depicting the sensor data pre-alignment, showing that the first and second sensor data 312 and 316 do not overlap).
Although Liu discloses that the sensor 104 could be multiple sensors, and that any one sensor could be from a set of various sensors including an image sensor (camera), Liu does not explicitly teach the claimed combination of both sensors being an image sensor. Therefore, Liu does not anticipate claim 1. However, it would have been obvious to a person having ordinary skill in the art (herein “PHOSITA”) before the effective filing date of the claimed invention to have modified the vehicle of Liu to include multiple image sensors, at least because doing so would be obvious to try – choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success. See MPEP §2143(I)(E).
Further, while Liu does teach that sensor data and perspectives may change per the calibration techniques discussed in U.S. Patent Application Serial No. 15/674,853 (corresponding to Adams), Liu does not explicitly teach the claimed “by changing a relative position between the subject and the first sensor.” Liu also does not explicitly teach the claimed “wherein a first visual field of the first sensor is smaller than the generated first pseudo visual field.”
Still further, Liu does not explicitly teach “and wherein the second sensor is disposed in the first pseudo visual field of the first sensor.”
Adams teaches by changing a relative position between the subject and the first sensor (Adams ¶¶16, 36, 50, data collected when the vehicle is in a region of interest with a central location (the subject) and the region of interest includes a time window centered around the region of interest given a designated speed of the vehicle, where the time window includes a fixed number of seconds before and after the central location).
Adams also teaches the claimed wherein a first visual field of the first sensor is smaller than the generated first pseudo visual field (Adams fig. 5, ¶¶77–80, input from a sensor received at a fixed interval 510 including the visual data from three locations along a path from the same vehicle, and thus the fixed-interval sensor input (generated first pseudo visual field) includes a wider field of view (along voxel space grid 506) than from any one individual location point 508 (first visual field of the first sensor)).
Osborne teaches and wherein the second sensor is disposed in the first pseudo visual field of the first sensor (Osborne col. 18, ll. 7–31, col. 9, ll. 50 – col. 10, l. 3, single virtual field of view is an extended view from the central camera 112 (first pseudo visual field), the single virtual field of view including the field of view according to angle d, where fig. 1D illustrates angle D including in its view a second camera 116e).
Therefore, taking the teachings of Liu and Adams together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the tracked object data from the map environment of Liu to be smaller, by way of having sensor data from a fixed interval including multiple positions as disclosed in Adams at least because doing so would provide a way to calibrate vehicle sensors without requiring the sensors to be taken offline. See Adams ¶¶1 and 18.
Further, taking the teachings of Liu and Osborne together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the map environment of Liu to be virtually wider, with additional cameras as disclosed in Osborne, at least because doing so would provide parallax-free spherical images simply by stitching images together. See Osborne col. 5, l. 65 – col. 6, l. 12.
Regarding claim 2, Liu teaches wherein the controller is configured to perform the alignment such that coordinate systems of portions of the first pseudo visual field image data and the second image data that overlap each other coincide with each other (Liu ¶60, aligning the sensor data from the two sensors (sensor data 312 and 316, thus including second image data) comprises perturbing the pose data (from the environment representation) until the sensor data associated with various objects overlaps and/or is continuous (coincide with each other), where ¶10 teaches that the sensor data is represented as points in a coordinate system such as Cartesian).
Regarding claim 3, Liu teaches wherein the controller is configured to: specify the first pseudo visual field generated as the first sensor moves and the relative position moves (Liu ¶¶27 and 29, perception component determines a track of an object that comprises a historical, current and/or predicted object position (relative position moves), the perception data being used to determine the environment representation), and generate the first pseudo visual field image data using the first image data acquired by the first sensor in the specified first pseudo visual field (Liu ¶¶29–30, 32, localization data receives sensor data and determines a pose of the vehicle, where the vehicle determines the map comprising an environment representation, the pose being determined by matching the sensor data to the map).
Regarding claim 5, Liu teaches wherein the controller is configured to specify the first pseudo visual field generated as the subject moves and the relative position moves (Liu ¶¶27 and 29, perception component determines a track of an object that comprises a historical, current and/or predicted object position (subject object moves and thus the relative position moves), the perception data being used to determine the environment representation), and generate the first pseudo visual field image data using the first image data acquired by the first sensor in the specified first pseudo visual field (Liu ¶¶29–30, 32, localization data receives sensor data and determines a pose of the vehicle, where the vehicle determines the map comprising an environment representation, the pose being determined by matching the sensor data to the map).
Regarding claim 6, Liu teaches wherein the controller is configured to generate the first pseudo visual field image data by connecting the first image data acquired by the first sensor at a plurality of sampling time points (Liu ¶¶74, 23 and 29–30, sensor data is windowed according to a sampling rate, the sensor data being from an image sensor (camera) and used to generate the environment representation map (first pseudo visual field image data)).
Regarding claim 7, Liu teaches wherein the controller is configured to: generate, based on the second image data, second pseudo visual field image data that is able to be acquired by the second sensor in a second pseudo visual field of the second sensor (Liu ¶¶38, 29, and 74, determining, based on the sensor data (including second image sensor data) and a modification, a second map comprising an environment representation (second pseudo visual field image data)) brought about by a change in a relative position between the subject and the second sensor (Liu ¶¶27 and 29, sensor data from the sensors used to determine object data and its “track,” which comprises a predicted object position, velocity, acceleration, and heading (change in relative position)), and perform the alignment between the first pseudo visual field image data and the second pseudo visual field image data (Liu ¶¶72–74, second alignment determined based on a modification to a link between poses 520 and 522 from respective environment representation maps).
Regarding claim 8, Liu teaches wherein the controller is configured to generate the second pseudo visual field image data based on the second pseudo visual field brought about by a movement of the second sensor or a movement of the subject (Liu ¶¶13–14, 73, determining an environment representation map by perturbing, where the perturbing includes moving a position of the sensor data or moving a position of a pose of an object (the subject)).
Regarding claim 9, Liu teaches wherein the controller is configured to perform a first alignment between the first pseudo visual field image data and the second image data (Liu ¶¶50, 60, mapping component is downstream of sensors, and the sensor processing stages including the perception component and the planning component that determine the object track and environment representation, and from this data, determines sensor data alignments), and then perform a second alignment between the first image data and the second image data (Liu ¶¶72–73, and 23–24, realignment by down-weighting perturbations of poses respective to sensor data, the sensors being image sensors).
Regarding claim 10, Liu teaches wherein the controller is configured to perform an alignment in a first image region in the first alignment (Liu ¶¶50, 60, mapping component is downstream of sensors, and the sensor processing stages including the perception component and the planning component that determine the object track and environment representation, and from this data, determines sensor data alignments), and perform an alignment in a second image region smaller than the first image region in the second alignment (Liu ¶¶72–73, and 23–24, realignment by down-weighting perturbations of poses respective to sensor data, the sensors being image sensors, where a pose is of an object, which would be a sub-region (smaller) of the field of view (first image region) of both sensors).
Regarding claim 11, Liu teaches wherein the controller is configured to: set an image acquired in advance by the first sensor as an advance preparation image, and generate the first pseudo visual field image data using the advance preparation image instead of or in combination with the first image data (Liu ¶¶23, 31, vehicle stores sensor data, where the sensors can be image sensors, and where the vehicle determines the environment representation map (first pseudo visual field image) based on the data collected from the sensors).
Regarding claim 12, Liu teaches wherein the controller is configured to: set an image generated in advance as an image acquired by the first sensor as an advance preparation image, and generate the first pseudo visual field image data using the advance preparation image instead of or in combination with the first image data (Liu ¶¶23, 31, vehicle stores sensor data, where the sensors can be image sensors, and where the vehicle determines the environment representation map (first pseudo visual field image) based on the data collected from the sensors).
Regarding claim 13, Liu teaches wherein the controller is configured to generate an integrated image obtained by integrating the first image data and the second image data by performing the alignment between the first pseudo visual field image data and the second image data (Liu ¶60, sensor data alignment involving determining the sensor data associated with a pose from a first sensor and a second sensor, and perturbing an estimated pose associated with the pose until sensor data 312 from one sensor and sensor data 314 from another sensor overlap and/or are continuous (integrating)).
Regarding claim 14, Liu teaches an image generation system comprising: the image generation apparatus according to claim 1; the first sensor; and the second sensor (Liu fig. 1, ¶¶23–25, vehicle 102 including computing device 106 and sensors 104).
Regarding claim 15, where deficiencies of Liu are noted in square brackets [], Liu teaches an image generation method for generating an image of a subject by using image data of the subject acquired by a sensor, the image generation method comprising (Liu ¶¶22–24, and 27, operations of an autonomous vehicle with a computing device that receives generated sensor data from sensors such as a depth camera, and an image sensor (camera), capturing an object in the vicinity of the vehicle):
a step of acquiring first image data of the subject acquired by a first sensor (Liu ¶¶23–24, sensor data received (acquired) by a computing device from one of multiple sensors 104, including an image sensor (camera); the sensor data is therefore image data);
a step of acquiring second image data of the subject acquired by a second sensor (Liu ¶¶23–24, sensor data received (acquired) by a computing device from one of multiple sensors 104, including another (second) image sensor (camera); the sensor data is therefore image data);
a step of generating a first pseudo visual field of the first sensor based on acquiring image data (Liu ¶¶29–30, 32–33, vehicle generates a map environment representation as a mesh or 3D representation of the environment, including poses (orientations) of objects and their estimated trajectories, from the sensor data received, where the sensor data and perspectives may change per the calibration techniques discussed in U.S. Patent Application Serial No. 15/674,853 (corresponding to Adams)) [by changing a relative position between the subject and the first sensor];
a step of generating, based on the first image data, first pseudo visual field image data acquired by the first sensor in the first pseudo visual field (Liu ¶¶27, 29, 32–33, from the map environment representation as a mesh or 3D representation of the environment, including poses of objects and their estimated trajectories, object data and its “track” are determined, the track comprising a predicted object position, velocity, acceleration, and heading); and
a step of performing alignment between the first pseudo visual field image data and the second image data (Liu ¶¶50, 60, mapping component is downstream of sensors, and the sensor processing stages including the perception component and the planning component that determine the object track and environment representation, and from this data, determines sensor data alignments),
[wherein a first visual field of the first sensor is smaller than the generated first pseudo visual field,
wherein the second sensor is disposed in the first pseudo visual field of the first sensor], and
wherein the controller is configured to generate the first pseudo visual field image data such that the first pseudo visual field and a second visual field include a portion overlapping each other (Liu ¶60, aligning the sensor data from the two sensors (sensor data 312 and 316, thus including second image data) comprises perturbing the pose data (from the environment representation) until the sensor data associated with various objects overlaps and/or is continuous (coincide with each other)) and a first visual field of the first sensor and the second visual field of the second sensor do not overlap each other (Liu ¶60, depicting the sensor data pre-alignment, showing that the first and second sensor data 312 and 316 do not overlap).
Although Liu discloses that the sensor 104 could be multiple sensors, and that any one sensor could be from a set of various sensors including an image sensor (camera), Liu does not explicitly teach the claimed combination of both sensors being an image sensor. Therefore, Liu does not anticipate claim 15. However, it would have been obvious to a person having ordinary skill in the art (herein “PHOSITA”) before the effective filing date of the claimed invention to have modified the vehicle of Liu to include multiple image sensors, at least because doing so would be obvious to try – choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success. See MPEP §2143(I)(E).
Further, while Liu does teach that sensor data and perspectives may change per the calibration techniques discussed in U.S. Patent Application Serial No. 15/674,853 (corresponding to Adams), Liu does not explicitly teach the claimed “by changing a relative position between the subject and the first sensor.” Liu also does not explicitly teach the claimed “wherein a first visual field of the first sensor is smaller than the generated first pseudo visual field,” and “wherein the second sensor is disposed in the first pseudo visual field of the first sensor.”
Adams teaches by changing a relative position between the subject and the first sensor (Adams ¶¶16, 36, 50, data collected when the vehicle is in a region of interest with a central location (the subject) and the region of interest includes a time window centered around the region of interest given a designated speed of the vehicle, where the time window includes a fixed number of seconds before and after the central location).
Adams also teaches the claimed wherein a first visual field of the first sensor is smaller than the generated first pseudo visual field (Adams fig. 5, ¶¶77–80, input from a sensor received at a fixed interval 510 including the visual data from three locations along a path from the same vehicle, and thus the fixed-interval sensor input (generated first pseudo visual field) includes a wider field of view (along voxel space grid 506) than from any one individual location point 508 (first visual field of the first sensor)).
Osborne teaches and wherein the second sensor is disposed in the first pseudo visual field of the first sensor (Osborne col. 18, ll. 7–31, col. 9, ll. 50 – col. 10, l. 3, single virtual field of view is an extended view from the central camera 112 (first pseudo visual field), the single virtual field of view including the field of view according to angle d, where fig. 1D illustrates angle D including in its view a second camera 116e).
Therefore, taking the teachings of Liu and Adams together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the tracked object data from the map environment of Liu to be smaller, by way of having sensor data from a fixed interval including multiple positions as disclosed in Adams at least because doing so would provide a way to calibrate vehicle sensors without requiring the sensors to be taken offline. See Adams ¶¶1 and 18.
Further, taking the teachings of Liu and Osborne together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the map environment of Liu to be virtually wider, with additional cameras as disclosed in Osborne, at least because doing so would provide parallax-free spherical images simply by stitching images together. See Osborne col. 5, l. 65 – col. 6, l. 12.
Conclusion
Applicant's amendment necessitated any new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE M KOETH whose telephone number is (571)272-5908. The examiner can normally be reached Monday-Thursday, 09:00-17:00, Friday 09:00-13:00, EDT/EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent Rudolph can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
MICHELLE M. KOETH
Primary Examiner
Art Unit 2671
/MICHELLE M KOETH/Primary Examiner, Art Unit 2671