Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The Amendment filed January 9, 2026 has been entered. Claims 1 and 3-21 remain pending in the application. Applicant's amendments to the Specification have overcome each and every objection previously set forth in the Non-Final Office Action mailed November 7, 2025.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 5, 14-19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (United States Patent Application Publication 20210316669 A1) in view of Xiao et al. (United States Patent Application Publication 20190311487 A1), hereinafter Xiao.
Regarding claim 1, Wang teaches a method for automatically calibrating a sensor system associated with a device under test ([0032] Calibration using the polyhedral sensor target transform vehicle sensors from an uncalibrated state to a calibrated state, and improve runtime-efficiency, space-efficiency, comprehensiveness of calibration, and consistency of vehicle sensor calibration over prior calibration techniques.), comprising:
capturing a three-dimensional image of the device under test ([0118] The scene surveying system 610 captures visual and/or range data of at least a subset of the dynamic scene calibration environment 600, including the motorized turntable 410, at least some of the targets, and the vehicle 102 itself.; [0171] For some additional context on extrinsic calibration, all sensors' extrinsic properties may include relative positions in X, Y, and/or Z dimensions, as well as roll, pitch, and/or yaw...Sensors of the vehicle 102 and scene surveying system 610);
configuring a calibration target system for cooperating with the identified device under test; and calibrating the sensor system via the configured calibration target system ([0118] Data captured by the scene surveying system 610 can also be sent to the vehicle 102 and used to verify the data captured by the sensors and the intrinsic and extrinsic calibrations performed based on this data.).
Wang fails to teach the method comprising identifying a make and model of the device under test based upon the captured three-dimensional image.
However, Xiao teaches a method comprising identifying a make and model of the device under test based upon the captured three-dimensional image ([0039] For instance, the enhanced three-dimensional information may be generated based on insertion of a three-dimensional point cloud model of a vehicle into a three-dimensional point cloud model of the scene, and the three-dimensional point cloud model of the vehicle within the three-dimensional point cloud model of the scene may be labeled with information that identifies the vehicle, such as the year, make, and model the vehicle.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang to identify the make and model of a vehicle as taught by Xiao, with a reasonable expectation of success. This would have the predictable result of further identifying the object being scanned by the calibration sensor system, such identification being a generally accepted method in the art when scanning vehicles.
Regarding claim 3, Wang, as modified above, teaches the method of claim 1, wherein said capturing the three-dimensional image comprises:
rotating the device under test relative to an imaging circuit ([0103] The dynamic scene calibration environment 400 of FIG. 4 includes a motorized turntable 405 with a platform 420 that rotates about a base 425); and
capturing a three-dimensional registered point cloud image of the rotated device under test via the imaging circuit ([0107] since distance measurement sensors such as lidar typically provide a point cloud of depth measurements; [0118] The dynamic scene calibration environment 600 of FIG. 6 also includes a scene surveying system 610, which may include one or more visual cameras, IR cameras (or other IR sensors), one or more distance measurement sensors (e.g., radar, lidar, sonar, sodar, laser rangefinder),).
Regarding claim 5, Wang, as modified above, teaches the method of claim 1, wherein said identifying the device under test comprises:
extracting at least one device marker from the captured three-dimensional image; and identifying the device under test based upon the extracted device marker ([0128] In some cases, feature tracking and/or image recognition techniques applied using the computing device 784 may be used with the a camera and/or the radar, lidar, sonar, sodar, laser rangefinder, and/or other sensors 782 of the scene surveying system 610 to identify when the platform 420 has no vehicle 102 on it, when the vehicle 102 is on the platform 420, when the vehicle 102 in a defined position on the platform (e.g., optionally as guided by guide railings on the platform), when the platform 420 and/or the vehicle 102 have begun rotating from a stopped position, and/or when the platform 420 and/or the vehicle 102 have stopped rotating.).
Regarding claim 14, Wang, as modified above, teaches the method of claim 1, wherein said configuring the calibration target system includes:
creating a virtual sensor calibration environment by disposing a virtual calibration target device adjacent to a virtual device under test via a processing circuit, the virtual device under test being associated with a virtual sensor system and comprising a model of the device under test ([0100] While the thoroughfare 305 of the hallway calibration environment 300 of FIG. 3 is a straight path, in some cases it may be a curved path, and by extension the left target channel 310 and right target channel 315 may be curved to follow the path of the thoroughfare 305.);
simulating an extrinsic calibration process for the virtual sensor system via the created virtual sensor calibration environment ([0099] The sensor targets illustrated in FIG. 3 are illustrated such that some are positioned closer to the thoroughfare 305 while some are positioned farther from the thoroughfare 305. Additionally, while some targets in FIG. 3 are facing a direction perpendicular to the thoroughfare 305, others are angled up or down with respect to the direction perpendicular to the thoroughfare 305.);
adjusting at least one three dimensional position attribute of the virtual calibration target device via the processing circuit based upon said simulating the extrinsic calibration process ([0099] While the sensor targets illustrated in FIG. 3 all appear to be at the same height and all appear to not be rotated about an axis extending out from the surface of the target, it should be understood that the sensor targets may be positioned at different heights and may be rotated about an axis extending out from the surface of the target as in the targets of FIGS. 4, 5A, and 5B.); and
configuring the calibration target system by disposing a calibration target device associated with the calibration target system relative to the device under test in accordance with the at least one adjusted three dimensional position attribute of the virtual calibration target device ([0099] Together, the distance from the thoroughfare 305, the direction faced relative to the thoroughfare 305, the clustering of targets, the height, and the rotation about an axis extending out from the surface of the target may all be varied and modified to provide better intrinsic and extrinsic calibration. That is, these variations assist in intrinsic calibration in that collection of data with representations of targets in various positions, rotations, and so forth ensures that targets are recognized as they should be by any sensor, even in unusual positions and rotations, and that any necessary corrections be performed to data captured by sensors after calibration.).
Regarding claim 15, Wang, as modified above, teaches the method of claim 1, wherein said calibrating the sensor system comprises calibrating an Advanced Driver Assistance (ADAS) sensor system or an Autonomous Vehicle (AV) sensor system disposed on a vehicle via the configured calibration target system ([0034] The autonomous vehicle 102; [0103] In FIG. 4, the illustrated targets are all checkerboard-patterned camera calibration targets 200A as depicted in FIG. 2A, allowing for calibration of cameras of the vehicle 102.).
Regarding claim 16, Wang teaches a computer program product for automatically calibrating a sensor system associated with a device under test, the computer program product being encoded on one or more non-transitory machine-readable storage media ([0032] Calibration using the polyhedral sensor target transform vehicle sensors from an uncalibrated state to a calibrated state, and improve runtime-efficiency, space-efficiency, comprehensiveness of calibration, and consistency of vehicle sensor calibration over prior calibration techniques. [0209] Storage device 1530 can be a non-volatile and/or non-transitory and/or computer-readable memory device) and comprising:
instruction for capturing a three-dimensional image of the device under test ([0118] The scene surveying system 610 captures visual and/or range data of at least a subset of the dynamic scene calibration environment 600, including the motorized turntable 410, at least some of the targets, and the vehicle 102 itself.; [0171] For some additional context on extrinsic calibration, all sensors' extrinsic properties may include relative positions in X, Y, and/or Z dimensions, as well as roll, pitch, and/or yaw...Sensors of the vehicle 102 and scene surveying system 610);
instruction for configuring a robotic calibration target system for cooperating with the identified device under test; and instruction for calibrating the sensor system via the configured calibration target system ([0118] Data captured by the scene surveying system 610 can also be sent to the vehicle 102 and used to verify the data captured by the sensors and the intrinsic and extrinsic calibrations performed based on this data.).
Wang fails to teach the instruction for identifying a make and model of the device under test based upon the captured three-dimensional image.
However, Xiao teaches the instruction for identifying a make and model of the device under test based upon the captured three-dimensional image ([0039] For instance, the enhanced three-dimensional information may be generated based on insertion of a three-dimensional point cloud model of a vehicle into a three-dimensional point cloud model of the scene, and the three-dimensional point cloud model of the vehicle within the three-dimensional point cloud model of the scene may be labeled with information that identifies the vehicle, such as the year, make, and model the vehicle.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang to identify the make and model of a vehicle as taught by Xiao, with a reasonable expectation of success. This would have the predictable result of further identifying the object being scanned by the calibration sensor system, such identification being a generally accepted method in the art when scanning vehicles.
Regarding claim 17, Wang teaches a system for automatically calibrating a sensor system associated with a device under test ([0032] Calibration using the polyhedral sensor target transform vehicle sensors from an uncalibrated state to a calibrated state, and improve runtime-efficiency, space-efficiency, comprehensiveness of calibration, and consistency of vehicle sensor calibration over prior calibration techniques.), comprising:
a central turntable system for rotating the device under test ([0103] The dynamic scene calibration environment 400 of FIG. 4 includes a motorized turntable 405 with a platform 420 that rotates about a base 425);
an articulated robotic calibration target system having an end effector member for coupling with a calibration target device ([0127] The targets and/or support structures 720 may in some cases be motorized, and as such, the target control system 770 may include motors and actuators 774 that it can use to move the targets);
first and second imaging circuits being configured for capturing a three-dimensional image of the device under test as rotated by said turntable system, said first and second imaging circuits and said robotic calibration target system being disposed around a periphery of said turntable system ([0118] The scene surveying system 610 captures visual and/or range data of at least a subset of the dynamic scene calibration environment 600, including the motorized turntable 410, at least some of the targets, and the vehicle 102 itself.; [0171] For some additional context on extrinsic calibration, all sensors' extrinsic properties may include relative positions in X, Y, and/or Z dimensions, as well as roll, pitch, and/or yaw...Sensors of the vehicle 102 and scene surveying system 610; [Fig. 6]); and
a control circuit for identifying the device under test based upon the three-dimensional image ([0118] The scene surveying system 610 captures visual and/or range data of at least a subset of the dynamic scene calibration environment 600, including the motorized turntable 410, at least some of the targets, and the vehicle 102 itself.) and
configuring said robotic calibration target system for cooperating with the identified device under test, wherein the sensor system is calibrated via said configured calibration target system ([0118] Data captured by the scene surveying system 610 can also be sent to the vehicle 102 and used to verify the data captured by the sensors and the intrinsic and extrinsic calibrations performed based on this data.).
Wang fails to teach the control circuit identifying the make and model of the device under test.
However, Xiao teaches the circuit identifying the make and model of the device ([0039] For instance, the enhanced three-dimensional information may be generated based on insertion of a three-dimensional point cloud model of a vehicle into a three-dimensional point cloud model of the scene, and the three-dimensional point cloud model of the vehicle within the three-dimensional point cloud model of the scene may be labeled with information that identifies the vehicle, such as the year, make, and model the vehicle.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang to identify the make and model of a vehicle as taught by Xiao, with a reasonable expectation of success. This would have the predictable result of further identifying the object being scanned by the calibration sensor system, such identification being a generally accepted method in the art when scanning vehicles.
Regarding claim 18, Wang, as modified above, teaches the system of claim 17, wherein said articulated robotic calibration target system has between three and nine degrees of freedom and includes at least one rotational joint member, at least one prismatic joint member or both ([0127] The targets and/or support structures 720 may in some cases be motorized, and as such, the target control system 770 may include motors and actuators 774 that it can use to move the targets, for example as requested by the vehicle 102 to optimize calibration. For example, the target support structures may include a robotic arm with ball joints and/or hinge joints that may be actuated using the motors and actuators 774 to translate a target in 3D space and/or to rotate a target about any axis.).
Regarding claim 19, Wang, as modified above, teaches the system of claim 17, wherein each of said first and second imaging circuits is selected from an imaging circuit group consisting of a camera imaging circuit, a Light Detection and Ranging (LiDAR) imaging circuit, a Radio Detection and Ranging (RADAR) imaging circuit and an ultrasonic imaging circuit ([0128] The scene surveying system 610 includes a surveying device support structure 780, such as a tripod or any other structure discussed with respect to the target support structure 772, and one or more sensors 782 coupled to the support structure 780. The sensors 782 of the scene surveying system 610, like the sensors 180 of the vehicle 102, may include one or more cameras of any type (e.g., wide-angle lens, fisheye lens), one or more distance measurement sensors (e.g., radar, lidar, emdar, laser rangefinder, sonar, sodar), one or more infrared sensors, one or more microphones, or some combination thereof.).
Regarding claim 21, Wang, as modified above, teaches the method of claim 1, further comprising extracting data about one or more device components of the device under test from the captured three-dimensional image and measuring at least one dimension of the device components from the extracted data ([0171] For some additional context on extrinsic calibration, all sensors' extrinsic properties may include relative positions in X, Y, and/or Z dimensions, as well as roll, pitch, and/or yaw.),
wherein said configuring the calibration target system includes dynamically adjusting a position and an orientation of the calibration target system based upon the at least one measured dimension ([0176] In some cases, either sensor data capture by the sensors of the vehicle 102 or rotation of the platform 420 of the motorized turntable 405 may automatically begin once the pressure sensors identify that the vehicle 102 is on the platform 420 and/or once sensors identify that the vehicle 102 has stopped moving (e.g., IMU of the vehicle 102, regional pressure sensors of regions of the turntable platform 420 surface, scene surveying system 610 camera, or some combination thereof).).
Claims 4 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Xiao, further in view of Lee et al. (United States Patent Application Publication 20150054918 A1), hereinafter Lee.
Regarding claim 4, Wang, as modified above, teaches the method of claim 3.
Wang fails to teach the method wherein said capturing the three-dimensional registered point cloud image comprises: recording a plurality of encoder angles from a turntable system for rotating the device under test; capturing a sequence of image frames of the rotating device under test; generating a plurality of rigid body transforms from the captured image frames and the respective recorded encoder angles; and combining the generated rigid body transforms into a preselected coordinate system to provide the three-dimensional registered point cloud image.
However, Lee teaches the method wherein said capturing the three-dimensional registered point cloud image comprises:
recording a plurality of encoder angles from a turntable system for rotating the device under test ([0026] In detail, the processing unit 150 is further coupled to the rotary platform 130, and controls the rotary platform 130 to rotate the 3D object 10 to a plurality of orientations about the rotating axis A1.);
capturing a sequence of image frames of the rotating device under test ([0026] In this way, each time when the rotary platform 130 rotates the 3D object 10 by a predetermined angle, the image capturing unit 140 captures the object contour image of the object shadow 20 from the screen 120.);
generating a plurality of rigid body transforms from the captured image frames and the respective recorded encoder angles; and combining the generated rigid body transforms into a preselected coordinate system to provide the three-dimensional registered point cloud image ([0026] The above step is repeated to obtain the object contour images of the 3D object 10 at various angles, and the processing unit 150 is used to convert the object contour images into the object contour lines in plane coordinates, and correspond the object contour lines to the coordinates of the orientations, so as to build the digital 3D model related to the 3D object 10.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang to incorporate the encoder-angle turntable with combined frame capture as taught by Lee, with a reasonable expectation of success. This would have the predictable result of generating a three-dimensional image of the object under test with limited memory use, conserving storage space and power.
Regarding claim 20, Wang, as modified above, teaches the system of claim 17.
Wang fails to teach the system wherein said first imaging circuit and said robotic calibration target system are disposed in a first plane that passes through a central region of said turntable system, and wherein said second imaging system is disposed in a second plane that is normal to the first plane and that passes through the central region of said turntable system.
However, Lee teaches the system wherein said first imaging circuit and said robotic calibration target system are disposed in a first plane that passes through a central region of said turntable system, and wherein said second imaging system is disposed in a second plane that is normal to the first plane and that passes through the central region of said turntable system ([0032] In this way, the processing unit 150 can build the digital 3D model related to the 3D object 10 having the recess portion 12 according to the object contour images captured by the image capturing unit 140 and the grey level image captured by the auxiliary image capturing unit 160.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang to incorporate the two-plane imaging and robotic system as taught by Lee, with a reasonable expectation of success. This would have the predictable result of providing the multiple dimensional views of an object needed by the target system that calibrates sensors, with a full field-of-view imaging system ensuring the AV sensor system is fully calibrated for a real-world environment.
Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Xiao, further in view of Green et al. (United States Patent Application Publication 20210070286 A1), hereinafter Green.
Regarding claim 6, Wang, as modified above, teaches the method of claim 5.
Wang fails to teach the method wherein said extracting the at least one device marker includes extracting a device mirror, a device bumper, a device wheel, a center of the device wheel, a center of device axle, a device logo, a device thrust line, a device door, a device pillar from the captured three-dimensional image.
However, Green teaches the method wherein said extracting the at least one device marker includes extracting a device mirror, a device bumper, a device wheel, a center of the device wheel, a center of device axle, a device logo, a device thrust line, a device door, a device pillar from the captured three-dimensional image ([0049] The vehicle system may determine the specification information of the nearby vehicle based on one or more features (e.g., a vehicle shape, a body part, a bumper, a head light, a corner, a logo, a back profile, a side profile, a blurred profile, a front profile, a window shape, a vehicle shape, a text indicator, a sign, etc.) associated with that vehicle.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang to incorporate device marker identification using the claimed list of features as taught by Green, with a reasonable expectation of success. This would have the predictable result of using distinguishing features of the vehicle to quickly identify the vehicle and configure the sensor calibration test objects.
Regarding claim 7, Wang, as modified above, teaches the method of claim 5.
Wang fails to teach the method wherein said extracting the at least one device marker comprises: training a machine learning framework with device data for a plurality of different types of devices under test; and extracting the at least one device marker from the captured three-dimensional image via the trained machine learning framework.
However, Green teaches the method wherein said extracting the at least one device marker comprises: training a machine learning framework with device data for a plurality of different types of devices under test; and extracting the at least one device marker from the captured three-dimensional image via the trained machine learning framework ([0049] In particular embodiments, the vehicle system may use a computer vision algorithm or a machine-learning (ML) model to determine the vehicle manufacturer, type, and model based on an image including at least a part of the vehicle. For example, the vehicle system may capture an image including the bumper of a nearby truck, determine the truck's manufacturer, type, and model based on that bumper image, and determine the specification information (e.g., a length, a width, a height, a weight, etc.) based on the truck's manufacturer and model. As another example, the vehicle system may use a computer vision algorithm or a machine-learning (ML) model to recognize a vehicle manufacturer and model based on an image including a side profile of that vehicle and determine the vehicle specification information.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang to incorporate the machine learning training framework as taught by Green, with a reasonable expectation of success. This would have the predictable result of using machine learning techniques to expedite the automatic identification process.
Claims 8-13 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Xiao, further in view of Cantadori et al. (United States Patent Application Publication 20190392610 A1), hereinafter Cantadori.
Regarding claim 8, Wang, as modified above, teaches the method of claim 1.
Wang fails to teach the method wherein said identifying the device under test includes identifying the sensor system based upon the captured three-dimensional image.
However, Cantadori teaches the method wherein said identifying the device under test includes identifying the sensor system based upon the captured three-dimensional image ([0067] The stated technical task and specified objects are substantially achieved by a method of calibrating an optical sensor mounted on board of a vehicle, comprising the steps of: [0068] positioning the vehicle in a test station; [0069] arranging a projection surface for images or videos in front of said test station; [0070] identifying the type of optical sensor;).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang to incorporate identification of the device under test from a three-dimensional image as taught by Cantadori, with a reasonable expectation of success. This would have the predictable result of utilizing the captured three-dimensional image to identify the device to be tested.
Regarding claim 9, Wang, as modified above, teaches the method of claim 1.
Wang fails to teach the method wherein said configuring the calibration target system includes: selecting a calibration target device with calibration indicia suitable for calibrating the sensor system of the identified device under test; disposing the selected calibration target device on a calibration target positioning system of the calibration target system; and establishing at least one position attribute of the selected calibration target device relative to the sensor system via the calibration target positioning system.
However, Cantadori teaches wherein said configuring the calibration target system includes: selecting a calibration target device with calibration indicia suitable for calibrating the sensor system of the identified device under test ([0132] The control unit 5 searches inside the memory 6 for the image (in the former case) or the video (in the latter case) associated with the type of optical sensor 2 to be calibrated.);
disposing the selected calibration target device on a calibration target positioning system of the calibration target system; and establishing at least one position attribute of the selected calibration target device relative to the sensor system via the calibration target positioning system ([0125] Once the type of optical sensor 2 has been identified, the control unit 5 (in the scan tool 20) can determine the spatial measurement position that the monitor must assume with respect to the optical sensor 2 during calibration.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang to incorporate selecting calibration targets and positioning them based on system requirements as taught by Cantadori, with a reasonable expectation of success. This would have the predictable result of ensuring the sensor is properly calibrated based on the needs and standards of the individual system identified.
Regarding claim 10, Wang, as modified above, teaches the method of claim 9.
Wang fails to teach the method wherein said selecting the calibration target device comprises selecting the calibration target device from a plurality of calibration target devices with different calibration indicia.
However, Cantadori teaches the method wherein said selecting the calibration target device comprises selecting the calibration target device from a plurality of calibration target devices with different calibration indicia ([0132] The control unit 5 searches inside the memory 6 for the image (in the former case) or the video (in the latter case) associated with the type of optical sensor 2 to be calibrated.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang to incorporate selection of the calibration target from a plurality of options as taught by Cantadori, with a reasonable expectation of success. This would have the predictable result of utilizing a known collection of calibration targets to adequately test the sensor system under test.
Regarding claim 11, Wang, as modified above, teaches the method of claim 9, wherein said establishing the at least one position attribute of the selected calibration target device includes translating in a radial direction and rotating in three dimensions the calibration target positioning system relative to the device under test ([0107] As the vehicle 102 rotates about the base 425 on the platform 420 of the motorized turntable 405, and/or during stops between rotations, the vehicle 102 and its computer 110 can detect the combined range/camera extrinsic calibration targets 250 using both its distance measurement sensors).
Regarding claim 12, Wang, as modified above, teaches the method of claim 9, wherein said configuring the calibration target system includes determining a travel path for transitioning the calibration target system into the at least one established position attribute while avoiding a collision between the selected calibration target device and the device under test ([0100] While the thoroughfare 305 of the hallway calibration environment 300 of FIG. 3 is a straight path, in some cases it may be a curved path, and by extension the left target channel 310 and right target channel 315 may be curved to follow the path of the thoroughfare 305.).
Regarding claim 13, Wang, as modified above, teaches the method of claim 12, wherein said determining the travel path comprises solving forward and inverse kinematics of the calibration target system and a turntable system for rotating the device under test ([0100] While the thoroughfare 305 of the hallway calibration environment 300 of FIG. 3 is a straight path, in some cases it may be a curved path, and by extension the left target channel 310 and right target channel 315 may be curved to follow the path of the thoroughfare 305.; [0103] The dynamic scene calibration environment 400 of FIG. 4 includes a motorized turntable 405 with a platform 420 that rotates about a base 425. In some cases, the platform 420 may be raised above the floor/ground around the turntable 405, with the base 425 gradually inclined up to enable the vehicle 102 to drive up the base 425 and onto the platform 420, or to drive off of the platform 420 via the base 425.).
Response to Arguments
Applicant's arguments filed January 9th, 2026 have been fully considered but they are not persuasive.
Regarding the applicant's argument that the prior art of record fails to teach the amended claim limitation of using the sensor to identify the make and model of a vehicle, the examiner admits that the prior art is deficient in this matter only because the limitation was not present in former versions of the claims as written. Pursuant to the amended claims, new prior art has been entered, necessitated by the amendment, that reads on such a sensor, with reasons for obviousness to combine given above.
Further, regarding the applicant's argument that the prior art of Wang fails to teach the sensors as rendering a three-dimensional image, the examiner points to the citation of Wang previously cited, in which the scene surveying system is noted as including a plurality of sensors, including cameras, a lidar, a radar, and a rangefinder. The combination of these sensors, as well as the description of the scene, teaches that a three-dimensional rendered image is being captured. Further, as cited in the amended claim sections above, Wang further clarifies that this and other sensors scan the targets in three dimensions. As such, the prior art is maintained and amended as necessitated by the amended limitations, and the rejection is maintained in this Final Office Action.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT WILLIAM VASQUEZ JR whose telephone number is (571)272-3745. The examiner can normally be reached Monday through Thursday and Flex Friday, 8:00-5:00 PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, HELAL ALGAHAIM can be reached at (571)270-5227. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROBERT W VASQUEZ/Examiner, Art Unit 3645
/HELAL A ALGAHAIM/SPE, Art Unit 3645