DETAILED ACTION
Claims 1-20 are pending and have been examined.
Claims 1-20 are rejected in this Non-Final Rejection.
Notice of AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 03/08/22, 04/10/23, 07/10/23, 09/28/23, 08/20/24 (2), 10/23/24 (2), 01/21/25 (2) & 08/20/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDSs have been/are being considered by the examiner.
Related Co-Pending Applications/Patents
The Examiner acknowledges Applicant’s notification in the IDS papers regarding commonly owned patent applications, U.S. Patent Application Nos. 17/654,031 & 17/654,037, which appear to broadly cover and claim some similar/same concepts as the claims of the present application. However, a double patenting rejection is not warranted at this time.
Specification
The disclosure is objected to because of the following informalities: the last line of Para. [0012] of the specification recites “… which results is a high increase ...”, which appears to be an artifact of Applicant’s editing process. Para. [0012] should be amended to recite: “… which results in [[is]] a high increase ...”.
Claim Objections
Claims 5 and 13 are objected to because of the following informalities:
Claim 5 recites “… determining another potential target of the plurality of potential targets to the first electromagnetic ray information …”, which appears to be intended to convey that the “another” potential target is different from the (original) potential target. If the above understanding is correct, Examiner suggests amending claim 5 as follows: “… determining another potential target of the plurality of potential targets, the other potential target being next-closest, from among the potential targets, to the first electromagnetic ray information …”. (Under this suggested amendment, claim 6 would need to be canceled.)
Claim 13 recites a substantially similar limitation to that recited in claim 5; therefore, it is objected to for the same reason.
Claim Rejections - 35 U.S.C. § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 19 and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claims 19 and 20 recite “computer-readable storage media.” The claims do not fall within at least one of the four categories of patent-eligible subject matter because the claimed computer-readable medium is not defined by Applicant and is therefore open to a signal embodiment, which is non-statutory. The specification fails to specifically exclude a transitory embodiment.
Claim Rejections - 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 5-8, 12-14, 16, 17, 19 and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over KRISTENSEN et al. (U.S. Patent Application Publication No. 2021/0286923 A1) in view of VOLKOV et al. (U.S. Patent Application Publication No. 2023/0104199 A1).
Regarding claim 1, KRISTENSEN discloses a method comprising: receiving first electromagnetic ray information for an electromagnetic ray that is simulated in an environment that includes a plurality of potential targets (simulation system 400A may use real-time ray-tracing, Para. [0080] of KRISTENSEN; See also the LIDAR data, RADAR data, ultrasonic sensor data, image data, and/or other sensor data 102 that is used to derive an input scene configuration to the sensor model 120 (whether for training or in operation) may be generated in a virtual or simulated environment … for example, with respect to a virtual vehicle (e.g., a car, a truck, a water vessel, a construction vehicle, an aircraft, a drone, etc.), the virtual vehicle may include virtual sensors (e.g., virtual cameras, virtual LIDAR, virtual RADAR, virtual SONAR, etc.) that capture simulated or virtual data of the virtual or simulated environment … in addition to or alternatively from real-world data being used to derive an input scene configuration to the sensor model 120, simulated or virtual sensor data may be used and thus included in the sensor data 102, Para. [0063] of KRISTENSEN; See also RADAR target clustering and tracking may be used to determine the associations between RADAR points 312 and objects 306—or bounding shapes 304 or other property or dimension corresponding thereto, Para. [0055] of KRISTENSEN); determining a potential target of the plurality of potential targets that is closest to the first electromagnetic ray information (a LIDAR point 310 closest to a centroid of the bounding shape 304 and/or the cropped bounding shape 308 may be used to determine the final distance value, Para. [0053] of KRISTENSEN; See also RADAR target clustering and tracking may be used to determine the associations between RADAR points 312 and objects 306—or bounding shapes 304 or other property or dimension corresponding thereto, Para. 
[0055] of KRISTENSEN; See also a list of identified reflections may be pared down to a designated number (e.g., by identifying the designated number of reflections based on some metric such as reflections having the largest reflectivity values, closest range, etc.), the designated reflections and corresponding reflection characteristics may be encoded into corresponding vectors, and the vectors may be concatenated to form a single dimensional input vector … generally, the number of reflections may be selected to match the dimensionality of the input(s) into the sensor model 120 … by way of non-limiting example, a list of 180 reflections, each having values for 5 reflection characteristics (e.g., bearing, elevation, range, velocity, RCS), may be encoded into inputs of the sensor model 120 … as such, in some scenarios where there are fewer detected reflections than there are inputs into the sensor model 120, some of the input values may be null or zero, KRISTENSEN at Para. [0044]); converting the first electromagnetic ray information to second electromagnetic ray information based on the potential target (radio waves reflect off of certain objects and materials, and a RADAR sensor (which may correspond to the origin of the coordinate system in FIG. 2) may detect these reflections and reflection characteristics such as bearing, azimuth, elevation, range (e.g., time of beam flight), intensity, Doppler velocity, RADAR cross section (RCS), reflectivity, SNR, and/or the like … generally, reflections and reflection characteristics may depend on the objects in a scene, speeds, materials, sensor mounting position and orientation, etc. … reflection data may be combined with position and orientation data (e.g., from GNSS and IMU sensors) to generate point clouds … in FIG. 
2, each of the RADAR points 212 represents the location of a detected reflection in the world space … collectively, the RADAR points 212 may form a point cloud representing detected reflections in the scene, Para. [0042] of KRISTENSEN; See also conversions between world space locations and corresponding image space locations of LIDAR data may be known, or determined, using intrinsic and/or extrinsic parameters—e.g., after calibration—of the LIDAR sensor(s) and/or the camera(s) that generated the image 302 … as such, because this relationship between world space and image space is known, and because the LIDAR data and the image data may have been captured substantially simultaneously, the LIDAR data distance predictions may be associated with the various objects 306—or their corresponding bounding shapes 304 or other property or dimension—in the image 302, Para. [0048] of KRISTENSEN); determining, based on the second electromagnetic ray information and a pre-calculated acceleration data structure indicative of a geometric profile of the potential target, whether the electromagnetic ray hits a facet of the potential target (when a significant number of rays strike a tracked object, that object may be added to the report of the LIDAR data … the rays may bounce from water, reflective materials, and/or windows … RADAR may be implemented similarly to LIDAR, Para. 
[0117] of KRISTENSEN; See also reflection data may be combined with position and orientation data (e.g., from GNSS and/or IMU sensors) to generate LIDAR point clouds … properties of objects in the scene such as positions or dimensions may be encoded into a suitable representation of a scene configuration … geometric description(s) of a scene may be encoded into a suitable network input(s) … for example, two or three dimensional geometric model(s) may be arranged in a scene and rendered (e.g., from a desired point of view for the particular sensor being modeled) to form an image, which serve as an encoded scene configuration (or a portion thereof), Para. [0029] of KRISTENSEN); and determining whether to calculate an electromagnetic response of the potential target based on whether the electromagnetic ray hits a facet of the potential target (when a significant number of rays strike a tracked object, that object may be added to the report of the LIDAR data … the rays may bounce from water, reflective materials, and/or windows … RADAR may be implemented similarly to LIDAR, Para. [0117] of KRISTENSEN; See also a sensor model may learn to predict virtual sensor data from a representation of a scene configuration, Para. [0027] of KRISTENSEN).
Although KRISTENSEN teaches ray casting and ray tracing (Paras. [0002] & [0003]), KRISTENSEN appears to fail to explicitly disclose the first electromagnetic ray information including a starting point and direction of the electromagnetic ray relative to a global coordinate system of the environment and the second electromagnetic ray information comprising the starting point and direction relative to a local coordinate system of the potential target.
VOLKOV, however, is in the field of ray tracing/casting (Para. [0003] of VOLKOV) and teaches the first electromagnetic ray information including a starting point and direction of the electromagnetic ray relative to a global coordinate system of the environment (ray origin, ray direction, Paras. [0945]-[0947] of VOLKOV; See also global and local coordinate system, Para. [0393] of VOLKOV) and the second electromagnetic ray information comprising the starting point and direction relative to a local coordinate system of the potential target (a coordinate space is calculated which is aligned to the hair direction … for example, the z-axis may be determined to point into the hair direction, while the x and y axes are perpendicular to the z-axis … intersecting a ray with such an oriented bound requires first transforming the ray into the oriented space and then performing a standard ray/box intersection test, Para. [0749] of VOLKOV; See also the bounds of ray packets are stored in in full precision but individual ray points P1-P3 are transmitted as indexed offsets to the bounds … similarly, a plurality of local coordinate systems may be generated which use 8-bit integer values as local coordinates … the location of the origin of each of these local coordinate systems may be encoded using the full precision (e.g., 32-bit floating point) values, effectively connecting the global and local coordinate systems, Para. [0393] of VOLKOV).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify the ray tracing/casting-based sensor simulation method of KRISTENSEN with the ray tracing information of VOLKOV for the purpose of lossy compression (Para. [0393] of VOLKOV). In addition, Para. [0187] of KRISTENSEN explicitly suggests using ray-tracing hardware to quickly determine the positions and extents of objects within a world model.
Regarding claim 2, KRISTENSEN as modified teaches the method of claim 1, wherein the determining whether to calculate the electromagnetic response of the potential target comprises: determining to calculate the electromagnetic response of the potential target responsive to determining that the electromagnetic ray hit a facet of the potential target (when a significant number of rays strike a tracked object, that object may be added to the report of the LIDAR data, Para. [0117] of KRISTENSEN; See also reflection data may be combined with position and orientation data (e.g., from GNSS and/or IMU sensors) to generate LIDAR point clouds … properties of objects in the scene such as positions or dimensions may be encoded into a suitable representation of a scene configuration … geometric description(s) of a scene may be encoded into a suitable network input(s). For example, two or three dimensional geometric model(s) may be arranged in a scene and rendered (e.g., from a desired point of view for the particular sensor being modeled) to form an image, which serve as an encoded scene configuration (or a portion thereof), Para. [0029] of KRISTENSEN); and responsive to calculating the electromagnetic response, outputting the electromagnetic response to an electromagnetic model for input to an execution of an electromagnetic sensor simulation of the environment (a sensor model may learn to predict virtual sensor data from a representation of a scene configuration, Para. [0027] of KRISTENSEN; See also the depth map may be input into a channel of the sensor model 120, Para. [0061] of KRISTENSEN; See also geometric description(s) of a scene may be encoded into a suitable network input(s) … for example, two or three dimensional geometric model(s) may be arranged in a scene and rendered (e.g., from a desired point of view for the particular sensor being modeled) to form an image, which serve as an encoded scene configuration (or a portion thereof), Para. [0029] of KRISTENSEN).
Regarding claim 3, KRISTENSEN as modified teaches the method of claim 2, wherein the calculation of the electromagnetic response comprises: determining, based on the electromagnetic ray hitting the facet of the potential target, a reflected electromagnetic ray being reflected from the potential target (reflection data may be combined with position and orientation data (e.g., from GNSS and/or IMU sensors) to generate LIDAR point clouds … properties of objects in the scene such as positions or dimensions may be encoded into a suitable representation of a scene configuration … geometric description(s) of a scene may be encoded into a suitable network input(s). For example, two or three dimensional geometric model(s) may be arranged in a scene and rendered (e.g., from a desired point of view for the particular sensor being modeled) to form an image, which serve as an encoded scene configuration (or a portion thereof), Para. [0029] of KRISTENSEN); determining third electromagnetic ray information comprising a starting point and direction of the reflected electromagnetic ray relative to the local coordinate system of the potential target (a coordinate space is calculated which is aligned to the hair direction … for example, the z-axis may be determined to point into the hair direction, while the x and y axes are perpendicular to the z-axis … intersecting a ray with such an oriented bound requires first transforming the ray into the oriented space and then performing a standard ray/box intersection test, Para. 
[0749] of VOLKOV; See also the bounds of ray packets are stored in in full precision but individual ray points P1-P3 are transmitted as indexed offsets to the bounds … similarly, a plurality of local coordinate systems may be generated which use 8-bit integer values as local coordinates … the location of the origin of each of these local coordinate systems may be encoded using the full precision (e.g., 32-bit floating point) values, effectively connecting the global and local coordinate systems, Para. [0393] of VOLKOV; See also ray origin, ray direction, Paras. [0945]-[0947] of VOLKOV; See also global and local coordinate system, Para. [0393] of VOLKOV); and converting the third electromagnetic ray information to fourth electromagnetic ray information including the starting point and direction of the reflected electromagnetic ray relative to the global coordinate system of the environment (conversions between world space locations and corresponding image space locations of LIDAR data may be known, or determined, using intrinsic and/or extrinsic parameters—e.g., after calibration—of the LIDAR sensor(s) and/or the camera(s) that generated the image 302 … as such, because this relationship between world space and image space is known, and because the LIDAR data and the image data may have been captured substantially simultaneously, the LIDAR data distance predictions may be associated with the various objects 306—or their corresponding bounding shapes 304 or other property or dimension—in the image 302, Para. [0048] of KRISTENSEN; See also ray origin, ray direction, Paras. [0945]-[0947] of VOLKOV; See also global and local coordinate system, Para. [0393] of VOLKOV), the fourth electromagnetic ray information being used for a next potential target instead of the first electromagnetic ray information received (spawning a new ray from the hit shader to find the next closest intersection (with the ray origin offset, so the same intersection will not occur again), Para.
[0463] of VOLKOV; See also traversal state 4400 also includes the ray in world space 4401 and object space 4402 as well as hit information for the closest intersecting primitive, Para. [0435] of VOLKOV).
Regarding claim 5, KRISTENSEN as modified teaches the method of claim 1, wherein the determining whether to calculate the electromagnetic response of the potential target comprises: determining to not calculate the electromagnetic response of the potential target responsive to determining that the electromagnetic ray does not hit a facet of the potential target (performance could be as simple as testing that the new CAN data does not create a false positive, Para. [0127] of KRISTENSEN); and the method further comprises: determining another potential target of the plurality of potential targets to the first electromagnetic ray information (spawning a new ray from the hit shader to find the next closest intersection (with the ray origin offset, so the same intersection will not occur again), Para. [0463] of VOLKOV; See also a LIDAR point 310 closest to a centroid of the bounding shape 304 and/or the cropped bounding shape 308 may be used to determine the final distance value, Para. [0053] of KRISTENSEN; See also RADAR target clustering and tracking may be used to determine the associations between RADAR points 312 and objects 306—or bounding shapes 304 or other property or dimension corresponding thereto, Para. 
[0055] of KRISTENSEN; See also a list of identified reflections may be pared down to a designated number (e.g., by identifying the designated number of reflections based on some metric such as reflections having the largest reflectivity values, closest range, etc.), the designated reflections and corresponding reflection characteristics may be encoded into corresponding vectors, and the vectors may be concatenated to form a single dimensional input vector … generally, the number of reflections may be selected to match the dimensionality of the input(s) into the sensor model 120 … by way of non-limiting example, a list of 180 reflections, each having values for 5 reflection characteristics (e.g., bearing, elevation, range, velocity, RCS), may be encoded into inputs of the sensor model 120 … as such, in some scenarios where there are fewer detected reflections than there are inputs into the sensor model 120, some of the input values may be null or zero, Para. [0044] of KRISTENSEN); converting the first electromagnetic ray information to other second electromagnetic ray information based on the other potential target (radio waves reflect off of certain objects and materials, and a RADAR sensor (which may correspond to the origin of the coordinate system in FIG. 2) may detect these reflections and reflection characteristics such as bearing, azimuth, elevation, range (e.g., time of beam flight), intensity, Doppler velocity, RADAR cross section (RCS), reflectivity, SNR, and/or the like … generally, reflections and reflection characteristics may depend on the objects in a scene, speeds, materials, sensor mounting position and orientation, etc. … reflection data may be combined with position and orientation data (e.g., from GNSS and IMU sensors) to generate point clouds … in FIG.
2, each of the RADAR points 212 represents the location of a detected reflection in the world space … collectively, the RADAR points 212 may form a point cloud representing detected reflections in the scene, Para. [0042] of KRISTENSEN; See also conversions between world space locations and corresponding image space locations of LIDAR data may be known, or determined, using intrinsic and/or extrinsic parameters—e.g., after calibration—of the LIDAR sensor(s) and/or the camera(s) that generated the image 302 … as such, because this relationship between world space and image space is known, and because the LIDAR data and the image data may have been captured substantially simultaneously, the LIDAR data distance predictions may be associated with the various objects 306—or their corresponding bounding shapes 304 or other property or dimension—in the image 302, Para. [0048] of KRISTENSEN), the other second electromagnetic ray information comprising the starting point and direction relative to a local coordinate system of the other potential target (ray origin, ray direction, Paras. [0945]-[0947] of VOLKOV; See also a coordinate space is calculated which is aligned to the hair direction. For example, the z-axis may be determined to point into the hair direction, while the x and y axes are perpendicular to the z-axis … intersecting a ray with such an oriented bound requires first transforming the ray into the oriented space and then performing a standard ray/box intersection test, Para. 
[0749] of VOLKOV; See also the bounds of ray packets are stored in in full precision but individual ray points P1-P3 are transmitted as indexed offsets to the bounds … similarly, a plurality of local coordinate systems may be generated which use 8-bit integer values as local coordinates … the location of the origin of each of these local coordinate systems may be encoded using the full precision (e.g., 32-bit floating point) values, effectively connecting the global and local coordinate systems, Para. [0393] of VOLKOV); determining, based on the other second electromagnetic ray information, whether the electromagnetic ray hits a facet of the other potential target (when a significant number of rays strike a tracked object, that object may be added to the report of the LIDAR data … the rays may bounce from water, reflective materials, and/or windows … RADAR may be implemented similarly to LIDAR, Para. [0117] of KRISTENSEN; See also reflection data may be combined with position and orientation data (e.g., from GNSS and/or IMU sensors) to generate LIDAR point clouds … properties of objects in the scene such as positions or dimensions may be encoded into a suitable representation of a scene configuration … geometric description(s) of a scene may be encoded into a suitable network input(s) … for example, two or three dimensional geometric model(s) may be arranged in a scene and rendered (e.g., from a desired point of view for the particular sensor being modeled) to form an image, which serve as an encoded scene configuration (or a portion thereof), Para. 
[0029] of KRISTENSEN); and determining whether to calculate the electromagnetic response associated with the electromagnetic ray based on the other potential target based on whether the electromagnetic ray hits a facet of the other potential target (when a significant number of rays strike a tracked object, that object may be added to the report of the LIDAR data … the rays may bounce from water, reflective materials, and/or windows … RADAR may be implemented similarly to LIDAR, Para. [0117] of KRISTENSEN; See also a sensor model may learn to predict virtual sensor data from a representation of a scene configuration, Para. [0027] of KRISTENSEN).
Regarding claim 6, KRISTENSEN as modified teaches the method of claim 5, wherein the other potential target is a next-closest potential target (spawning a new ray from the hit shader to find the next closest intersection (with the ray origin offset, so the same intersection will not occur again), Para. [0463] of VOLKOV; See also traversal state 4400 also includes the ray in world space 4401 and object space 4402 as well as hit information for the closest intersecting primitive, Para. [0435] of VOLKOV).
Regarding claim 7, KRISTENSEN as modified teaches the method of claim 1, wherein the pre-calculated acceleration data structure represents the potential target for a duration of an electromagnetic sensor simulation (an important element in such simulation environments is the creation of realistic sensor data from a given scene configuration, Para. [0003] of KRISTENSEN; See also ray tracing is a technique in which a light transport is simulated through physically-based rendering, Para. [0003] of VOLKOV; [the simulated ray tracing through physically-based rendering is interpreted as being during the duration of the electromagnetic sensor simulation]).
Regarding claim 8, KRISTENSEN as modified teaches the method of claim 7, wherein a same pre-calculated acceleration data structure indicative of a single geometric profile is used to represent multiple potential targets having a same computer-aided design (CAD) model (geometric description(s) of a scene may be encoded into a suitable network input(s). For example, two or three dimensional geometric model(s) may be arranged in a scene and rendered (e.g., from a desired point of view for the particular sensor being modeled) to form an image, which serve as an encoded scene configuration (or a portion thereof), Para. [0029] of KRISTENSEN; See also a computer aided design (CAD), Para. [0062] of KRISTENSEN).
Regarding claim 12, KRISTENSEN discloses a system (simulation system 400A may use real-time ray-tracing, Para. [0080] of KRISTENSEN) comprising: at least one processor (various functions may be carried out by a processor executing instructions stored in memory, Para. [0033] of KRISTENSEN) configured to: receive first electromagnetic ray information for a set of electromagnetic rays that are simulated in an environment that includes a plurality of potential targets (the LIDAR data, RADAR data, ultrasonic sensor data, image data, and/or other sensor data 102 that is used to derive an input scene configuration to the sensor model 120 (whether for training or in operation) may be generated in a virtual or simulated environment … for example, with respect to a virtual vehicle (e.g., a car, a truck, a water vessel, a construction vehicle, an aircraft, a drone, etc.), the virtual vehicle may include virtual sensors (e.g., virtual cameras, virtual LIDAR, virtual RADAR, virtual SONAR, etc.) that capture simulated or virtual data of the virtual or simulated environment … in addition to or alternatively from real-world data being used to derive an input scene configuration to the sensor model 120, simulated or virtual sensor data may be used and thus included in the sensor data 102, Para. [0063] of KRISTENSEN; See also RADAR target clustering and tracking may be used to determine the associations between RADAR points 312 and objects 306—or bounding shapes 304 or other property or dimension corresponding thereto, Para. [0055] of KRISTENSEN); determine a potential target of the plurality of potential targets that is closest to the first electromagnetic ray information (a LIDAR point 310 closest to a centroid of the bounding shape 304 and/or the cropped bounding shape 308 may be used to determine the final distance value, Para. 
[0053] of KRISTENSEN; See also RADAR target clustering and tracking may be used to determine the associations between RADAR points 312 and objects 306—or bounding shapes 304 or other property or dimension corresponding thereto, Para. [0055] of KRISTENSEN; See also a list of identified reflections may be pared down to a designated number (e.g., by identifying the designated number of reflections based on some metric such as reflections having the largest reflectivity values, closest range, etc.), the designated reflections and corresponding reflection characteristics may be encoded into corresponding vectors, and the vectors may be concatenated to form a single dimensional input vector … generally, the number of reflections may be selected to match the dimensionality of the input(s) into the sensor model 120 … by way of non-limiting example, a list of 180 reflections, each having values for 5 reflection characteristics (e.g., bearing, elevation, range, velocity, RCS), may be encoded into inputs of the sensor model 120 … as such, in some scenarios where there are fewer detected reflections than there are inputs into the sensor model 120, some of the input values may be null or zero, KRISTENSEN at Para. [0044]); convert the first electromagnetic ray information to second electromagnetic ray information based on the potential target (radio waves reflect off of certain objects and materials, and a RADAR sensor (which may correspond to the origin of the coordinate system in FIG. 2) may detect these reflections and reflection characteristics such as bearing, azimuth, elevation, range (e.g., time of beam flight), intensity, Doppler velocity, RADAR cross section (RCS), reflectivity, SNR, and/or the like … generally, reflections and reflection characteristics may depend on the objects in a scene, speeds, materials, sensor mounting position and orientation, etc. 
… reflection data may be combined with position and orientation data (e.g., from GNSS and IMU sensors) to generate point clouds … in FIG. 2, each of the RADAR points 212 represents the location of a detected reflection in the world space … collectively, the RADAR points 212 may form a point cloud representing detected reflections in the scene, Para. [0042] of KRISTENSEN; See also conversions between world space locations and corresponding image space locations of LIDAR data may be known, or determined, using intrinsic and/or extrinsic parameters—e.g., after calibration—of the LIDAR sensor(s) and/or the camera(s) that generated the image 302 … as such, because this relationship between world space and image space is known, and because the LIDAR data and the image data may have been captured substantially simultaneously, the LIDAR data distance predictions may be associated with the various objects 306—or their corresponding bounding shapes 304 or other property or dimension—in the image 302, Para. [0048] of KRISTENSEN); determine, based on the second electromagnetic ray information and a pre-calculated acceleration data structure indicative of a geometric profile of the potential target, whether a subset of the electromagnetic rays hit a facet of the potential target (when a significant number of rays strike a tracked object, that object may be added to the report of the LIDAR data … the rays may bounce from water, reflective materials, and/or windows … RADAR may be implemented similarly to LIDAR, Para. 
[0117] of KRISTENSEN; See also reflection data may be combined with position and orientation data (e.g., from GNSS and/or IMU sensors) to generate LIDAR point clouds … properties of objects in the scene such as positions or dimensions may be encoded into a suitable representation of a scene configuration … geometric description(s) of a scene may be encoded into a suitable network input(s) … for example, two or three dimensional geometric model(s) may be arranged in a scene and rendered (e.g., from a desired point of view for the particular sensor being modeled) to form an image, which serve as an encoded scene configuration (or a portion thereof), Para. [0029] of KRISTENSEN); responsive to determining that the subset of electromagnetic rays hit the facet of the potential target, calculate an electromagnetic response of the subset of electromagnetic rays that hit the facet on the potential target (when a significant number of rays strike a tracked object, that object may be added to the report of the LIDAR data … the rays may bounce from water, reflective materials, and/or windows … RADAR may be implemented similarly to LIDAR, Para. [0117] of KRISTENSEN; See also a sensor model may learn to predict virtual sensor data from a representation of a scene configuration, Para. [0027] of KRISTENSEN); determine whether the subset of the electromagnetic rays hit any facet of the potential target (when a significant number of rays strike a tracked object, that object may be added to the report of the LIDAR data … the rays may bounce from water, reflective materials, and/or windows … RADAR may be implemented similarly to LIDAR, Para. 
[0117] of KRISTENSEN; See also reflection data may be combined with position and orientation data (e.g., from GNSS and/or IMU sensors) to generate LIDAR point clouds … properties of objects in the scene such as positions or dimensions may be encoded into a suitable representation of a scene configuration … geometric description(s) of a scene may be encoded into a suitable network input(s) … for example, two or three dimensional geometric model(s) may be arranged in a scene and rendered (e.g., from a desired point of view for the particular sensor being modeled) to form an image, which serve as an encoded scene configuration (or a portion thereof), Para. [0029] of KRISTENSEN); and responsive to determining that the subset of the electromagnetic rays hit a facet of the potential target: calculate the electromagnetic response of the subset of the electromagnetic rays that hit the facet on the potential target (when a significant number of rays strike a tracked object, that object may be added to the report of the LIDAR data … the rays may bounce from water, reflective materials, and/or windows … RADAR may be implemented similarly to LIDAR, Para. [0117] of KRISTENSEN; See also a sensor model may learn to predict virtual sensor data from a representation of a scene configuration, Para. [0027] of KRISTENSEN; See also reflection data may be combined with position and orientation data (e.g., from GNSS and/or IMU sensors) to generate LIDAR point clouds … properties of objects in the scene such as positions or dimensions may be encoded into a suitable representation of a scene configuration … geometric description(s) of a scene may be encoded into a suitable network input(s) … for example, two or three dimensional geometric model(s) may be arranged in a scene and rendered (e.g., from a desired point of view for the particular sensor being modeled) to form an image, which serve as an encoded scene configuration (or a portion thereof), Para. 
[0029] of KRISTENSEN); and output the electromagnetic response to an electromagnetic model for input to an execution of an electromagnetic sensor simulation of the environment; or responsive to determining that the subset of the electromagnetic rays do not hit any facet of the potential target, refrain from calculating or outputting the electromagnetic response of the subset of the electromagnetic rays that do not hit any facet of the potential target (the depth map may be input into a channel of the sensor model 120, Para. [0061] of KRISTENSEN; See also geometric description(s) of a scene may be encoded into a suitable network input(s) … for example, two or three dimensional geometric model(s) may be arranged in a scene and rendered (e.g., from a desired point of view for the particular sensor being modeled) to form an image, which serve as an encoded scene configuration (or a portion thereof), Para. [0029] of KRISTENSEN).
Although KRISTENSEN teaches ray casting and ray tracing (Paras. [0002] & [0003]), KRISTENSEN appears to fail to explicitly disclose the first electromagnetic ray information including a starting point and direction of each respective electromagnetic ray relative to a global coordinate system of the environment and the second electromagnetic ray information comprising the starting point and direction relative to a local coordinate system of the potential target.
VOLKOV, however, is in the field of ray tracing/casting (Para. [0003] of VOLKOV) and teaches the first electromagnetic ray information including a starting point and direction of each respective electromagnetic ray relative to a global coordinate system of the environment (ray origin, ray direction, Paras. [0945]-[0947] of VOLKOV; See also global and local coordinate system, Para. [0393] of VOLKOV) and the second electromagnetic ray information comprising the starting point and direction relative to a local coordinate system of the potential target (a coordinate space is calculated which is aligned to the hair direction … for example, the z-axis may be determined to point into the hair direction, while the x and y axes are perpendicular to the z-axis … intersecting a ray with such an oriented bound requires first transforming the ray into the oriented space and then performing a standard ray/box intersection test, Para. [0749] of VOLKOV; See also the bounds of ray packets are stored in full precision but individual ray points P1-P3 are transmitted as indexed offsets to the bounds … similarly, a plurality of local coordinate systems may be generated which use 8-bit integer values as local coordinates … the location of the origin of each of these local coordinate systems may be encoded using the full precision (e.g., 32-bit floating point) values, effectively connecting the global and local coordinate systems, Para. [0393] of VOLKOV).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the ray tracing/casting-based sensor simulation method of KRISTENSEN with the ray tracing information of VOLKOV for the purpose of enabling lossy compression (Para. [0393] of VOLKOV). In addition, Para. [0187] of KRISTENSEN explicitly suggests using ray-tracing hardware to quickly determine the positions and extents of objects within a world model.
Regarding claim 19, KRISTENSEN discloses a computer-readable storage media comprising instructions that, when executed, cause at least one processor (simulation system 400A may use real-time ray-tracing, Para. [0080] of KRISTENSEN; See also various functions may be carried out by a processor executing instructions stored in memory, Para. [0033] of KRISTENSEN) to: receive first electromagnetic ray information for a set of electromagnetic rays that are simulated in an environment that includes a plurality of potential targets (the LIDAR data, RADAR data, ultrasonic sensor data, image data, and/or other sensor data 102 that is used to derive an input scene configuration to the sensor model 120 (whether for training or in operation) may be generated in a virtual or simulated environment … for example, with respect to a virtual vehicle (e.g., a car, a truck, a water vessel, a construction vehicle, an aircraft, a drone, etc.), the virtual vehicle may include virtual sensors (e.g., virtual cameras, virtual LIDAR, virtual RADAR, virtual SONAR, etc.) that capture simulated or virtual data of the virtual or simulated environment … in addition to or alternatively from real-world data being used to derive an input scene configuration to the sensor model 120, simulated or virtual sensor data may be used and thus included in the sensor data 102, Para. [0063] of KRISTENSEN; See also RADAR target clustering and tracking may be used to determine the associations between RADAR points 312 and objects 306—or bounding shapes 304 or other property or dimension corresponding thereto, Para. [0055] of KRISTENSEN); determine a potential target of the plurality of potential targets that is closest to the first electromagnetic ray information (a LIDAR point 310 closest to a centroid of the bounding shape 304 and/or the cropped bounding shape 308 may be used to determine the final distance value, Para. 
[0053] of KRISTENSEN; See also RADAR target clustering and tracking may be used to determine the associations between RADAR points 312 and objects 306—or bounding shapes 304 or other property or dimension corresponding thereto, Para. [0055] of KRISTENSEN; See also a list of identified reflections may be pared down to a designated number (e.g., by identifying the designated number of reflections based on some metric such as reflections having the largest reflectivity values, closest range, etc.), the designated reflections and corresponding reflection characteristics may be encoded into corresponding vectors, and the vectors may be concatenated to form a single dimensional input vector … generally, the number of reflections may be selected to match the dimensionality of the input(s) into the sensor model 120 … by way of non-limiting example, a list of 180 reflections, each having values for 5 reflection characteristics (e.g., bearing, elevation, range, velocity, RCS), may be encoded into inputs of the sensor model 120 … as such, in some scenarios where there are fewer detected reflections than there are inputs into the sensor model 120, some of the input values may be null or zero, KRISTENSEN at Para. [0044]); convert the first electromagnetic ray information to second electromagnetic ray information based on the potential target (radio waves reflect off of certain objects and materials, and a RADAR sensor (which may correspond to the origin of the coordinate system in FIG. 2) may detect these reflections and reflection characteristics such as bearing, azimuth, elevation, range (e.g., time of beam flight), intensity, Doppler velocity, RADAR cross section (RCS), reflectivity, SNR, and/or the like … generally, reflections and reflection characteristics may depend on the objects in a scene, speeds, materials, sensor mounting position and orientation, etc. 
… reflection data may be combined with position and orientation data (e.g., from GNSS and IMU sensors) to generate point clouds … in FIG. 2, each of the RADAR points 212 represents the location of a detected reflection in the world space … collectively, the RADAR points 212 may form a point cloud representing detected reflections in the scene, Para. [0042] of KRISTENSEN; See also conversions between world space locations and corresponding image space locations of LIDAR data may be known, or determined, using intrinsic and/or extrinsic parameters—e.g., after calibration—of the LIDAR sensor(s) and/or the camera(s) that generated the image 302 … as such, because this relationship between world space and image space is known, and because the LIDAR data and the image data may have been captured substantially simultaneously, the LIDAR data distance predictions may be associated with the various objects 306—or their corresponding bounding shapes 304 or other property or dimension—in the image 302, Para. [0048] of KRISTENSEN); determine, based on the second electromagnetic ray information and a pre-calculated acceleration data structure indicative of a geometric profile of the potential target, whether a subset of the electromagnetic rays hit a facet of the potential target (when a significant number of rays strike a tracked object, that object may be added to the report of the LIDAR data … the rays may bounce from water, reflective materials, and/or windows … RADAR may be implemented similarly to LIDAR, Para. 
[0117] of KRISTENSEN; See also reflection data may be combined with position and orientation data (e.g., from GNSS and/or IMU sensors) to generate LIDAR point clouds … properties of objects in the scene such as positions or dimensions may be encoded into a suitable representation of a scene configuration … geometric description(s) of a scene may be encoded into a suitable network input(s) … for example, two or three dimensional geometric model(s) may be arranged in a scene and rendered (e.g., from a desired point of view for the particular sensor being modeled) to form an image, which serve as an encoded scene configuration (or a portion thereof), Para. [0029] of KRISTENSEN); responsive to determining that the subset of electromagnetic rays hit the facet of the potential target, calculate an electromagnetic response of the subset of electromagnetic rays that hit the facet on the potential target (when a significant number of rays strike a tracked object, that object may be added to the report of the LIDAR data … the rays may bounce from water, reflective materials, and/or windows … RADAR may be implemented similarly to LIDAR, Para. [0117] of KRISTENSEN; See also a sensor model may learn to predict virtual sensor data from a representation of a scene configuration, Para. [0027] of KRISTENSEN); determine whether the subset of the electromagnetic rays hit any facet of the potential target (when a significant number of rays strike a tracked object, that object may be added to the report of the LIDAR data … the rays may bounce from water, reflective materials, and/or windows … RADAR may be implemented similarly to LIDAR, Para. 
[0117] of KRISTENSEN; See also reflection data may be combined with position and orientation data (e.g., from GNSS and/or IMU sensors) to generate LIDAR point clouds … properties of objects in the scene such as positions or dimensions may be encoded into a suitable representation of a scene configuration … geometric description(s) of a scene may be encoded into a suitable network input(s) … for example, two or three dimensional geometric model(s) may be arranged in a scene and rendered (e.g., from a desired point of view for the particular sensor being modeled) to form an image, which serve as an encoded scene configuration (or a portion thereof), Para. [0029] of KRISTENSEN); and responsive to determining that the subset of the electromagnetic rays hit a facet of the potential target: calculate the electromagnetic response of the subset of the electromagnetic rays that hit the facet on the potential target (when a significant number of rays strike a tracked object, that object may be added to the report of the LIDAR data … the rays may bounce from water, reflective materials, and/or windows … RADAR may be implemented similarly to LIDAR, Para. [0117] of KRISTENSEN; See also a sensor model may learn to predict virtual sensor data from a representation of a scene configuration, Para. [0027] of KRISTENSEN; See also reflection data may be combined with position and orientation data (e.g., from GNSS and/or IMU sensors) to generate LIDAR point clouds … properties of objects in the scene such as positions or dimensions may be encoded into a suitable representation of a scene configuration … geometric description(s) of a scene may be encoded into a suitable network input(s) … for example, two or three dimensional geometric model(s) may be arranged in a scene and rendered (e.g., from a desired point of view for the particular sensor being modeled) to form an image, which serve as an encoded scene configuration (or a portion thereof), Para. 
[0029] of KRISTENSEN); and output the electromagnetic response to an electromagnetic model for input to an execution of an electromagnetic sensor simulation of the environment; or responsive to determining that the subset of the electromagnetic rays do not hit any facet of the potential target, refrain from calculating or outputting the electromagnetic response of the subset of the electromagnetic rays that do not hit any facet of the potential target (the depth map may be input into a channel of the sensor model 120, Para. [0061] of KRISTENSEN; See also geometric description(s) of a scene may be encoded into a suitable network input(s) … for example, two or three dimensional geometric model(s) may be arranged in a scene and rendered (e.g., from a desired point of view for the particular sensor being modeled) to form an image, which serve as an encoded scene configuration (or a portion thereof), Para. [0029] of KRISTENSEN).
Although KRISTENSEN teaches ray casting and ray tracing (Paras. [0002] & [0003]), KRISTENSEN appears to fail to explicitly disclose the first electromagnetic ray information including a starting point and direction of each respective electromagnetic ray relative to a global coordinate system of the environment and the second electromagnetic ray information comprising the starting point and direction relative to a local coordinate system of the potential target.
VOLKOV, however, is in the field of ray tracing/casting (Para. [0003] of VOLKOV) and teaches the first electromagnetic ray information including a starting point and direction of each respective electromagnetic ray relative to a global coordinate system of the environment (ray origin, ray direction, Paras. [0945]-[0947] of VOLKOV; See also global and local coordinate system, Para. [0393] of VOLKOV) and the second electromagnetic ray information comprising the starting point and direction relative to a local coordinate system of the potential target (a coordinate space is calculated which is aligned to the hair direction … for example, the z-axis may be determined to point into the hair direction, while the x and y axes are perpendicular to the z-axis … intersecting a ray with such an oriented bound requires first transforming the ray into the oriented space and then performing a standard ray/box intersection test, Para. [0749] of VOLKOV; See also the bounds of ray packets are stored in full precision but individual ray points P1-P3 are transmitted as indexed offsets to the bounds … similarly, a plurality of local coordinate systems may be generated which use 8-bit integer values as local coordinates … the location of the origin of each of these local coordinate systems may be encoded using the full precision (e.g., 32-bit floating point) values, effectively connecting the global and local coordinate systems, Para. [0393] of VOLKOV).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the ray tracing/casting-based sensor simulation method of KRISTENSEN with the ray tracing information of VOLKOV for the purpose of enabling lossy compression (Para. [0393] of VOLKOV). In addition, Para. [0187] of KRISTENSEN explicitly suggests using ray-tracing hardware to quickly determine the positions and extents of objects within a world model.
Claim 13 has substantially similar limitations as recited in claim 6 (with the incorporation of parent claim 5); therefore, it is rejected under 35 U.S.C. 103 for the same reasons.
Claims 14 and 20 have substantially similar limitations as recited in claim 3; therefore, they are rejected under 35 U.S.C. 103 for the same reasons.
Claim 16 has substantially similar limitations as recited in claim 7; therefore, it is rejected under 35 U.S.C. 103 for the same reasons.
Claim 17 has substantially similar limitations as recited in claim 8; therefore, it is rejected under 35 U.S.C. 103 for the same reasons.
Claims 4 and 15 are rejected under 35 U.S.C. § 103 as being unpatentable over KRISTENSEN et al. (U.S. Patent Application Publication No. 2021/0286923 A1) in view of VOLKOV et al. (U.S. Patent Application Publication No. 2023/0104199 A1), and further in view of IOFFE et al. (U.S. Patent Application Publication No. 2020/0412407 A1) and TRAINER (U.S. Patent Application Publication No. 2016/0202164 A1).
Regarding claim 4, KRISTENSEN as modified teaches the method of claim 3 (as shown above) but appears to fail to explicitly disclose wherein the calculation of the electromagnetic response further comprises: generating, based on incident fields with different polarizations, equivalent surface currents induced by the incident fields based on the first electromagnetic information received; examining multiple possible scattering paths of the reflected electromagnetic ray; responsive to one or more of the multiple possible scattering paths being required scattering paths of the reflected electromagnetic ray, computing, based on the required scattering paths, scattered fields of the reflected electromagnetic ray for different polarizations; and recording the scattered fields, a direction of departure, and a direction of arrival for each of the required scattering paths of the reflected electromagnetic ray.
IOFFE, however, is in the field of simulating electromagnetic interactions including ray tracing using geometrical optics, or physical optics, or shooting and bouncing rays (Paras. [0001] & [0005] of IOFFE) and teaches generating, based on incident fields with different polarizations, equivalent surface currents induced by the incident fields based on the first electromagnetic information received (when modelling electromagnetic interactions with such a thin sheet, the interacting electromagnetic fields may be represented, for example using physical optics methods, by equivalent surface integrals of surface currents and surface charges along the sheet, whereby the surface currents and charges represent the tangential and/or normal electromagnetic field components of the interacting field, Para. [0049] of IOFFE; See also the radiation incident on the scattering object is represented by rays that are traced using geometrical optics and the interaction of the individual rays with surfaces, for example with the scattering structure or the antenna, is determined using physical optics by performing an integration covering the intersection of the individual rays with the surface, Para. [0040] of IOFFE; Regarding different polarizations, see also a bumper placed in front of the antenna may comprise several material layers, such as different painting layers, each having a thickness in the submillimeter range. Likewise, the antenna itself may comprise electrically small features such as different material layers of an electrode structure of the antenna or a radome of the antenna placed in front of the electrode structure. Although the distance of the bumper to the radar antenna is typically larger than the wavelength, it may be comparable to the size of the entire antenna so that the bumper may still be located in the near field region of the antenna, Para. [0003] of IOFFE; [different materials/wavelengths are interpreted as having different polarizations]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the sensor simulation (including simulated sensor response) of KRISTENSEN with the generation of surface currents of IOFFE for the purpose of ensuring that the predetermined radiation pattern accurately represents the electromagnetic field radiated (Para. [0046] of IOFFE).
In addition, TRAINER is in the field of analyzing sensed/detected ray paths (Para. [0594] of TRAINER) and teaches examining multiple possible scattering paths of the reflected electromagnetic ray (appropriate scattering models are used to describe the effects of wavelength on the scattering pattern, Para. [0456] of TRAINER; See also multi-dimensional analysis creates a function of multiple variables of S (or functions of S) … typically each variable is measured from a different scattering angle range, different scattering plane, different light polarization, different light wavelength or a function of these different variables, Para. [0464] of TRAINER; See also multiple scattering occurs when the scattered light from a particle is scattered again by other particles, before being received by the detector, Para. [0471] of TRAINER); responsive to one or more of the multiple possible scattering paths being required scattering paths of the reflected electromagnetic ray, computing, based on the required scattering paths, scattered fields of the reflected electromagnetic ray for different polarizations (multi-dimensional analysis creates a function of multiple variables of S (or functions of S) … typically each variable is measured from a different scattering angle range, different scattering plane, different light polarization, different light wavelength or a function of these different variables, Para. [0464] of TRAINER); and recording the scattered fields, a direction of departure, and a direction of arrival for each of the required scattering paths of the reflected electromagnetic ray (each variable is measured from a different scattering angle range, Para. [0464] of TRAINER; See also angular scattering distribution for that particle will be recorded over a large number of scattering planes by all of the detector elements in the array, Para. 
[0324] of TRAINER; See also multi-dimensional analysis creates a function of multiple variables of S (or functions of S) … typically each variable is measured from a different scattering angle range, different scattering plane, different light polarization, different light wavelength or a function of these different variables, Para. [0464] of TRAINER; See also peak detector receives the total signal, Para. [0337] of TRAINER).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the sensor simulation (including simulated sensor response) of KRISTENSEN with the scattered ray analysis of TRAINER for the purpose of providing the best scatter measurement accuracy over an extended volume of particle dispersion (Para. [0593] of TRAINER).
Claim 15 has substantially similar limitations as recited in claim 4; therefore, it is rejected under 35 U.S.C. 103 for the same reasons.
Claims 9-11 and 18 are rejected under 35 U.S.C. § 103 as being unpatentable over KRISTENSEN et al. (U.S. Patent Application Publication No. 2021/0286923 A1) in view of VOLKOV et al. (U.S. Patent Application Publication No. 2023/0104199 A1), and further in view of KUPINSKI (U.S. Patent Application Publication No. 2024/0296617).
Regarding claim 9, KRISTENSEN as modified teaches the method of claim 8 (as shown above) but appears to fail to explicitly disclose determining, based on material properties of each of the one or more potential targets, polarimetric reflection coefficients for each of the one or more potential targets.
KUPINSKI, however, is in the field of ray tracing/casting (Para. [0004] of KUPINSKI) and teaches determining, based on material properties of each of the one or more potential targets, polarimetric reflection coefficients for each of the one or more potential targets (polarimetric importance sampling is made possible by interpreting these weights as probabilities. Here, the depolarization parameter is used to determine the probability of a coherent/fully polarized or unpolarized light-matter interaction occurrence. In the disclosed embodiment, polarimetric measurements can be conducted to determine the fractional contribution of fully-polarized versus unpolarized light to characterize the depolarizing components or materials, and then use this characterization to simulate the materials' partially-polarized contribution in the ray trace, Para. [0047] of KUPINSKI).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the ray tracing/casting-based sensor simulation method of KRISTENSEN (as modified by VOLKOV) with the polarimetric reflection coefficients-based ray tracing information of KUPINSKI for the purpose of reducing the computational needs for characterizing the scenes that involve light-matter interaction, which can result in reduced cost and physical footprint of associated hardware and software (Para. [0005] of KUPINSKI).
Regarding claim 10, KRISTENSEN as modified teaches the method of claim 9, further comprising: generating a polarimetric reflection coefficient lookup table that includes the polarimetric reflection coefficients for each of the one or more potential targets (TD model database which is shown to retain dominant polarimetric properties for both diffuse and specular light-matter interactions … for each 4×4 Mueller matrix, the TD model consists of 8 parameters: the radiometric average throughput, the depolarization parameter, and 6 parameters to describe the dominant coherent process. These 8 TD parameters maintain physical constraints on the Mueller matrix when interpolated to unmeasured geometries … the TD model expression is a weighted sum of a normalized Mueller-Jones matrix and a completely depolarizing Mueller matrix … Polarimetric importance sampling is made possible by interpreting these weights as probabilities … here, the depolarization parameter is used to determine the probability of a coherent/fully polarized or unpolarized light-matter interaction occurrence. In the disclosed embodiment, polarimetric measurements can be conducted to determine the fractional contribution of fully-polarized versus unpolarized light to characterize the depolarizing components or materials, and then use this characterization to simulate the materials' partially-polarized contribution in the ray trace, Para. [0047] of KUPINSKI; [the polarimetric database is interpreted as corresponding to a polarimetric lookup table]).
Regarding claim 11, KRISTENSEN as modified teaches the method of claim 9, further comprising: determining, based on the material of each of the one or more potential targets being penetrable above a threshold, transmission coefficients for each of the one or more potential targets (the LIDAR sensors may be modeled as solid state LIDAR and/or as Optix-based LIDAR … in examples, using Optix-based LIDAR, the rays may bounce from water, reflective materials, and/or windows … texture may be assigned to roads, signs, and/or vehicles to model laser reflection at the wavelengths corresponding to the textures … RADAR may be implemented similarly to LIDAR … as described herein, RADAR and/or LIDAR may be simulated using learned sensors, ray-tracing techniques, and/or otherwise, Para. [0117] of KRISTENSEN).
Claim 18 has substantially similar limitations as recited in claims 10 and 11 (including the features of parent claim 9); therefore, it is rejected under 35 U.S.C. 103 for the same reasons.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: LILJA (U.S. Patent Application Publication No. 2022/0085866 A1) teaches, at Para. [0133], “a resonator 632 is formed by a conductive member 517, where a standing wave pattern of electric fields and magnetic fields or surface currents can occur … more precisely, a resonator 632 is a conductive member, which favors electromagnetic resonance at a first frequency. An electromagnetic resonance is a natural oscillation phenomenon within a system, in which the electromagnetic energy of the oscillating system is primarily stored on the electric fields on one phase of the oscillation cycle, and on magnetic fields of the system on another phase of the oscillation cycle … in a conductive medium, the magnetic energy is closely connected to surface currents as is generally known by the electromagnetic constitutive relations.”
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN P HOCKER whose telephone number is (571)272-0501. The examiner can normally be reached Monday-Friday 9:00 AM - 5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rehana Perveen can be reached on (571)272-3676. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JOHN P. HOCKER
Examiner
Art Unit 2189
/JOHN P HOCKER/Examiner, Art Unit 2189
/REHANA PERVEEN/Supervisory Patent Examiner, Art Unit 2189