DETAILED ACTION
This action is in response to the claims filed on November 28, 2022. A summary of this action:
Claims 1-20 have been presented for examination.
Claim 11 is objected to because of informalities.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of both a mathematical concept and mental process without significantly more.
Claim(s) 1-3, 5-9, 12, 15-18, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kanakagiri, Abhishek. Development of a virtual simulation environment for autonomous driving using digital twins. Diss. Technische Hochschule Ingolstadt, 2021 in view of CARLA, User Guide, for version 0.9.10 (dated Sept. 25, 2020 per revision log) of CARLA, URL: carla(dot)readthedocs(dot)io/en/0(dot)9(dot)10/
Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kanakagiri, Abhishek. Development of a virtual simulation environment for autonomous driving using digital twins. Diss. Technische Hochschule Ingolstadt, 2021 in view of CARLA, User Guide, for version 0.9.10 (dated Sept. 25, 2020 per revision log) of CARLA, URL: carla(dot)readthedocs(dot)io/en/0(dot)9(dot)10/ and in view of Isteník, Matej. "Large boss characters in Unity engine: Reimplementation of gameplay mechanics from Shadow of the Colossus video game." (2015).
Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kanakagiri, Abhishek. Development of a virtual simulation environment for autonomous driving using digital twins. Diss. Technische Hochschule Ingolstadt, 2021 in view of CARLA, User Guide, for version 0.9.10 (dated Sept. 25, 2020 per revision log) of CARLA, URL: carla(dot)readthedocs(dot)io/en/0(dot)9(dot)10/ and in view of Manivasagam, Sivabalan, et al. "Lidarsim: Realistic lidar simulation by leveraging the real world." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
Claim(s) 13-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kanakagiri, Abhishek. Development of a virtual simulation environment for autonomous driving using digital twins. Diss. Technische Hochschule Ingolstadt, 2021 in view of CARLA, User Guide, for version 0.9.10 (dated Sept. 25, 2020 per revision log) of CARLA, URL: carla(dot)readthedocs(dot)io/en/0(dot)9(dot)10/ and in view of Orangemittens, “How to Reduce Mesh Poly Count”, Forum Post on Sims 4 studio forum, March 23rd, 2015. URL: sims4studio(dot)com/thread/1031/reduce-mesh-poly-count
This action is non-final.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The claims are given their broadest reasonable interpretation by a person of ordinary skill in the art. See Phillips v. AWH Corp., 415 F.3d 1303, 1316, 75 USPQ2d 1321, 1329 (Fed. Cir. 2005) as discussed in MPEP § 2111; also see in MPEP § 2111: “Because applicant has the opportunity to amend the claims during prosecution, giving a claim its broadest reasonable interpretation will reduce the possibility that the claim, once issued, will be interpreted more broadly than is justified. In re Yamamoto, 740 F.2d 1569, 1571 (Fed. Cir. 1984); In re Zletz, 893 F.2d 319, 321, 13 USPQ2d 1320, 1322 (Fed. Cir. 1989) ("During patent examination the pending claims must be interpreted as broadly as their terms reasonably allow.”) … Further, the broadest reasonable interpretation of the claims must be consistent with the interpretation that those skilled in the art would reach.”
MPEP § 2111.01(I): “Under a broadest reasonable interpretation (BRI), words of the claim must be given their plain meaning, unless such meaning is inconsistent with the specification. The plain meaning of a term means the ordinary and customary meaning given to the term by those of ordinary skill in the art at the relevant time. The ordinary and customary meaning of a term may be evidenced by a variety of sources, including the words of the claims themselves, the specification, drawings, and prior art. However, the best source for determining the meaning of a claim term is the specification - the greatest clarity is obtained when the specification serves as a glossary for the claim terms. Phillips v. AWH Corp., 415 F.3d 1303, 1315, 75 USPQ2d 1321, 1327 (Fed. Cir. 2005) (en banc) ("[T]he specification ‘is always highly relevant to the claim construction analysis. Usually, it is dispositive; it is the single best guide to the meaning of a disputed term.’" (quoting Vitronics Corp. v. Conceptronic Inc., 90 F.3d 1576, 1582 (Fed. Cir. 1996)).”
¶ 18: “The present disclosure may use the terms "synthetic," "virtual," and "simulated" interchangeably to refer to any data and/or objects that are generated or calculated using software model(s).” – thus, these are defined to be interchangeable by the specification.
The Examiner strongly suggests that the claims use only one of these terms consistently to ensure express clarity in the claims themselves, as each of these terms conveys the same meaning as defined in ¶ 18.
Claim Objections
Claim 11 is objected to because of the following informalities:
Claim 11 recites, in part, “An apparatus comprising…” various “simulators” and recites no particular structure. In view of ¶ 37, which conveys that these are software components executed by a processor, the Examiner interprets the claim as intended to recite a computer with a processor and a memory storing instructions, wherein the processor executes the instructions to perform operations comprising the steps of the method. The Examiner suggests amending the claim to remove any potential nonce terms (§ 112(f)), so as to ensure express clarity in the claim itself, and instead to recite that these are merely acts performed by a computer and its processor (¶ 37).
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of both a mathematical concept and mental process without significantly more.
Step 1
Claim 17 is directed towards the statutory category of a process.
Claims 1 and 11 are directed towards the statutory category of an apparatus.
Claims 11 and 17, and the dependents thereof, are rejected under a similar rationale as representative claim 1, and the dependents thereof.
Step 2A – Prong 1
The claims recite an abstract idea of both a mental process and mathematical concept. Some of the dependent claims add a mental process to this abstract idea.
As a point of clarity, the focus of the claimed advance lies solely in the realm of allegedly faster math calculations.
¶ 18: “The present disclosure may use the terms "synthetic," "virtual," and "simulated" interchangeably to refer to any data and/or objects that are generated or calculated using software model(s) [on a computer/in a computer environment],” as per the claim interpretation above. MPEP § 2106.04(a)(2)(I): “It is important to note that a mathematical concept need not be expressed in mathematical symbols, because "[w]ords used in a claim operating on data to solve a problem can serve the same purpose as a formula." In re Grams, 888 F.2d 835, 837 and n.1, 12 USPQ2d 1824, 1826 and n.1 (Fed. Cir. 1989).” And MPEP § 2106.04(a)(2)(I)(C): “There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation. For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.”
¶ 47: “In order to increase simulation [calculation, but on a computer] performance but without impacting cost (e.g., using more computation resources)…. Stated differently, using a hybrid mesh scheme with mesh objects of different fidelities, an AV simulation can be accelerated [faster math calculations], for example, for real-time rendering. [bare assertion of an improvement to technology in a token post-solution activity, wherein the focus of the claimed advance is the faster math calculation]”. MPEP § 2106.05(I): “An inventive concept "cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself." Genetic Techs. Ltd. v. Merial LLC, 818 F.3d 1369, 1376, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016). See also Alice Corp., 573 U.S. at 217-18, 110 USPQ2d at 1981 (citing Mayo, 566 U.S. at 78, 101 USPQ2d at 1968 (after determining that a claim is directed to a judicial exception, "we then ask, ‘[w]hat else is there in the claims before us?") (emphasis added)); RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1327, 122 USPQ2d 1377 (Fed. Cir. 2017) ("Adding one abstract idea (math) to another abstract idea (encoding and decoding) does not render the claim non-abstract"). Instead, an "inventive concept" is furnished by an element or combination of elements that is recited in the claim in addition to (beyond) the judicial exception, and is sufficient to ensure that the claim as a whole amounts to significantly more than the judicial exception itself. Alice Corp., 573 U.S. at 217-18, 110 USPQ2d at 1981 (citing Mayo, 566 U.S. at 72-73, 101 USPQ2d at 1966)”
MPEP § 2106.04(I): “The Supreme Court’s concern that drives this "exclusionary principle" is pre-emption. Alice Corp., 573 U.S. at 216, 110 USPQ2d at 1980. The Court has held that a claim may not preempt abstract ideas, laws of nature, or natural phenomena, even if the judicial exception is narrow (e.g., a particular mathematical formula such as the Arrhenius equation). See, e.g., Mayo, 566 U.S. at 79-80, 86-87, 101 USPQ2d at 1968-69, 1971 (claims directed to "narrow laws that may have limited applications" held ineligible); Flook, 437 U.S. at 589-90, 198 USPQ at 197 (claims that did not "wholly preempt the mathematical formula" held ineligible). This is because such a patent would "in practical effect [] be a patent on the [abstract idea, law of nature or natural phenomenon] itself." Benson, 409 U.S. at 71- 72, 175 USPQ at 676. The concern over preemption was expressed as early as 1852. See Le Roy v. Tatham, 55 U.S. (14 How.) 156, 175 (1852) ("A principle, in the abstract, is a fundamental truth; an original cause; a motive; these cannot be patented, as no one can claim in either of them an exclusive right.")…. Flook, 437 U.S. at 591-92, 198 USPQ2d at 198 ("the novelty of the mathematical algorithm is not a determining factor at all");…Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1151, 120 USPQ2d 1473, 1483 (Fed. Cir. 2016) ("a new abstract idea is still an abstract idea") (emphasis in original)”
Should further clarification be sought on the focus of the claimed advance, see the Step 2B well-understood, routine, and conventional (WURC) consideration below. The Examiner strongly suggests this, given how much this abstract idea is already in use in a variety of particular technological implementations (i.e., pre-emption) and that it is within the common knowledge of a POSITA.
See MPEP § 2106.04: “...In other claims, multiple abstract ideas, which may fall in the same or different groupings, or multiple laws of nature may be recited. In these cases, examiners should not parse the claim. For example, in a claim that includes a series of steps that recite mental steps as well as a mathematical calculation, an examiner should identify the claim as reciting both a mental process and a mathematical concept for Step 2A Prong One to make the analysis clear on the record.”
To clarify, see the USPTO 101 training examples, available at https://www.uspto.gov/patents/laws/examination-policy/subject-matter-eligibility.
The mathematical concept recited in claim 1 is:
generating synthetic sensor data based on low-fidelity mesh data representing an object;
generating a synthetic driving scene including the object, the generating the synthetic driving scene is based at least in part on high-fidelity mesh data representing the object;
The above are math calculations in textual form.
¶ 18: “The present disclosure may use the terms "synthetic," "virtual," and "simulated" interchangeably to refer to any data and/or objects that are generated or calculated using software model(s).” - the term “generating” is merely a textual replacement for calculations. Doing calculations is a math concept, and doing them by a computer/in a computer environment (e.g. “using software model(s)”) is merely instructions to do the math calculations on a computer/in a computer environment.
See ¶¶ 40-41 to further clarify, noting the use of the term “calculate”; similar to ¶ 42; ¶ 46: “In some instances, the computational complexity (e.g., the physics calculations) for the LIDAR sensor simulation model 112 may be high”; ¶ 16: “The physics calculations for a LIDAR sensor may be significantly more complex than for a camera”. ¶ 39: “A physics-based sensor simulation model may have a mathematical or signal processing model that generates synthetic sensor data or sensor return signals closely resembling a sensor signal or data produced by a corresponding actual physical sensor.”
As to the “mesh data”, see ¶ 17, which conveys that it is merely math relationships in geometry: “A mesh is a collection of vertices, edges, and faces that describe the shape of a three dimensional (3D) object, where a vertex is a single point, an edge is a straight line segment connecting two vertices, and a face is a flat surface enclosed by edges and can be of any shape (e.g., a triangle or generally a polygon)…. That is, a synthetic object can be created in the form of a mesh (mesh object or mesh data) and can be placed in any suitable location in a synthetic driving environment. Mesh data of a higher fidelity level may have a greater number of vertices, faces, edges, and/or polygons than mesh data of a lower fidelity. In general, high-fidelity mesh data may have a higher fidelity (e.g., more accurate and/or more detailed) in representing a certain object than low-fidelity mesh data but may take more processing power (or a longer time) to process than the low-fidelity mesh data”
To clarify, the “mesh” is considered as mathematical relationships in geometry, as the mesh is a series of mathematical relationships representing the geometry of a shape by representing small portions of the geometry (i.e., with smaller shapes) and the mathematical relationships between those smaller shapes. For example, the geometry of a square may be represented using four smaller squares (an example of mesh elements), wherein the four smaller squares have their dimensions and locations related to the larger square by math relationships. E.g., suppose the square is of area 1; then each smaller square would have an area of ¼ (i.e., the math relationship of the area of the larger square divided by the number of smaller squares). The positioning of the smaller squares is likewise mathematically described by the relationship between the smaller squares and the larger square: e.g., suppose the origin of the larger square is at x=0, y=0, with an opposing corner at x=1, y=1 (x-y axes); then the four smaller squares would be mathematically located to have their inside corner at the center of the larger square, i.e., x=½, y=½.
A visual example of these geometrical mathematical relationships of a mesh is provided below:
[media_image1.png (greyscale, 751 × 1032): visual example of the geometrical mathematical relationships of a mesh]
To further clarify, the Examiner notes such mathematical relationships in geometry have been previously found to be abstract, for example see MPEP § 2106.04(a)(2) “iii. a mathematical relationship between enhanced directional radio activity and antenna conductor arrangement (i.e., the length of the conductors with respect to the operating wave length and the angle between the conductors), Mackay Radio & Tel. Co. v. Radio Corp. of America, 306 U.S. 86, 91, 40 USPQ 199, 201 (1939) (while the litigated claims 15 and 16 of U.S. Patent No. 1,974,387 expressed this mathematical relationship using a formula that described the angle between the conductors, other claims in the patent (e.g., claim 1) expressed the mathematical relationship in words)”
In other words, the limitations are merely math calculations in textual form, performed on a computer/in a computer environment, wherein the mesh math relationships are used in the later calculations as input data (e.g., with the simple example above, assign an equation to each vertex/node, then calculate each of the equations). A “low-fidelity” mesh is a simple mesh with only a few elements (e.g., the one above, but in 3D as a cube to represent an object such as a cardboard box), and a “high-fidelity mesh” is a mesh with more elements (e.g., double the density of the mesh elements for the cardboard box); by doing fewer calculations with the low-fidelity mesh, there is a faster math concept.
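To make the character of this mesh data concrete, the following minimal sketch is provided for illustration only (hypothetical Python code prepared for clarity; it is not taken from the instant disclosure or from any cited reference). It represents a unit square as a triangle mesh, subdivides it to obtain a higher-fidelity version, and shows that any downstream "generating" step that performs a calculation per face (e.g., one ray-triangle intersection test per laser per face) performs fewer calculations on the lower-fidelity mesh:

    # Purely illustrative, hypothetical code (not from the disclosure or any cited reference).
    # A "mesh" is just geometric/math data: vertices and faces (triangles) referencing them.
    low_fidelity = {
        "vertices": [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)],
        "faces": [(0, 1, 2), (0, 2, 3)],  # two triangles covering the unit square
    }

    def subdivide(mesh):
        # Split every triangle at its edge midpoints into four smaller triangles,
        # i.e. the purely mathematical relationship between a shape and the smaller
        # shapes used to represent it.
        vertices = list(mesh["vertices"])
        faces = []
        for a, b, c in mesh["faces"]:
            def midpoint(i, j):
                (x1, y1), (x2, y2) = vertices[i], vertices[j]
                vertices.append(((x1 + x2) / 2.0, (y1 + y2) / 2.0))
                return len(vertices) - 1
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        return {"vertices": vertices, "faces": faces}

    high_fidelity = subdivide(subdivide(low_fidelity))

    # Any per-face calculation scales with the face count, so the low-fidelity mesh
    # means fewer calculations.
    print(len(low_fidelity["faces"]), len(high_fidelity["faces"]))  # prints: 2 32

The only difference between the "low-fidelity" and "high-fidelity" data in this sketch is the number of geometric elements, which is precisely the mathematical relationship in geometry identified above.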
As to preemption, see again MPEP § 2106.04(I) as quoted above (Alice Corp.; Mayo; Flook; Benson; Le Roy v. Tatham; Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1151, 120 USPQ2d 1473, 1483 (Fed. Cir. 2016) ("a new abstract idea is still an abstract idea") (emphasis in original)).
executing a vehicle compute process based on the synthetic driving scene and the synthetic sensor data – more math calculations in textual form, performed in a computer environment.
¶ 13: “For instance, an AV stack or AV compute process performing the perception, prediction, planning, and control may be implemented using one or more of software code and/or firmware code.” Then ¶ 16: “Accordingly, in some examples, the LIDAR sensor simulation may be unable to meet real-time requirements, meaning that the LIDAR sensor simulation cannot generate a synthetic LIDAR point cloud fast enough to enable an AV compute process to calculate a perception, prediction, path, or control operation in real-time”; ¶ 22 (noting ¶ 18 for its definition of simulation): “The vehicle compute process may simulate operation(s) and/or behavior(s) of a synthetic vehicle driving in the synthetic driving scene.”
Claim 11 recites a similar abstract idea for similar reasons as above of:
a sensor simulator to generate synthetic light detection and ranging (LIDAR) data based on low-fidelity mesh data representing the object;
and a vehicle simulator to simulate at least one of an operation or a behavior of a vehicle based on the driving scene and the LIDAR data.
Claim 19 recites a similar abstract idea for similar reasons as discussed above of:
generating, by the computer-implemented system, using a camera sensor simulation model, an image of the synthetic driving scene based at least in part on the first mesh data;
and generating, by the computer-implemented system, using a light detection and ranging (LIDAR) sensor simulation model, a LIDAR point cloud based on the second mesh data;
and generating, by the computer-implemented system, a simulation of at least one of an operation or a behavior of a vehicle in the synthetic driving scene based on the image and the LIDAR point cloud.
To clarify, see the above citations, including ¶ 18: “The present disclosure may use the terms "synthetic," "virtual," and "simulated" interchangeably to refer to any data and/or objects that are generated or calculated using software model(s).” - i.e. math calculations in textual form expressing desired results of what is to be calculated with mere instructions to do the calculations in a computer environment. E.g. ¶ 40: “In this regard, the LIDAR sensor simulation model 112 may simulate [calculate], for example, an update rate, a beam characteristic, a resolution, a range characteristic, a scan frequency/angle, horizontal and vertical field of views (FOVs), a blind spot (a distortion), and/or LIDAR head movements of the real LIDAR sensor.”
Under the broadest reasonable interpretation, the claim recites a mathematical concept – the above limitations are steps in a mathematical concept such as mathematical relationships, mathematical formulas or equations, and mathematical calculations. If a claim, under its broadest reasonable interpretation, is directed towards a mathematical concept, then it falls within the Mathematical Concepts grouping of abstract ideas. In addition, as per MPEP § 2106.04(a)(2): “It is important to note that a mathematical concept need not be expressed in mathematical symbols, because "[w]ords used in a claim operating on data to solve a problem can serve the same purpose as a formula." In re Grams, 888 F.2d 835, 837 and n.1, 12 USPQ2d 1824, 1826 and n.1 (Fed. Cir. 1989). See, e.g., SAP America, Inc. v. InvestPic, LLC, 898 F.3d 1161, 1163, 127 USPQ2d 1597, 1599 (Fed. Cir. 2018)”
See MPEP § 2106.04(a)(2).
To clarify, see the USPTO 101 training examples, available at https://www.uspto.gov/patents/laws/examination-policy/subject-matter-eligibility.
As such, the claims recite an abstract idea of a mathematical concept.
Step 2A – Prong 2
The claimed invention does not recite any additional elements that integrate the judicial exception into a practical application. Refer to MPEP §2106.04(d).
The following limitations are merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f), including the “Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more”:
The preamble of claim 1, the “simulator[s]” given the interpretation of record above in view of ¶ 37, and the computer of claim 17 are considered mere instructions to do it on a computer/in a computer environment with generic computer components (see ¶ 37, clarifying that these components are generic).
The following limitations are adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g):
Claim 11: “a driving scenario simulator to render a driving scene, wherein the rendering comprises placing an object in the driving scene, the object being based on high-fidelity mesh data;” – mere data gathering, when the term “simulator” is taken in view of ¶ 37 as above.
Claim 17 – the “obtaining…” steps are mere data gathering
Should the limitation in claim 1 of “executing a vehicle compute process based on the synthetic driving scene and the synthetic sensor data” be found not to be abstract, the Examiner notes that it would be considered akin to the cutting of hair with scissors in In re Brown (MPEP § 2106.05(f) and (g)), i.e., an insignificant application and mere instructions to “apply it”.
Should the limitations in claim 19 of “generating…an image…generating….a LIDAR point cloud…” be found not to be an abstract idea, they would be considered mere data gathering for the math calculation in textual form at the next step.
A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. See MPEP § 2106.04(d).
MPEP § 2106.04(II)(A)(2): “…Instead, under Prong Two, a claim that recites a judicial exception is not directed to that judicial exception, if the claim as a whole integrates the recited judicial exception into a practical application of that exception. Prong Two thus distinguishes claims that are "directed to" the recited judicial exception from claims that are not "directed to" the recited judicial exception…Because a judicial exception is not eligible subject matter, Bilski, 561 U.S. at 601, 95 USPQ2d at 1005-06 (quoting Chakrabarty, 447 U.S. at 309, 206 USPQ at 197 (1980)), if there are no additional claim elements besides the judicial exception, or if the additional claim elements merely recite another judicial exception, that is insufficient to integrate the judicial exception into a practical application. See, e.g., RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1327, 122 USPQ2d 1377 (Fed. Cir. 2017) ("Adding one abstract idea (math) to another abstract idea (encoding and decoding) does not render the claim non-abstract"); Genetic Techs. Ltd. v. Merial LLC, 818 F.3d 1369, 1376, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016) (eligibility "cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself."). For a claim reciting a judicial exception to be eligible, the additional elements (if any) in the claim must "transform the nature of the claim" into a patent-eligible application of the judicial exception, Alice Corp., 573 U.S. at 217, 110 USPQ2d at 1981, either at Prong Two or in Step 2B” and MPEP § 2106(I): “Mayo, 566 U.S. at 80, 84, 101 USPQ2d at 1969, 1971 (noting that the Court in Diamond v. Diehr found “the overall process patent eligible because of the way the additional steps of the process integrated the equation into the process as a whole,”)” – and see MPEP § 2106.05(e).
To further clarify, MPEP § 2106.04(II)(A)(1): “Alice Corp., 573 U.S. at 216, 110 USPQ2d at 1980 (citing Mayo, 566 US at 71, 101 USPQ2d at 1965). Yet, the Court has explained that ‘‘[a]t some level, all inventions embody, use, reflect, rest upon, or apply laws of nature, natural phenomena, or abstract ideas,’’ and has cautioned ‘‘to tread carefully in construing this exclusionary principle lest it swallow all of patent law” See also Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335, 118 USPQ2d 1684, 1688 (Fed. Cir. 2016) ("The ‘directed to’ inquiry, therefore, cannot simply ask whether the claims involve a patent-ineligible concept, because essentially every routinely patent-eligible claim involving physical products and actions involves a law of nature and/or natural phenomenon").”
As a point of clarity, see RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1327, 122 USPQ2d 1377 (Fed. Cir. 2017) ("Adding one abstract idea (math) to another abstract idea (encoding and decoding) does not render the claim non-abstract") and Genetic Techs. Ltd. v. Merial LLC, 818 F.3d 1369, 1376, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016) (eligibility "cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself."), as discussed in MPEP § 2106.04(II)(A)(2), as well as MPEP § 2106.04(I): “Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1151, 120 USPQ2d 1473, 1483 (Fed. Cir. 2016) ("a new abstract idea is still an abstract idea") (emphasis in original).”
The claimed invention does not recite any additional elements that integrate the judicial exception into a practical application. Refer to MPEP §2106.04(d).
Step 2B
The claimed invention does not recite any additional elements/limitations that amount to significantly more.
The following limitations are merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f), including the “Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more”:
The preamble of claim 1, the “simulator[s]” given the interpretation of record above in view of ¶ 37, and the computer of claim 17 are considered mere instructions to do it on a computer/in a computer environment with generic computer components (see ¶ 37, clarifying that these components are generic).
The following limitations are adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g):
Claim 11: “a driving scenario simulator to render a driving scene, wherein the rendering comprises placing an object in the driving scene, the object being based on high-fidelity mesh data;” – mere data gathering, when the term “simulator” is taken in view of ¶ 37 as above.
Claim 17 – the “obtaining…” steps are mere data gathering
Should the limitation in claim 1 of “executing a vehicle compute process based on the synthetic driving scene and the synthetic sensor data” be found not to be abstract, the Examiner notes that it would be considered akin to the cutting of hair with scissors in In re Brown (MPEP § 2106.05(f) and (g)), i.e., an insignificant application and mere instructions to “apply it”. WURC evidence below.
Should the limitations in claim 19 of “generating…an image…generating….a LIDAR point cloud…” be found not to be an abstract idea, they would be considered mere data gathering for the math calculation in textual form at the next step. WURC evidence below.
In addition, the above insignificant extra-solution activities are also considered as well-understood, routine, and conventional activities, as discussed in MPEP § 2106.05(d):
Epic Games, “The Automotive Field Guide”, “BUILDING AN OPEN AUTOMOTIVE PLATFORM AND DATA MODEL WITH UNREAL ENGINE”, 2020, URL: cdn2(dot)unrealengine(dot)com/automotive-field-guide-v1-1-683681366(dot)pdf.
First, the Examiner notes the instant assignee is GM Cruise, so see page 64: “General Motors’ autonomous division Cruise used Unreal Engine to create its own end-to-end simulation tool dubbed The Matrix. The company regularly performs 30,000 tests per day, generating 300 terabytes of data from each drive. The Matrix enables Cruise to test for one-off scenarios that are hard to replicate in the real world, like an object falling off a curb and into the vehicle’s path.” – similarly, on the same page: “Uber Technology Group has been using game-engine technology as a means of visualizing and validating test scenarios for its fleet of self-driving vehicles”, and page 65: “Powered by Unreal Engine and maintained by Toyota and Intel, CARLA is a free, open-source simulator that’s been designed to support development, training, and validation for autonomous driving systems. The simulator enables you to visualize all your test scenarios and their results in real time. The open-source nature democratizes autonomous testing, and ensures that even the smallest startup has access to world-class testing tools and environments.”
Then, see chapter 5, page 91, col. 1, last paragraph; similarly see pages 94-95, including the Toyota example; also page 97, col. 1, ¶¶ 1-3; then see CARLA on pages 98-99; and pages 100-101 for “CarSim”, etc.
To further clarify, page 23: “Autonomous driving”: “Autonomous vehicles rely on physics-based sensors to detect the world around them. Their physical cameras, radar, LiDAR, and AI require thousands of hours and millions of miles of training in their collective effort to replace human drivers… That testing can be done in two ways, but regardless of whether you’re doing physical or virtual testing—or both—you need a tool for processing the volume of data generated. When paired with the proper plugins, game engines can not only visualize the reams of physical sensor data, but can also be used to build complex scenes and test scenarios for visualization. If you opt for virtual testing exclusively, game engines provide greater levels of adaptability, given the sheer number of testing scenarios you can run and visualize overnight…” (see remaining portions of page 23); then see page 24: “Once a sensor is connected to a game engine, what you can do with the output data is limited only by your ability”
Yilmaz, Erdal. 2020. Radar Sensor Plugin for Game Engine Based Autonomous Vehicle Simulators. Master's thesis, Harvard Extension School. Abstract: “Simulations play an essential role in developing autonomous vehicles and verifying their safe operation. They enable research in sensor fusion with synthetic data and allow low-cost experimentation with different design decisions, like the number, location, and specifications of various sensors on vehicles. Apart from industrial simulation tools, researchers have been using game engine-based simulators, mainly to generate training data for artificial intelligence systems and to test their decision making in virtual worlds. These simulators currently support camera and lidar sensors but lack a physics-based radar implementation. Automotive radars that serve for advanced driver-assistance systems today are evolving into imaging radar systems for autonomous vehicles.” – See fig. 1.1: “Game engine based autonomous vehicle simulators. CARLA and DeepDrive are based on Unreal Engine. LGSVL Simulator and Baidu Apollo Simulator are based on Unity game engine. AirSim simulates quadcopters and cars, it can work with both Unreal and Unity. Udacity Simulator, which is based on Unity, was developed as a teaching tool.” – then, page 3: “In recent years, academic and industrial researchers released multiple simulators for autonomous vehicle research based on popular game engines. [7, 8, 9, 10]. These simulators spawn numerous actors within a predefined environment and allow the user or an algorithm to control their motion (Fig4.5). It's also possible to record multimodal sensor data for training deep neural networks and experiment with sensor fusion architectures for more accurate and robust object classification, visual-inertial odometry [11], or reinforcement learning [12]… A game engine mainly organizes user interface, physics engine, and rendering [13]. 3D games provide visual feedback to the users by simulating camera models and allow them to control camera positions and orientations… There is another rendering technique called raytracing, which traces a high number of light rays between sources and cameras within a scene, producing photorealistic results [14]… Today, with improved GPU architectures, it's possible to get real-time raytracing (RTRT) in game engines (Fig.1.3)…. Inherently, lidar simulations can use low-density raytracing because of the spatial coherence of laser beams.” To page 4: “Current simulators have implemented various camera imperfections like lens distortion. They also provide lidar implementations utilizing a raycasting mechanism within the game engine framework, and currently, they lack a physics-based radar sensor.” – to further clarify, § 2.3.3, incl.: “The two most popular game engines are Unreal Engine by Epic Games [55] and Unity by Unity Technologies [56]…. Games provide excellent simulated environments for software-in-the-loop and hardware-in-the-loop testing. Also, for perception research, they enable capturing camera and other sensor data to train neural networks… Researchers at Microsoft released AirSim as a drone and car simulator using Unreal Engine [8]. Intel Labs, TRI, and CVC created CARLA simulator, also based on Unreal [7]. LG Silicon Valley Lab released LGSVL simulator using Unity [9]. There are reviews available, including a few other simulators [57, 58, 59]; we will only highlight these three popular projects.” And on page 23, ¶ 1: “…CARLA comes with a full suite of autonomous driving related sensors: camera, depth camera, lidar, radar, IMU, and GPS…”
Rivero, Jose Roberto Vargas, et al. "The effect of spray water on an automotive LIDAR sensor: A real-time simulation study." IEEE Transactions on Intelligent Vehicles 7.1 (2021): 57-72. § I: “…In order to reduce the reality gap, there is a tendency towards the use of a photorealistic- and physics-based simulation in academic and commercial products [6]–[8]. Physically based sensors and actuators can potentially reduce the validation effort by using models known to be correct… Recently, however, ray tracing has reached a level in which is possible to trace millions of rays in few milliseconds using GPUs. With this technique it is possible to simulate: LIDAR, Radar [9], ultrasonic sensors [10] and cameras [8], [11]. Besides ray tracing, which is done by the render engine, the movement and interaction of the different objects in the environment, can be simulated using a physics engine [12]–[15] like Pybullet [16] or PhysX [17] or can be animated. The animation can be done manually or data driven [18]…”
Davar, Sherry Alldén, et al. "Virtual generation of lidar data for autonomous vehicles." (2017). Bachelor’s Thesis. UNIVERSITY OF GOTHENBURG. CHALMERS UNIVERSITY OF TECHNOLOGY. § 1.2: “There are many ways to construct a software to simulate a lidar sensor, this thesis approach to the problem is to determine whether the simulator could be created by using a game engine. This due to the fact that game engines comes with features that could speed up the development, such as a 3D environment with physics. A number of game engines are available on the market, providing similar functionality. The game engine of choice within this project is Unity, as it meets all the requirements. These requirements includes a free license, an integrated 3D physics engine with ray-casting, and a fully featured development environment.” - and see § 3.2.1: “As described in section 2.2.1, collision detection determines when multiple objects intersect. The physics engine inside Unity that is used for this project, handles collision detection with components called colliders. Colliders define an object’s shape for the purpose of managing collisions [21]. Each object in the simulator makes use of a collider in order for the ray-cast to be able to hit objects in the environment. Colliders can either use mathematical representations of geometric shapes, or 3D meshes to define the shape of an object. The former are called primitive colliders, and often come in the shape of spheres, capsules and boxes as shown in figure 3.2. The latter is called a mesh collider and it works like a collection of a number of smaller colliders representing the triangles in the mesh. The three figures below (figure 3.3, 3.4 and 3.5) gives a visual representation of a mesh collider and the corresponding object it is attached to.” – see fig. 3.5. Then see § 4.1, then §§ 4.3.2-4.3.3: “…On the first try, the environment was populated with 500 objects with mesh colliders, where each mesh contains 128 triangles each (see figure 4.9). On a second try, the environment was populated with 500 objects of the same type, each featuring a single compound collider made up by three primitive colliders of the box shape (see figure 4.8). The resulting difference in performance was noticeable” and § 5.2.1: “When shapes were approximated using compound colliders instead of mesh colliders, it could be seen from the performance tests that mesh colliders increased the total use of CPU time (39 %) allocated by the physics parts, then what the compound colliders did (27.7 %). This difference could differ greatly and depends on the amount of triangles there are in an object’s mesh. The more detailed an object is, the more triangles there are. In our case there was only 194 triangles in the object’s mesh and only 500 objects of them. This can be considered to be small, for reasons that games can contain thousands of objects with greater details and more triangles in its mesh…Taking this into consideration, the amount of objects and the amount of triangles in the object’s mesh, complicates the validation of the performance. In general, it is concluded that if it was possible to approximate an object with hundreds or thousands of triangles in its mesh, with just a few primitive colliders, and if the approximation of its shape is close enough, then the performance increase makes it worth it.” – i.e., how the abstract idea itself could be used in a game engine is conventional and commonplace, as it is simply using mesh colliders to simulate LiDAR (more evidence on this below), wherein the only alleged novelty lies solely in the “mathematical representation” used in the mesh collider, i.e., using fewer mesh elements (e.g., fewer triangles) in the triangular mesh, which is purely a concept in geometry.
However, the claim and the specification do not even describe the use of a mesh collider, or any technological details on how this would be implemented in an unconventional fashion in the software technology of game engines, but rather make it clear it is directed solely to the abstract idea itself, with no particular technological implementation of how it is used in an unconventional manner (more evidence below on this point).
del Egido Sierra, Javier, et al. "Autonomous vehicle control in CARLA challenge." Transportation research procedia 58 (2021): 69-74. Abstract: “robotic arms) but not a realistic appearance and not allowing real-time systems, not being able to recreate complex traffic scenes. CARLA (Dosovitskiy, A., 2017) open-source AV simulator is designed to be able to train and validate control and perception algorithms in complex traffic scenarios with hyper-realistic environments. CARLA simulator allows to easily modify on-board sensors such as cameras or LiDAR, weather conditions and also the traffic scene to perform specific traffic cases. In Summer 2019, CARLA launched its driving challenge to allow everyone to test their own control techniques under the same traffic scenarios, scoring its performance regarding traffic rules. In this paper, the Robesafe researching group approach will be explained, detailing vehicle motion control and object detection adapted from Smart Elderly Car (Gómez-Huélamo, C., 2019) that lead the group to reach the 4th place in Track 3 challenge, where HD Map, Waypoints and environmental sensors data (LiDAR, RGB cameras and GPS) were provided.” - see § 1 to further clarify: “This simulator provides a wide range of on-board sensors, including cameras (RGB, semantic segmentated and depth) and LiDAR, performing the most common AV perception sensors. These sensors are completely adjustable to project needing, being able to modify their location regarding to the vehicle and also their main features, such as pixels width and height, FOV and distortion for cameras, and number of channels, points-per-second and rotation frequency for LiDAR sensors. Furthermore, these sensors information and other data relative to the dynamic objects in scene are published using CARLA ROS-bridge, a ROS package that allows communications between the simulator and ROS, enabling interoperability with extern systems such as control and perception modules. Additionally, traffic scenes can be recreated by using ScenarioRunner, a CARLA developed platform based on OpenScenario (Jullien, J., 2009) to define environments of a pre-fixed scenario to allow repeatability, defining the town, static and dynamic objects, weather conditions and also driving behaviours to cope with.” See §§ 2-3 for more details on the CARLA AD Challenge, including: “When CARLA AD simulator was created, a huge leap in quality was made, freely releasing hyper-realistic simulated environments for autonomous vehicles training and validation” and see fig. 3-4; then § 4: “Moreover than 200 participants organized in 69 teams submitted to CARLA AD Challenge in some of its four available tracks, but only 10 of them could success.” – i.e. it was widely known and in common use.
Gómez-Huélamo, Carlos, et al. "Train here, drive there: Simulating real-world use cases with fully-autonomous driving architecture in carla simulator." Workshop of Physical Agents. Cham: Springer International Publishing, 2020. Abstract and § 1, and fig. 2(a); then § 4: “Some of the most used simulators in the field of AV are Microsoft Airsim [34], recently updated to include AV although it was initially designed for drones, NVIDIA DRIVE PX [5], aimed at providing AV and driver assistance functionality powered by deep learning, ROS development studio [24], fully based on the Cloud concept where a cluster of computers allows the parallel simulation of as many vehicles as required, V-REP [30], with an easy integration with ROS and a countless number of vehicles and dynamic parameters and CARLA [9], which is the newest open-source simulator for AV based on Unreal engine.” See § 4.3 for a discussion on how “straightforward” it is to use virtual sensors in CARLA.
Patel, Parth Hasmukhbhai. Development of 3D simulation environment for testing and calibration of autonomous vehicles. Diss. Technische Hochschule Ingolstadt, 2021. Release date: 03/28/2023. Abstract , § 2.2, then see §3 for its use of “CARLA”, similar in § 4 and fig. 16, then see § 4.1: “Block 8 represent the vehicle static mesh making, which is to create a low poly (low density) mesh file to detect the ego vehicle on the lidar points, A low poly mesh file can be created in the modelling software like Blender or Maya or can be generated directly from the vehicle mesh file already exported from the block 2 in the Unreal editor. Furthermore, the vehicle static mesh file needs to be assigned in the vehicle custom collision section available in block 4.”
To further clarify, as shown above and below, a popular commonplace simulation program for this field is CARLA, so see:
CARLA, User Guide, for version 0.9.10 (dated Sept. 25, 2020 per revision log) of CARLA, URL: carla(dot)readthedocs(dot)io/en/0(dot)9(dot)10/:
“This tutorial explains how to create more accurate collision boundaries for vehicles (relative to the original shape of the object). These can be used as physics collider, compatible with collision detection, or as a secondary collider used by raycast-based sensors such as the LIDAR to retrieve more accurate data… Raycast colliders — This approach requires some basic 3D modelling skills. A secondary collider is added to the vehicle so that ray cast-based sensors such as the LIDAR retrieve more precise data.” – step 1: “First of all, the original mesh of the vehicle is necessary to be used as reference. For the sake of learning, this tutorial exports the mesh of a CARLA vehicle”, step 2: “Generate a low density mesh”, step 2.1: “Open a 3D modelling software and, using the original mesh as reference, model a low density mesh that stays reliable to the original” – then, see step 4, in particular: “Select the CustomCollision element and add the SM_sc_<model_of_vehicle>.fbx in the Static mesh property”, with the below screen capture demonstrating that this is in the Unreal Engine GUI (right-hand side, assignment of static mesh).
[media_image2.png (greyscale, 200 × 400): screen capture of the Unreal Engine GUI showing assignment of the static mesh to the CustomCollision element]
To clarify, see the Sensors reference portion of the User Guide, subsection: “LIDAR sensor”: “This sensor simulates a rotating LIDAR implemented using ray-casting. The points are computed by adding a laser for each channel distributed in the vertical FOV. The rotation is simulated computing the horizontal angle that the Lidar rotated in a frame. The point cloud is calculated by doing a ray-cast for each laser in every step.” – see instant disclosure, ¶ 40. Also, note the other sensors that CARLA was able to simulate in 2021, including cameras.
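For illustration only, a minimal sketch of how this conventional capability is exercised through the CARLA Python API (version 0.9.10) is provided below. The blueprint identifier "sensor.lidar.ray_cast" and the attribute names are those documented in the cited User Guide; the host/port, the attribute values, the mounting location, and the assumption that an ego vehicle actor already exists in the scene are illustrative choices and are not taken from the instant disclosure:

    import carla

    # Connect to a running CARLA 0.9.10 server (host/port chosen for illustration).
    client = carla.Client("localhost", 2000)
    client.set_timeout(10.0)
    world = client.get_world()

    # The ray-cast LIDAR blueprint described in the cited Sensors reference.
    lidar_bp = world.get_blueprint_library().find("sensor.lidar.ray_cast")
    lidar_bp.set_attribute("channels", "32")               # lasers across the vertical FOV
    lidar_bp.set_attribute("rotation_frequency", "10")     # simulated head rotation (Hz)
    lidar_bp.set_attribute("points_per_second", "100000")  # ray casts per second
    lidar_bp.set_attribute("range", "50")                  # meters

    # Attach the sensor to an existing ego vehicle (assumed to be present in the scene).
    ego_vehicle = world.get_actors().filter("vehicle.*")[0]
    lidar = world.spawn_actor(
        lidar_bp, carla.Transform(carla.Location(x=0.0, z=2.4)), attach_to=ego_vehicle)

    # Each measurement is a point cloud computed by ray casting against the scene's
    # collision meshes, including any low-density secondary collider set up as in the
    # tutorial above.
    lidar.listen(lambda measurement: print(measurement.frame))

That is, generating a synthetic LIDAR point cloud from the (low-density) collision mesh of an object is an ordinary, documented use of this conventional tool.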
Thus, when a POSITA sought out how to implement the abstract idea to actually perform an AV simulation, one would simply turn to what was purely conventional in the field of use, as the instant specification omits the necessary details that were well known in the art at the time.
Thus, there is no improvement to technology, but rather simply a claim directed solely to a long-standing abstract idea that is in common use in a variety of ways, including manual ways, in game engines and in AV simulations using game engines. To be clear, a POSITA would not recognize any invention here: by simply turning to the user documentation for a popular open-source AV simulator, one would find this abstract idea, and a particular technological implementation not disclosed by the instant specification (using it as a collision mesh in the game engine), already done.
Wagener, Nicolas, Jobst Beckmann, and Lutz Eckstein. "Efficient Creation of 3D-Virtual Environments for Driving Simulators." 2022 International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME). IEEE, 2022. Abstract, then see § I: “Driving simulations are utilized within all stages of the development process of automated driving. They exist in various forms, beginning from simple desktop racks over virtual-reality solutions to highly dynamic driving simulators [23].”, e.g. § II.A: “CARLA [1] is an open-source driving simulator based on the popular Unreal Engine 4 games engine. The main use case for the simulation is research into autonomous driving. The underlying games engine enables a high-fidelity virtual environment. This environment is based on a large asset library containing many objects like urban layouts, buildings, vehicles, vegetation and more. Creation of the environment is accomplished by manually placing the desired assets into a scene. While it is possible to use the simulation with dynamically created road networks based on OpenDRIVE [10] or OpenStreetMap [18], the need to manually place assets still limits Carla to only a few supported city maps with many environmental features and increases the necessary effort to create high-fidelity custom environments.”
Vukić, Matija, et al. "Unity based urban environment simulation for autonomous vehicle stereo vision evaluation." 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). IEEE, 2019. Abstract: “In this paper we simulate motion and sensors from a single vehicle equipped with a stereo camera setup. The program environment Unity is used for designing the simulation, and behavioral scripts are executed with C# programming language” then see § II.B: “Cameras in Unity are objects that transform a three dimensional scene to a two-dimensional one which can be reproduced to viewer’s screen. Position of camera defines the viewpoint and other components define the size and shape of the region that will be reproduced to viewer. A camera in the real world simulates perspective projection and this effect is for creating a realistic image. A camera that does not change the size of objects with distance is known as orthographic. Unity supports both views of the scene and they are known as camera projections. Perpendicular plane is set to the cameras forward direction to define the limit to how far camera can see. It is called the clipping plane, because objects at greater distance from the camera are clipped. There is also a corresponding near clipping plane that defines distance from camera at which objects will not be seen.”
Guerra, Winter, et al. "Flightgoggles: Photorealistic sensor simulation for perception-driven robotics using photogrammetry and virtual reality." 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019. Abstract, then see fig. 2, noting the use of “Unity Rendering Engine” [the game engine discussed above], then see § III.B: “1) Mesh level of detail: For each object mesh in the environment, three meshes with different levels of detail (LOD), i.e., polygon count and texture resolution, were generated: low, medium, and high. For meshes with lower levels of detail, textures were down sampled using subsampling and subsequent smoothing. During simulation, the real-time render pipeline improves render performance by selecting the appropriate level of detail object mesh and texture based on the size of the object mesh in camera image space. GPU VRAM usage can be decreased further by limiting the maximum level of detail across all meshes, through the userselectable quality profiles.” – to clarify, § III.A, # 2: “To achieve photorealistic RGB camera rendering, FlightGoggles uses the Unity Game Engine High Definition Render Pipeline (HDRP) [20]. Using HDRP, cameras rendered in FlightGoggles have characteristics similar to those of real-world cameras including motion blur, lens dirt, bloom, real-time reflections, and precomputed ray-traced indirect lighting”
Joisher, Karan, Suhaib Khan, and Omkar Ranadive. "Simulation Environment for Development and Testing of Autonomous Learning Agents." 2nd International Conference on Advances in Science & Technology (ICAST). 2019. Abstract: “Training an autonomous agent in the real world is a cumbersome process. The hardware modules required are expensive and they need routine maintenance. The data collection process is time-consuming and it is difficult to collect data in different conditions and scenarios. Moreover, testing these agents in the real world requires many permissions and could be potentially hazardous. This paper introduces a virtual environment for training and testing of autonomous driving agents. The environment has features like customizable car parameters and sensors, different terrains, customizable data extraction parameters, and simulated pedestrian and vehicular traffic. The environment can connect to any learning agent via a communication interface” – then, § II.A: “Currently, gaming environments are being widely used to train and test autonomous agents. Gaming environments have a high degree of realism and thus, they are preferred by many researchers. We have considered the case of one of the most widely used environment – GTA 5. GTA 5 has a very high degree of graphic realism, and it has different terrains and different types of vehicles.” – then subsection D: “Currently, a lot of research is being done on this topic and big companies like Microsoft are working on developing high-fidelity environments. Microsoft recently announced that they are working on AirSim: High-Fidelity Visual and Physical Simulation for Autonomous Vehicles [2], an environment designed for training and testing of autonomous agents. CARLA is another example of such an environment [3]. Other companies like Baidu and Nvidia are also working on building such platforms. Such research proves that the training and testing of autonomous agents is a real problem and systems similar to ours are being currently built by many companies to solve this problem”
Won, Minseok, and Shiho Kim. "Simulation Driven Development Process Utilizing Carla Simulator for Autonomous Vehicles." SIMULTECH. 2022. Abstract, then see § 3.1 including its discussion of CARLA, then see § 4.1 along with fig.1, followed by § 4.3 and fig. 3-4.
Jayaraman, Arvind, Ashley Micks, and Ethan Gross. Creating 3d virtual driving environments for simulation-aided development of autonomous driving and active safety. No. 2017-01-0107. SAE Technical Paper, 2017. Abstract, then see section “Software tools used”: “This paper focuses on the use of Unreal Engine 4 to create 3D virtual driving environments, and the MathWorks toolchain as the environment in which data is processed. Unreal Engine is a free, open source video game engine” – then see the section “Virtual Camera Setup”: “The Unreal Engine-based virtual simulator platform provides photo-realistic images which can be used to facilitate prototyping computer vision algorithms in MATLAB; these extract useful information such as vehicles, pedestrians, lanes etc. from images. To facilitate the workflow, we set up a virtual camera sensor in Unreal Engine using a Scene Capture 2D camera actor. The Scene Capture 2D actor is available with Unreal Engine and can be placed anywhere in the virtual driving environment. The images rendered by this actor are transmitted via the shared memory interface to MATLAB. The horizontal field of view angle, as well as image size and resolution, are adjustable in the Scene Capture 2D component. In addition, it is also feasible to add arbitrary post processing effects to the camera in order to model lens distortion effects often present in actual camera sensors. Image disturbances can also be introduced after transmission of the image to MATLAB. To facilitate operating on these images in MATLAB, the rendered images are transposed and the image format is changed; the memory layout of the image is column-major, with separate image planes for each RGB color channel.” – also, see their discussion on pages 3-4 for simulating LiDAR, then on page 4, col. 1 last paragraph: “Another technique explored was modeling the LiDAR sensor using ray tracing, which is a method that is already provided as part of the Unreal Engine programming environment. We found that performing ray tracing for a large number of LiDAR beams using this method is computationally very expensive.” And in section “Future Work”: “With the release of the initial sensor models in the beta version of MathWorks interface to Unreal Engine, we developed proof of concept capabilities to model camera and LiDAR sensors.” – to clarify, see related Van Fleet et al., US 2019/0302259
Meng, Wei, et al. "ROS+ unity: An efficient high-fidelity 3D multi-UAV navigation and control simulator in GPS-denied environments." IECON 2015-41st Annual Conference of the IEEE Industrial Electronics Society. IEEE, 2015. Abstract, § III.A, and in particular note fig. 2 shows that “Sensor simulation (Lidar, Camera, etc.)” is provided by the “Unity 3D server” – then see § IV.C: “…The scanning frequency is around 40 Hz. In Unity3D, one straightforward technique is to use Physics. Raycast scripting API to develop our customized LIDAR sensor model. In our Raycast script, scanning range and angle should be defined in details… To let multiple UAVs run LIDAR sensors simultaneously, we turn to GPU for Raycast calculation…”; and for the “Camera”: “The camera sensor development is easy in the most of the game engine. In our system, one camera is mounted static on the front of the UAV platform. The main purpose of the camera is for target detection.”
As another note on conventionality, the Examiner notes that the use of multiple mesh fidelities is also well known in game engines.
Bonet, “Level of Detail (LOD): Quick Tutorial”, Blog Post, June 18th, 2021. URL: gamedeveloper(dot)com/programming/level-of-detail-lod-quick-tutorial – “What is LOD or Level of Detail in Unity”: “The Level of Detail you’d use in Unity is rather simple. It’s just about changing the mesh you render at different distances from your camera…. The Unity LOD system has a series of “levels”: LOD0 is the most detailed version of your mesh. You want to use it when your player sits in front of it to appreciate all its features. LOD1 is a less detailed version that commonly has fewer vertices and polygons. You use it when your user gets more far away. LOD2 is a low-detail version of your mesh, used when the object is far away.
Culled: once your object is too far away, you can decide to skip rendering it at all.” – to clarify, see the section on “The Basics”: “Create different resolutions of your meshes (lower/higher depending on your current mesh).” – e.g. in “Manual LOD generation”: “Here, it is all about editing your meshes in your favorite 3d software (Blender you said?). There are universal modifiers that help you here, such as decimate/subdivide, but most of the time you have to edit the result to make it sexy.” And see the resulting figure captioned “Decimate Example by All3DP” – also see the section on “Setting Up LOD Levels”, in particular see the figure showing the “Example LOD Hierarchy” in the Unity game engine, noting # 3: “In your LOD Group, drag the new child game object into the LOD level you want it to be in. If you don’t have one, you can create a new LOD level by right-clicking on the horizontal bar.”
For relevance, ¶ 49 of the instant specification describes using decimation in this exact same manner.
Now, see Epic Games, “The Automotive Field Guide”, “BUILDING AN OPEN AUTOMOTIVE PLATFORM AND DATA MODEL WITH UNREAL ENGINE”, 2020, URL: cdn2(dot)unrealengine(dot)com/automotive-field-guide-v1-1-683681366(dot)pdf. Page 42: “Levels of detail (LODs): LODs are an effective way to optimize your meshes and scenes for performance and frame rate goals. The LOD management system in Unreal Engine chooses the most appropriate mesh to show at runtime. LOD creation can be automated with Blueprints or Python scripts; LODs are reusable from one mesh to another.” – i.e. this is merely a conventional software feature found in most game engines, including the one in use in GM Cruise’s “The Matrix” (page 64 of Epic Games).
To clarify, see Claypool, Mark, “The Game Development Process”, “Visual Design and Production”, Lecture Notes from Worcester Polytechnic Institute. 2006. URL: web(dot)cs(dot)wpi(dot)edu/~imgd1001/e06/ - see pages 34-35 discussing “To keep frame rates consistent, use level-of-detail (LOD) meshes… Multiple versions of object, progressively lower Levels… When far away, use low level… When close, use higher level” – i.e. this is a longstanding practice in game engines, commonly found in video game design classes.
Pajarola, Renato. "Advanced 3D Computer Graphics." (2000). University of California Irvine. URL web(dot)mat(dot)upc(dot)edu/toni(dot)susin/files/IntroductionComputerGraphicsRenato(dot)pdf – see § 1.2 on “Mesh simplification”, particularly discussion of “Level-of-detail”, then see § 1.3: “A multiresolution model or triangulation refers to a representation of the triangle mesh that maintains approximations of the input mesh at different resolutions, or level-of-detail (LOD). A triangulation for a particular LOD can efficiently be extracted from such a multiresolution model without recomputing all approximation errors and simplification operations. In a very basic manner, the sequential simplification of a triangle mesh already represents a multiresolution model. More flexible approximations can be achieved if the simplification and refinement operations are not restricted to the complete global ordering, but if they can be applied in a partial-order to the current mesh. Typically these multiresolution meshes organize the operations hierarchically, including restrictions of partial-orders across the hierarchy”
Luebke, David. “Level of Detail: A Brief Overview”. Lecture Notes from University of Virginia. CompSci 344, Spring 2015. URL: courses(dot)cs(dot)duke(dot)edu/spring15/cps124/classwork/10_terrain/LOD(dot)pdf – see slide 3: “Known as Level of Detail or LOD A.k.a. polygonal simplification, geometric simplification, mesh reduction, decimation, multiresolution modeling, …” – then see slides 4-9, noting 9 in particular provides a brief history dating it back to 1976 for “flight simulators”, then see slide 10: “Traditional LOD in a nutshell: – Create LODs for each object separately in a preprocess – At run-time, pick each object’s LOD according to the object’s distance (or similar criterion) ● Since LODs are created offline at fixed resolutions, we call this discrete LOD”
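For illustration only (not relied upon as a reference), the Examiner notes that the run-time selection step described in these lecture notes, i.e. picking each object’s pre-built LOD according to its distance from the camera, amounts to only a few lines of code; the following Python sketch uses hypothetical mesh names and distance thresholds:

```python
# Minimal sketch of run-time discrete LOD selection by camera distance.
# Mesh names and distance thresholds are hypothetical illustration values.
LOD_LEVELS = [
    (25.0, "vehicle_lod0_high"),    # close-up: most detailed mesh
    (60.0, "vehicle_lod1_medium"),  # mid-range: fewer vertices/polygons
    (120.0, "vehicle_lod2_low"),    # far away: low-detail mesh
]

def select_lod(distance_to_camera):
    """Return the mesh to render at this distance, or None to cull."""
    for max_distance, mesh_name in LOD_LEVELS:
        if distance_to_camera <= max_distance:
            return mesh_name
    return None  # beyond the last threshold: cull (skip rendering)

if __name__ == "__main__":
    print(select_lod(10.0))    # vehicle_lod0_high
    print(select_lod(80.0))    # vehicle_lod2_low
    print(select_lod(500.0))   # None (culled)
```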
DeCoro, Christopher, and Natalya Tatarchuk. "Real-time mesh simplification using the GPU." Proceedings of the 2007 symposium on Interactive 3D graphics and games. 2007. § 1 ¶ 1, then see § 2.1 including ¶ 1
In fact, LOD generation is a “common practice” that is typically performed manually.
Gao, Xifeng, Kui Wu, and Zherong Pan. "Low-poly mesh generation for building models." ACM SIGGRAPH 2022 Conference Proceedings. 2022. Abstract: “As a common practice, game modelers manually craft low-poly meshes for given 3D building models in order to achieve the ideal balance between the small element count and the visual similarity. This can take hours and involve tedious trial and error” and § 1: “Typically, a highly detailed building model can have complicated topology and geometry properties, i.e., disconnected components, open boundaries, non-manifold edges, and self-intersections (see Fig. 1 and Table 1). On the other hand, it can be expensive to render detailed building models for real-time applications, and the level-of-details (LOD) technique has been widely used to maximize the run-time performance. Instead of sticking to a highly detailed (high-poly) 3D model, the LOD renderer uses a low-element-count (low-poly) mesh at the distant view. As a result, the low-poly mesh must have a reasonably small element count while preserving the appearance of the high-poly model as much as possible.” – then, see § 2.
Guerra, Winter, et al. "Flightgoggles: Photorealistic sensor simulation for perception-driven robotics using photogrammetry and virtual reality." 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019. Abstract, then see fig. 2, noting the use of “Unity Rendering Engine” [the game engine discussed above], then see § III.B: “1) Mesh level of detail: For each object mesh in the environment, three meshes with different levels of detail (LOD), i.e., polygon count and texture resolution, were generated: low, medium, and high. For meshes with lower levels of detail, textures were down sampled using subsampling and subsequent smoothing. During simulation, the real-time render pipeline improves render performance by selecting the appropriate level of detail object mesh and texture based on the size of the object mesh in camera image space. GPU VRAM usage can be decreased further by limiting the maximum level of detail across all meshes, through the userselectable quality profiles.” – to clarify, § III.A, # 2: “To achieve photorealistic RGB camera rendering, FlightGoggles uses the Unity Game Engine High Definition Render Pipeline (HDRP) [20]. Using HDRP, cameras rendered in FlightGoggles have characteristics similar to those of real-world cameras including motion blur, lens dirt, bloom, real-time reflections, and precomputed ray-traced indirect lighting”
The claimed invention is directed towards an abstract idea of both a mathematical concept and a mental process without significantly more.
Regarding the dependent claims
Claims 2-4 further limit the abstract idea.
Claim 5 adds another mathematical calculation to the abstract idea for similar reasons as above; should it be found that it does not, it is merely a conventional (see the above WURC evidence) insignificant extra-solution activity and an insignificant computer implementation.
Claim 6 is merely another mathematical calculation for similar reasons as above; should it be found that it is not, it is merely a well-known (evidence above) nominal/tangential extra-solution activity.
Claim 7 further limits the mathematical concept, and also adds a mental process given the generality recited therein: a person could mentally observe the data and make a perceptive observation/evaluation, e.g. by looking at a visual depiction of a LiDAR point cloud using commonplace software [see the WURC evidence above; e.g. see the CARLA section “Retrieve simulation data” and how easy it is to use “Meshlab” to produce a human-comprehensible visual representation, i.e. in the “LIDAR output after being processed in Meshlab” visual, one can readily see stairs, columns, etc.], and then make a mental judgment, e.g. observe a wall or pedestrian in front of the vehicle and brake and/or swerve to avoid the accident.
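As a further illustration of how readily such a point cloud can be visually inspected (Meshlab being only one option), the following is a minimal Python sketch using the open-source Open3D library; this is offered for illustration only and is not a reference relied upon, and the file name is hypothetical:

```python
# Minimal sketch: load a saved LiDAR point cloud and display it so a human
# can visually observe obstacles (walls, pedestrians, etc.).
# The file name is hypothetical; any .ply/.pcd point-cloud file would do.
import open3d as o3d

pcd = o3d.io.read_point_cloud("lidar_frame_000123.ply")
print(pcd)  # e.g. "PointCloud with N points"
o3d.visualization.draw_geometries([pcd])
```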
Claim 8 – mere data gathering that is WURC in view of MPEP §2106.05(d)(II): “i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)… iii. Electronic recordkeeping, Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 225, 110 USPQ2d 1984 (2014) (creating and maintaining "shadow accounts"); Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (updating an activity log); iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93;“; also see MPEP § 2106.05(a): “vii. Providing historical usage information to users while they are inputting data, in order to improve the quality and organization of information added to a database, because "an improvement to the information stored by a database is not equivalent to an improvement in the database’s functionality," BSG Tech LLC v. Buyseasons, Inc., 899 F.3d 1281, 1287-88, 127 USPQ2d 1688, 1693-94 (Fed. Cir. 2018); and”
Claim 9 – mere instructions to invoke generic computer components as a tool to perform the abstract idea. Also, see CARLA, the system requirements section, which requires both a CPU and a GPU, and the Rendering options section, which further clarifies that the GPU does the rendering (see “Off-screen vs no-rendering” to clarify, along with the warning in the no-rendering mode), i.e. it is purely conventional. See Davar 2017 as well, fig. 4.10 (note its “CPU time”) and table 4.1, i.e. commercial off-the-shelf components readily available and used in their ordinary capacity.
Also, ¶ 37 of the instant disclosure further clarifies that this is merely generally linking to a particular technological environment, as follows: “The simulation platform 100 may include various hardware components, for example, including but not limited to, CPU(s), GPU(s), memory, cloud resources, etc. The sensor simulator 110, the driving scenario simulator 130, and the vehicle simulator 140 may be executed by the CPU(s) and/or GPU(s).” Further, by omitting details that are well known in the art, the disclosure conveys that such CPUs and GPUs are WURC.
Claim 10 is mere data gathering that is WURC in view of MPEP § 2106.05(d)(II). Also, see ¶ 2, the above WURC evidence, and ¶¶ 25 and 29-32, i.e. all that is needed for this capturing is vehicles equipped with sensors such as LiDAR and cameras, and these are well known. If more evidence is needed, the Examiner takes OFFICIAL NOTICE that Google has long used such vehicles to capture data of driving environments, e.g. for use in Google StreetView in Google Maps.
Claims 12-16 and 18-20 are rejected under similar rationales as discussed above.
Thus, the claimed invention is directed towards an abstract idea of both a mathematical concept and a mental process without significantly more.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 5-9, 12, 15-18, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kanakagiri, Abhishek. Development of a virtual simulation environment for autonomous driving using digital twins. Diss. Technische Hochschule Ingolstadt, 2021 in view of CARLA, User Guide, for version 0.9.10 (dated Sept. 25, 2020 per revision log) of CARLA, URL: carla(dot)readthedocs(dot)io/en/0(dot)9(dot)10/
As a point of clarity regarding the below rationale, CARLA is used to not only modify the teachings of Kanakagiri, but also in view of MPEP § 2131.01: “Normally, only one reference should be used in making a rejection under 35 U.S.C. 102. However, a 35 U.S.C. 102 rejection over multiple references has been held to be proper when the extra references are cited to: …(B) Explain the meaning of a term used in the primary reference; or (C) Show that a characteristic not disclosed in the reference is inherent.”
Regarding Claim 1
Kanakagiri teaches:
A computer-implemented system, comprising: one or more processing units; and one or more non-transitory computer-readable media storing instructions, when executed by the one or more processing units, cause the one or more processing units to perform operations comprising: (Kanakagiri, abstract: “This thesis work focuses on development of framework for Virtual simulation of Autonomous Vehicles(AVs). AVs are complex embedded systems consisting of various Software and Hardware modules. Testing of AVs is crucial to access the safety in different scenarios before they can be deployed on public roads… The second part of the work focuses on developing a general simulation framework using Robot Operating System (ROS), CARLA Simulator, and Autoware Software Stack….”)
[a] generating synthetic sensor data based on low-fidelity mesh data representing an object; [b] generating a synthetic driving scene including the object, the generating the synthetic driving scene is based at least in part on high-fidelity mesh data representing the object; [c] and executing a vehicle compute process based on the synthetic driving scene and the synthetic sensor data.
Kanakagiri teaches this. See the abstract, then see § 1.1 ¶¶ 1-2, noting in particular the use of CARLA.
Then see § 2.2.1 for a brief summary of CARLA: “Carla is a research simulator for autonomous vehicles that is open-source. It was created with the intention of assisting in the development, training, and validation of autonomous urban driving systems. It includes artificial intelligence-powered weather effects, autos, and wandering pedestrians, as well as numerous perspectives with depth, segmentation, and advanced lidar. Because it is based on Unreal Engine 4, it may be utilized on both Windows and Linux platforms.” – and see the remaining subsections discussing the “Physics” of CARLA, then the “3D assets” including: “…There are numerous pedestrian and vehicle models [examples of objects]. Every street has its own look and feel. There are many different sorts of roads, ranging from congested city streets with wide walkways to bridges, suburbs, and forests. The behavior of cars and pedestrians suggests the presence of artificial intelligence. The actors navigate around the city in a very realistic manner: Cars come to a halt at junctions, negotiate right of way, or stop to allow pedestrians to pass. People cross the streets near junctions on walkways. They can also generate dangerous situations by unexpectedly stepping onto the road…” then subsection “Sensor emulation”: “A rotating sensor is implemented as a ray-cast lidar in Carla. The number of channels, points per second, frequency, and range are all fully customizable variables. After some fine-tuning, real-world issues such as displaced points caused by fast object movement may arise. The only thing missing from the implementation of lidar emulation is the consideration of material type while creating the point cloud. Carla doesn’t have any radar simulation. There is no emulation of GPS; a precise position can be obtained. An extra algorithm must be developed to randomize the reported position in a specific area using normal distribution in order to attain real-like GPS position.” - see fig. 2.6 for a visual depiction of scenes in CARLA
Then, see § 3.3, fig. 3.3 in particular: “The simulation environment coupled with a Software Stack and Middleware provides the complete environment in which different aspects of AD such as Perception, Localization, Planning and Control can be tested. Figure 3.3 shows the complete environment used in this project.” – to clarify, § 3.3.2: “RoadRunner is an interactive editor for creating 3D scenarios for testing and simulating automated driving systems. Create region-specific road signs and markings to modify roadway scenes. Signs, signals, guardrails, and road damage, as well as greenery, buildings, and other 3D objects, can all be added by the user. Set and configure traffic signal timing, phases, and vehicle trajectories at crossings with RoadRunner’s tools6.” And see the figure, then see § 3.3.3: “The 3D model of the MuSHR [the vehicle; § 3.1 and fig. 3.1 clarifies] is built in Blender with the Camera and LiDAR sensors [has both simulated camera and LiDAR sensors]. The MuSHR open source project provides the blender compatible files of the chassis and the sensors used in the vehicle. Using blender these components are assembled and scaled, the degrees of freedom for wheels and chassis can be adjusted. This model is then imported in the CARLA simulator for further simulation. The details of importing the map in CARLA, modelling and importing of MuSHR are discussed in the Chapter 4.”
See § 4.2.1 to further clarify on the 3D model in Blender; § 4.2.2 for the “Vehicle Import in CARLA”: “The detailed instruction of a Vehicle model are well documented in CARLA documentation page [20]. The 3D vehicle model generated in Blender serves as Skeletal Mesh in CARLA simulator. The Skeletal Mesh is used to create an Animation Blueprint, Vehicle Physics Assets, Vehicle Blueprint. CARLA Simulator is built using Unreal Engine, it implies that the physics of any actor inside the simulation is based upon Unreal Engine…. During simulation if any other actor’s collision boundary enters the MuSHR’s collision boundary a collision event is detected. Animation Blueprint helps to detect the pose based on the bones defined and thereby final pose of Skeletal mesh is set for every frame. A low density, low poly mesh produced from the real car model is referred to as a static mesh [for the collision detection, i.e. it already has multiple mesh fidelities; to clarify, this is commonly referred to in UE/CARLA as the “Collision mesh” on page 40, further clarification below]. Figure 4.3 shows the MuSHR vehicle model inside the imported HD map.”;
Then, see fig. 4.11, for the execution of the compute process see the “Autoware” which is the “Software stack for perception, location, planning, control” to be executed, see § 4.5 and fig. 4.14 which clarifies that this is exchanging “Sensor data” with “Vehicle control” and the like as detailed in § 4.5
and § 4.4.1: “To create a point-cloud of a region, a test vehicle mounted with LiDAR is driven through this region. While driving the LiDAR sensor records the data of the environment by emitting and detecting the light rays. The generated point-clouds are stitched together to create the point-cloud map. Since the map used in this project is not of any particular region but an amalgamation of multiple regions in and around the city of Ingolstadt and lack of availability of test vehicle this method cannot be used to create the point cloud. Instead of using test vehicle to create the point-cloud, a virtual vehicle is driven in CARLA simulator(in the generated map) and this vehicle is mounted with virtual LiDAR sensor which collects the point-clouds. The same method is used but all the operations are performed inside the CARLA simulator. CARLA ROS Bridge is the sub module of CARLA which can used to perform this operation. ROS is used to acquire any sensor data, in this particular case LiDAR point-clouds. CARLA ROS bridge can be easily installed by following the CARLA documentation. 5” – note, footnote # 5 is: “carla(dot)readthedocs(dot)io/en/0(dot)9(dot)10/ros_installation/”- i.e. this is describing how synthetic LIDAR sensor data is created/generated using the methods of CARLA
§ 5.1 discusses: “ScenarioRunner is a CARLA simulator plugin that enables for the creation and execution of traffic scenarios. A Python interface or the OpenSCENARIO standard can be used to define the scenarios. ScenarioRunner can also be used to prepare AD agents for their evaluation by allowing them to travel through complex traffic scenarios and routes. ScenarioRunner can be installed using the Carla Scenario runner documentation1” – see the remaining portions for details; as well as § 5.2 for examples of “five scenarios”, e.g. fig. 5.1 which shows the synthetic driving scene in Scenario 3
In summary, Kanakagiri teaches limitation [a] by generating synthetic LIDAR and camera data using the methods of CARLA, [b] by generating the driving scene [e.g. fig. 5.1] by the methods of CARLA, and [c] by the execution of the Autoware stack as detailed above, wherein in doing the simulations there are meshes of varying fidelities.
As to the objects in Kanakagiri, see § 5.1: “Initialize Method: The initialize method is used to set up all of the scenario’s and vehicles’ parameters. This includes picking the right vehicles [notice the plural, and the word “includes”], spawning them in the right place, and so on. One can utilize the predefined setup vehicle() function from basic scenario.py to make things easier… Create Behavior Method: This function should create a behavior tree that encapsulates the non-ego vehicle’s behavior throughout the scenario.” – e.g. § 5.2, see scenarios 2-5, each of which include an object to be tested, e.g. “a cyclist”; “a pedestrian/cyclist”; “user-controlled ego vehicle follows a leading car driving down a given road. At some point the leading car slows down and finally stops. The ego vehicle has to react accordingly to avoid a collision. Scenario 5 (Other Leading Vehicle9: The user-controlled ego vehicle follows a leading car driving down a given road. At some point the leading car has to decelerate. The ego vehicle has to react accordingly by changing lane to avoid a collision and follow the leading car in other lane.” – e.g. fig. 5.1 depicts a rendered simulated pedestrian model
The use of the fidelities in the manner claimed is inherent in the LIDAR simulation of Kanakagiri, and, if not, it would have been obvious to try and would amount to a simple combination of known elements according to known methods to achieve a predictable result (rationale below).
See CARLA, user guide, as was referenced extensively in Kanakagiri, for version 0.9.10 of CARLA as referenced in Kanakagiri
As was noted above in Kanakagiri “Sensor emulation” on page 17: “A rotating sensor is implemented as a ray-cast lidar in Carla”; but Kanakagiri defers to CARLA’s manual, e.g. footnote # 5 on page 42 along with ¶ 2 and indicates it merely uses CARLA’s implementation of a “virtual LiDAR sensor”
So see CARLA, section “Generate detailed colliders”: “This tutorial explains how to create more accurate collision boundaries for vehicles (relative to the original shape of the object). These can be used as physics collider, compatible with collision detection, or as a secondary collider used by raycast-based sensors such a the LIDAR to retrieve more accurate data… Raycast colliders — This approach requires some basic 3D modelling skills. A secondary collider is added to the vehicle so that ray cast-based sensors such as the LIDAR retrieve more precise data.” – step 1: “First of all, the original mesh [high fidelity used in other rendering tasks in the scene] of the vehicle is necessary to be used as reference. For the sake of learning, this tutorial exports the mesh of a CARLA vehicle”, step 2: “Generate a low density mesh”, step 2.1: “Open a 3D modelling software and, using the original mesh as reference, model a low density mesh that stays reliable to the original” – then, see step 4, in particular: “Select the CustomCollision element and add the SM_sc_<model_of_vehicle>.fbx in the Static mesh property”, with the below screen capture demonstrating that this is in the Unreal Engine GUI (right-hand side, assignment of static mesh). And in particular, note: “For vehicles such as motorbikes and bicycles, change the collider mesh of the vehicle itself using the same component” – i.e. each “CARLA vehicle” to be detected by LIDAR needs a similar low-density mesh, hence the note; and first sentence in this tutorial: “This tutorial explains how to create more accurate collision boundaries for vehicles [in the plural] (relative to the original shape of the object).”
[Image: media_image2.png (greyscale screen capture)]
To clarify, see the Sensors reference portion of the User Guide, subsection: “LIDAR sensor”: “This sensor simulates a rotating LIDAR implemented using ray-casting. The points are computed by adding a laser for each channel distributed in the vertical FOV. The rotation is simulated computing the horizontal angle that the Lidar rotated in a frame. The point cloud is calculated by doing a ray-cast for each laser in every step.”. Also, note the other sensors that CARLA was able to simulate in 2021, including multiple types of cameras
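For illustration only (and only to the extent the cited user guide itself documents the client API), the Examiner notes that attaching this ray-cast LIDAR blueprint is done through CARLA’s Python client; the sketch below assumes a running CARLA 0.9.10 server, and the host/port, attribute values, vehicle choice, and mounting position are hypothetical:

```python
import carla

# Connect to an already-running CARLA 0.9.10 server (host/port assumed).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
bp_lib = world.get_blueprint_library()

# Spawn some ego vehicle to carry the sensor (blueprint choice hypothetical).
ego_vehicle = world.spawn_actor(bp_lib.filter("vehicle.*")[0],
                                world.get_map().get_spawn_points()[0])

# The ray-cast LIDAR blueprint from the sensor reference; values hypothetical.
lidar_bp = bp_lib.find("sensor.lidar.ray_cast")
lidar_bp.set_attribute("channels", "32")
lidar_bp.set_attribute("points_per_second", "56000")
lidar_bp.set_attribute("rotation_frequency", "10")
lidar_bp.set_attribute("range", "50")

# Mount the sensor on the ego vehicle.
lidar_transform = carla.Transform(carla.Location(x=0.0, z=2.4))
lidar = world.spawn_actor(lidar_bp, lidar_transform, attach_to=ego_vehicle)

# Each measurement is a point cloud computed by ray-casting against the
# colliders discussed above; here each frame is simply written to disk.
lidar.listen(lambda data: data.save_to_disk("lidar/%06d.ply" % data.frame))
```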
For further clarification, in CARLA, section “Retrieve simulation data”: “First, the simulation is initialized with custom settings and traffic. An ego vehicle is set to roam around the city, optionally with some basic sensors. The simulation is recorded, so that later it can be queried to find the highlights. After that, the original simulation is played back, and exploited to the limit. New sensors can be added to retrieve consistent data. The weather conditions can be changed. The recorder can even be used to test specific scenarios with different outputs.” – e.g. as was used in Kanakagiri § 5.2, then see the subsection on “LIDAR raycast sensor”, in particular see the figure at step 3 which shows the “LIDAR output after being processed in Meshlab”
[Image: media_image3.png (greyscale; “LIDAR output after being processed in Meshlab”)]
Thus, the LIDAR simulation in CARLA of Kanakagiri for driving around, as was discussed above, inherently used a low-poly/low-resolution mesh for the LIDAR generation of the synthetic point cloud data, because Kanakagiri used the ray-cast LIDAR of CARLA, wherein the driving scene was rendered with other meshes.
Otherwise, should it be found that this was not inherent, it would have been obvious to try (given there are only two collider types for sensors in CARLA, and only two types of LIDAR virtual sensors listed in CARLA’s sensor reference, i.e. a “LIDAR sensor” and a “Semantic LIDAR sensor”), and it amounts to merely combining known elements in known ways, i.e. a POSITA need merely refer to the user guide of CARLA to arrive at this combination and follow its known methods to achieve its known elements. Also, they would have been motivated because “A secondary collider is added to the vehicle so that raycast-based sensors such as the LIDAR retrieve more precise data.” (CARLA, section on “Generate detailed colliders”).
There is, however, at least one distinction, because Kanakagiri fig. 5.1 depicts what CARLA describes as a “Low” graphics quality, i.e. Kanakagiri does not expressly anticipate the high-fidelity mesh, but rather only that there are two different mesh fidelities for the two simulations.
However, this would have been obvious in view of Kanakagiri in view of CARLA:
See Kanakagiri, fig. 5.1, which shows that the “Low” mode was used (note the sky was not rendered, the same as CARLA visibly depicts in the “Low” mode), when read in view of the CARLA section on “How to model vehicles”, in particular: “All vehicle LODs must be made in Maya or other 3D software. Because Unreal does not generate LODs automatically, you can adjust the number of Tris to make a smooth transitions between levels… Level 0 – Original Level 1 - Deleted 2.000/2.500 Tris (Do not delete the interior and steering wheel) Level 2 - Deleted 2.000/2.500 Tris (Do not delete the interior) Level 3 - Deleted 2.000/2.500 Tris (Delete the interior) Level 4 - Simple shape of a vehicle. [different fidelities/LODs of the mesh for the rendering]”, followed by the section “Graphics quality”: “CARLA also allows for two different graphic quality levels. Epic, the default is the most detailed [most detailed meshes of objects rendered]. Low disables all post-processing and shadows, the drawing distance is set to 50m instead of infinite.” – as visibly depicted, wherein one can visually see in the “Low” mode that the driving scene is using much lower fidelity meshes [lower LODs] for all objects as compared to the “Epic” mode, i.e. the distinction in these modes, as a POSITA would have readily known/recognized, was which LOD of the models was used, and the culling distance (i.e. how far away objects must be before they are culled/removed from the rendering: “Low disables all post-processing and shadows, the drawing distance is set to 50m instead of infinite.”).
Therefore, it would have been obvious to try the “Epic” mode of CARLA from this finite list of two modes, e.g. by using a computer with a better GPU, and a POSITA would further/also have been motivated to use the “Epic” mode because it is “Epic”, i.e. it “is the most detailed” rendering.
In doing so, the LIDAR sensor simulation would still use the low-density mesh, as discussed above, for its collider, while the rendering of the scene would use high-fidelity meshes.
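For illustration only, the quality level is a launch option of the CARLA server per the cited rendering/quality documentation; a minimal sketch of selecting it follows, where the launcher path is hypothetical and only the quality-level flag is taken from the CARLA documentation:

```python
import subprocess

# Launch the CARLA 0.9.10 server with the "Epic" graphics quality level.
# The flag is documented in CARLA's rendering/quality settings; the path to
# the launcher script is hypothetical and depends on the installation.
subprocess.Popen(["./CarlaUE4.sh", "-quality-level=Epic"])
```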
Regarding Claim 2
Kanakagiri in view of CARLA teaches:
The computer-implemented system of claim 1, wherein the low-fidelity mesh data includes a smaller number of at least one of vertices, faces, bones, or polygons than the high-fidelity mesh data. (Kanakagiri, in view of CARLA as discussed above, teaches this, as the instruction is to “Generate a low density mesh” (CARLA, as cited above).)
Regarding Claim 3
Kanakagiri in view of CARLA teaches:
The computer-implemented system of claim 2, wherein the smaller number of the at least one of vertices, faces, bones, or polygons of the low-fidelity mesh data is based on a characteristic of the object. (Kanakagiri, in view of CARLA as discussed above, teaches this, as the instruction is to “Generate a low density mesh” (CARLA, as cited above), and “2.1 Open a 3D modelling software and, using the original mesh as reference, model a low density mesh that stays reliable to the original.”)
Regarding Claim 5
Kanakagiri in view of CARLA teaches:
The computer-implemented system of claim 1, wherein the generating the synthetic sensor data comprises generating light detection and ranging (LIDAR) return signals based on the low-fidelity mesh data representing the object. (Kanakagiri, in view of CARLA as discussed above, teaches this, as the instruction is to “Generate a low density mesh” (CARLA, as cited above), and “2.1 Open a 3D modelling software and, using the original mesh as reference, model a low density mesh that stays reliable to the original.”, wherein this “mesh” is used for the “Raycast collider” to implement the raycast LIDAR sensor.)
To clarify, Kanakagiri, page 42: “To create a point-cloud of a region, a test vehicle mounted with LiDAR is driven through this region. While driving the LiDAR sensor records the data of the environment by emitting and detecting the light rays. The generated point-clouds are stitched together to create the point-cloud map. Since the map used in this project is not of any particular region but an amalgamation of multiple regions in and around the city of Ingolstadt and lack of availability of test vehicle this method cannot be used to create the point cloud. Instead of using test vehicle to create the point-cloud, a virtual vehicle is driven in CARLA simulator(in the generated map) and this vehicle is mounted with virtual LiDAR sensor which collects the point-clouds. The same method is used but all the operations are performed inside the CARLA simulator.” – and see CARLA, sensor reference section on the “LIDAR sensor”: “This sensor simulates a rotating LIDAR implemented using ray-casting. The points are computed by adding a laser for each channel distributed in the vertical FOV. The rotation is simulated computing the horizontal angle that the Lidar rotated in a frame. The point cloud is calculated by doing a ray-cast for each laser in every step.”- to clarify, note that this is using a “collider” for LIDAR, i.e. its emitting the lasers as ray casts, and detecting when they collide with the static low-poly meshes, hence: “Select the CustomCollision element and add the SM_sc_<model_of_vehicle>.fbx in the Static mesh property.” And its note: “For vehicles such as motorbikes and bicycles, change the collider mesh of the vehicle itself fusing the same component.”, generating a point cloud such as the one depicted in the LIDAR section of the sensor reference below:
[Image: media_image4.png (greyscale; LIDAR point cloud from the CARLA sensor reference)]
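To further illustrate (purely as an explanatory sketch, not taken from the references) what “a laser for each channel distributed in the vertical FOV” with a ray-cast per laser per step amounts to computationally, the following numpy sketch generates the ray directions for one idealized rotation; all parameter values are hypothetical:

```python
import numpy as np

def lidar_ray_directions(channels=32, upper_fov_deg=10.0, lower_fov_deg=-30.0,
                         points_per_rotation=1024):
    """Unit ray directions for one full rotation of an idealized ray-cast
    LiDAR: one laser per channel spread over the vertical field of view,
    swept through 360 degrees of horizontal angle (one ray-cast per laser
    per step)."""
    vert = np.radians(np.linspace(upper_fov_deg, lower_fov_deg, channels))
    horiz = np.linspace(0.0, 2.0 * np.pi, points_per_rotation, endpoint=False)
    v, h = np.meshgrid(vert, horiz, indexing="ij")
    directions = np.stack((np.cos(v) * np.cos(h),
                           np.cos(v) * np.sin(h),
                           np.sin(v)), axis=-1)
    return directions.reshape(-1, 3)

rays = lidar_ray_directions()
print(rays.shape)  # (32 * 1024, 3): one direction per simulated laser shot
```

Each such ray is then intersected with the scene's collider meshes (the low-density meshes discussed above) to produce one point of the synthetic point cloud.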
Regarding Claim 6
Kanakagiri in view of CARLA teaches:
The computer-implemented system of claim 1, wherein the operations further comprise:
generating a synthetic image of the synthetic driving scene based at least in part on the high-fidelity mesh data representing the object. (Kanakagiri, as was discussed above in view of CARLA, noting in particular the figures cited to showing the driving scene)
Regarding Claim 7
Kanakagiri in view of CARLA teaches:
The computer-implemented system of claim 1, wherein the executing the vehicle compute process comprises:
determining a perception of the object based on the synthetic sensor data generated based on the low-fidelity mesh data; and determining at least one of a prediction, a path, or a vehicle control based on the perception and the synthetic driving scene. (Kanakagiri in view of CARLA as discussed above, including page 42 of Kanakagiri as was taken in view of CARLA for how the LIDAR in CARLA is implemented with the low-poly mesh, and fig. 5.1 for an example of a driving scene simulated: “Scenario 3 (Dynamic Object Crossing): this scenario a pedestrian/cyclist suddenly moves from one end of the road to other in front of the ego vehicle path”, and fig. 4.11 for the “Autoware” which is the “Software stack for Perception, Localization, Planning, Control”, wherein fig. 4.14 clarifies this; in particular, note the exchange of the “sensor data” and corresponding “vehicle control”.)
[Image: media_image5.png (greyscale)]
Regarding Claim 8
Kanakagiri in view of CARLA teaches:
The computer-implemented system of claim 1, wherein the operations further comprise:
reading, from a mesh object library, the high-fidelity mesh data and the low-fidelity mesh data. (Kanakagiri, as discussed above in view of CARLA, i.e. Kanakagiri creates the models of objects in Blender in § 4.2.1 and imports them into CARLA in § 4.2.2 for a high fidelity mesh of at least the vehicle wherein Kanakagiri § 4.2.2 notes: “The 3D vehicle model generated in Blender serves as Skeletal Mesh in CARLA simulator. The Skeletal Mesh is used to create an Animation Blueprint, Vehicle Physics Assets, Vehicle Blueprint… During simulation if any other actor’s collision boundary enters the MuSHR’s collision boundary a collision event is detected. Animation Blueprint helps to detect the pose based on the bones defined and thereby final pose of Skeletal mesh is set for every frame. A low density, low poly mesh produced from the real car model is referred to as a static mesh.” And page 40: “While creating the blueprints for the wheels by changing the Shape of the Collision mesh to Wheel Shape it will be by default as cylinder the issue is resolved “– and CARLA, section on generating Colliders, specifically steps 3-4 clarify on this – of particular note, step 3.1: “Content/Carla/Static/Vehicles/4Wheeled/<model_of_vehicle>” which comprises a library of the meshes – see step 3.2 to clarify; and see in the GUI in step 4 the left-hand side which shows the “Mesh” object and its library of different elements, including this “CustomCollision” element
to further clarify, in CARLA see the section on “How to model vehicles”, in particular: “All vehicle LODs must be made in Maya or other 3D software. Because Unreal does not generate LODs automatically, you can adjust the number of Tris to make a smooth transitions between levels… Level 0 – Original Level 1 - Deleted 2.000/2.500 Tris (Do not delete the interior and steering wheel) Level 2 - Deleted 2.000/2.500 Tris (Do not delete the interior) Level 3 - Deleted 2.000/2.500 Tris (Delete the interior) Level 4 - Simple shape of a vehicle. [different fidelities/LODs of the mesh for the rendering]”, followed by the section “Graphics quality”: “CARLA also allows for two different graphic quality levels. Epic, the default is the most detailed [most detailed meshes of objects rendered]. Low disables all post-processing and shadows, the drawing distance is set to 50m instead of infinite.” – as visibly depicted, wherein one can visually see in the “Low” mode that the driving scene is using much lower fidelity meshes [lower LODs] for all objects as compared to the “Epic” mode, i.e. the distinction in these modes, as a POSITA would have readily known/recognized, was which LOD of the models was used, and the culling distance (i.e. how far away objects must be before they are culled/removed from the rendering: “Low disables all post-processing and shadows, the drawing distance is set to 50m instead of infinite.”), i.e. in “Epic” mode it uses the highest LODs [mesh fidelities] from the library, and in “Low” mode it uses the lowest from the library, as visibly depicted.)
Regarding Claim 9
Kanakagiri in view of CARLA teaches:
The computer-implemented system of claim 1, wherein the one or more processing units comprises:
at least one central processing unit (CPU), wherein the generating the synthetic sensor data is performed by the CPU, at least one graphical processing unit (GPU), wherein the generating the synthetic driving scene is performed by the GPU.
Kanakagiri/CARLA teaches this. Kanakagiri, page 20, table 2.3 lists the hardware requirements, wherein CARLA requires a “GPU”, e.g. an “NVIDIA GeForce 470 GT”, along with a “Quad core [CPU]” – to clarify, in CARLA, section “Windows build” it lists the “Requirements”: “An adequate GPU. CARLA aims for realistic simulations, so the server needs at least a 4GB GPU. A dedicated GPU is highly recommended for machine learning.” – as later clarified in the “Rendering options” section, subsection “Off-screen vs no-rendering”: “In off-screen, Unreal Engine is working as usual, rendering is computed. Simply, there is no display available. GPU sensors return data when off-screen, and no-rendering mode can be enabled at will.” – i.e. the GPU is doing the rendering (generating the scene), the CPU is doing the other processing.
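For illustration only, the cited rendering options are exposed to the client through the CARLA Python API; a minimal sketch of toggling the no-rendering mode discussed above follows, assuming a running CARLA 0.9.10 server with hypothetical host/port:

```python
import carla

# Connect to an already-running CARLA server (host/port hypothetical).
client = carla.Client("localhost", 2000)
world = client.get_world()

# Toggle the documented no-rendering mode: the GPU scene rendering is
# skipped while the rest of the simulation keeps running on the CPU.
settings = world.get_settings()
settings.no_rendering_mode = True
world.apply_settings(settings)
```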
Regarding Claim 11.
Rejected under a similar rationale as claim 1.
Regarding Claim 12.
Rejected under a similar rationale as claim 2.
Regarding Claim 15.
Kanakagiri in view of CARLA teaches:
The apparatus of claim 11, wherein the sensor simulator further:
generates the LIDAR data by applying a LIDAR sensor simulation model to the low- fidelity mesh data; (Kanakagiri, as was taken in view of CARLA above teaches this)
and generates an image of the driving scene by applying a camera sensor simulation model to at least the high-fidelity mesh data. (Kanakagiri, § 3.1: “A RGBD camera (Intel Realsense D435i), a Laser scanner (YLIDAR X4) for distance measurements.” Followed by § 3.3: “The 3D model of the MuSHR is built in Blender with the Camera and LiDAR sensors.” And § 4.2.1: “The components or blend files provided from MuSHR vehicle development team[18] are assembled in the Blender environment. The Chassis of the Vehicle is assembled with wheels, Camera and LiDAR sensors.” Then § 4.2.2: “The detailed instruction of a Vehicle model are well documented in CARLA documentation page [20]. [the Examiner noting an obvious typo, as reference # 19 is the documentation page for CARLA] to “Add a new vehicle”” – to clarify, § 4.3: “Since MuSHR is in 1/10 the actual size of an vehicle the HD map in which it drives/operated can be modelled in 1/10 scale or the MuSHR can be scaled up (10 times) in the virtual environment” and § 4.3.2: “All the dimensions of MuSHR like track-width, wheelbase, chassis dimension, camera and lidar dimensions are multiplied by a factor of 10 and the vehicle model is changed.” – i.e. this at least suggests the CARLA implemented the camera for use in the scenario simulations (§ 5.2, e.g. scenarios 2-5)
in view of CARLA, section “Sensors reference”, which teaches the use of “Blueprint: sensor.camera.depth” to implement a depth camera (an RGBD camera), and see its discussion; also, there are other camera variants that would have been obvious to try in this finite list, e.g. the “RGB camera”, wherein “The "RGB" camera acts as a regular camera capturing images from the scene [the high fidelity mesh].”, and it notes that various “effects” can be added (see the tables for its attributes)
[Image: media_image6.png (greyscale)]
Thus, it would have at least been suggested/obvious to arrive at the presently claimed invention in view of Kanakagiri and CARLA as cited above, because it would have at least been obvious to try to implement the camera models of CARLA in the simulated driving scenarios in CARLA of Kanakagiri, wherein Kanakagiri already has a camera on the vehicle, and all that is left is simply choosing which camera model to use in the simulation from the list of available models in CARLA (already used in Kanakagiri), to arrive at a predictable result.
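For illustration only, a minimal sketch of attaching the cited RGB camera blueprint through the same CARLA Python client API follows; a running CARLA 0.9.10 server is assumed, and the host/port, attribute values, vehicle choice, and mounting position are hypothetical:

```python
import carla

# Assumes a running CARLA 0.9.10 server; host/port, attribute values, and
# the choice of vehicle blueprint are hypothetical.
client = carla.Client("localhost", 2000)
world = client.get_world()
bp_lib = world.get_blueprint_library()

ego_vehicle = world.spawn_actor(bp_lib.filter("vehicle.*")[0],
                                world.get_map().get_spawn_points()[0])

camera_bp = bp_lib.find("sensor.camera.rgb")
camera_bp.set_attribute("image_size_x", "1280")
camera_bp.set_attribute("image_size_y", "720")
camera_bp.set_attribute("fov", "90")

camera = world.spawn_actor(camera_bp,
                           carla.Transform(carla.Location(x=1.5, z=2.4)),
                           attach_to=ego_vehicle)

# Each frame rendered from the scene (i.e. from the rendered meshes) is saved.
camera.listen(lambda image: image.save_to_disk("camera/%06d.png" % image.frame))
```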
Regarding Claim 16.
Kanakagiri in view of CARLA teaches:
The apparatus of claim 15, wherein the vehicle simulator simulates the at least one of the operation or the behavior of the vehicle by:
determining at least one of a perception, a prediction, a path, or a control decision based on the image and the LIDAR data. (Kanakagiri in view of CARLA as discussed above for claims 1 and 15 teaches this; e.g. see fig. 4.11, and for the execution of the compute process see the “Autoware” which is the “Software stack for perception, location, planning, control” to be executed; see § 4.5 and fig. 4.14, which clarify that this is exchanging “Sensor data” with “Vehicle control” and the like, as detailed in § 4.5.
To clarify, see page 42 of Kanakagiri as was taken in view of CARLA for how the LIDAR in CARLA is implemented with the low-poly mesh, and fig. 5.1 for an example of a driving scene simulated: “Scenario 3 (Dynamic Object Crossing): this scenario a pedestrian/cyclist suddenly moves from one end of the road to other in front of the ego vehicle path”, and fig. 4.11 for the “Autoware” which is the “Software stack for Perception, Localization, Planning, Control”, wherein fig. 4.14 clarifies this; in particular, note the exchange of the “sensor data” and corresponding “vehicle control”.)
[Image: media_image5.png (greyscale)]
Regarding Claim 17.
Rejected under a similar rationale as claims 1, 15-16 as discussed above.
Regarding Claim 18.
Rejected under a similar rationale as claim 2 above.
Regarding Claim 20.
Rejected under a similar rationale as claim 16 above.
Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kanakagiri, Abhishek. Development of a virtual simulation environment for autonomous driving using digital twins. Diss. Technische Hochschule Ingolstadt, 2021 in view of CARLA, User Guide, for version 0.9.10 (dated Sept. 25, 2020 per revision log) of CARLA, URL: carla(dot)readthedocs(dot)io/en/0(dot)9(dot)10/ and in view of Isteník, Matej. "Large boss characters in Unity engine: Reimplementation of gameplay mechanics from Shadow of the Colossus video game." (2015).
Regarding Claim 4
Kanakagiri in view of Carla teaches:
The computer-implemented system of claim 1, wherein the object represented by the high-fidelity mesh data … mesh data includes a human. (Kanakagiri, § 5.2: “Scenario 3 (Dynamic Object Crossing): this scenario a pedestrian/cyclist suddenly moves from one end of the road to other in front of the ego vehicle path” – and § 3.3: “The simulation environment coupled with a Software Stack and Middleware provides the complete environment in which different aspects of AD [autonomous driving] such as Perception, Localization, Planning and Control can be tested.” – and fig. 5.1, i.e. a POSITA would have inferred, or at least been suggested, that the test was whether the autonomous vehicle would brake/steer away from the pedestrian shown in fig. 5.1 so as to not run them over (note fig. 5.1 shows the GUI, which includes a status indicator for the “Break” as a percent bar at 0%, wherein the “throttle” is a full bar, i.e. it was at full throttle and appears to be about to run them over, which presumably would fail the test).
Note § 4.2.2 as well: “CARLA Simulator is built using Unreal Engine, it implies that the physics of any actor inside the simulation is based upon Unreal Engine. A Physics Asset defines the collision boundary of any actor.” And page 36: “CARLA has both static and dynamic actors in it’s simulation environment. The dynamic actors are those actors which move in the simulation, ex: vehicles, pedestrians, traffic lights”
As taken in view of CARLA as cited above for the high-fidelity mesh data (as it is clearly rendered).)
[Image: media_image7.png (greyscale)]
However, Kanakagiri in view of CARLA does not explicitly teach that the low-fidelity data includes a human, but it does teach that the ray-cast simulation uses a custom collider with low-poly meshes for collision detection with other vehicles, including bicycles and motorcycles (CARLA, section “Generate detailed colliders”, the Note near the end: “For vehicles such as motorbikes and bicycles, change the collider mesh of the vehicle itself using the same component”).
This feature would have been obvious when Kanakagiri in view of Carla was taken in view of Istenik, § 3.2, in particular: “For example, in the case that bodies of a humanoid characters should not collide with each other, just a simplified shape as circle, box or capsule can be used, as can be seen on the figure 8 (page 10). In other cases, much more detailed collision information can be needed, such as detection of which part of an object participate in the collision (humanoid’s arm, vehicle’s component, etc). In this case, mesh colliders or multiple primitive colliders (known as compound collider) can be used to form the shape of the object, as can be seen on the figure 9 (page 10).”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings from Kanakagiri in view of CARLA on the use of mesh colliders for LiDAR simulation in AV simulations with pedestrians with the teachings from Istenik on using mesh colliders for humans (to clarify, in this combination a POSITA would still have found it obvious to use a low-poly mesh for the collision mesh for the human, as they would be using the LiDAR simulation method of CARLA, which did this). The motivation to combine would have been that mesh colliders provide “much more detailed collision information”, such as allowing detection of collisions with a “humanoid’s arm”; e.g., in the AV simulations discussed above, this would provide more information on where each part of the pedestrian was, such as when the pedestrian was a hitch-hiker sticking their arm out into the road, or a person calling a taxi with their arm out in the road.
Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kanakagiri, Abhishek. Development of a virtual simulation environment for autonomous driving using digital twins. Diss. Technische Hochschule Ingolstadt, 2021 in view of CARLA, User Guide, for version 0.9.10 (dated Sept. 25, 2020 per revision log) of CARLA, URL: carla(dot)readthedocs(dot)io/en/0(dot)9(dot)10/ and in view of Manivasagam, Sivabalan, et al. "Lidarsim: Realistic lidar simulation by leveraging the real world." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
Regarding Claim 10.
While Kanakagiri in view of CARLA does not explicitly teach the following, Kanakagiri in view of CARLA, and in further view of Manivasagam, teaches:
The computer-implemented system of claim 1, wherein the generating the synthetic driving scene is based on data captured from a real-world driving environment. (Kanakagiri, as was discussed in view of CARLA above, taken in further view of Manivasagam:
Manivasagam, § 1: “More recently, advanced real-time rendering techniques have been exploited in autonomous driving simulators, such as CARLA and AirSim [8, 33]. However, their virtual worlds use handcrafted 3D assets and simplified physics assumptions resulting in simulations that do not represent well the statistics of real-world sensory data, resulting in a large sim-to-real domain gap. Closing the gap between simulation and the real-world requires us to better model the real-world environment and the physics of the sensing processes. In this paper we focus on LiDAR, as it is the sensor of preference for most self-driving vehicles since it produces 3D point clouds from which 3D estimation is simpler and more accurate compared to using only cameras. Towards this goal, we propose LiDARsim, a novel, efficient, and realistic LiDAR simulation system. We argue that leveraging real data allows us to simulate LiDAR in a more realistic manner. LiDARsim has two stages: assets creation and sensor simulation (see Fig. 2). At assets creation stage, we build a large catalog of 3D static maps and dynamic object meshes by driving around several cities with a vehicle fleet and accumulating information over time to get densified representations. This helps us simulate the complex world more realistically compared to employing virtual worlds designed by artists.”
To clarify, § 2: “We believe one reason for this domain gap is that the artist-generated environments are not diverse enough and the simplified physics models used do not account for important properties for sensor simulation such as material reflectivity or incidence angle of the sensor observation, which affect the output point cloud” – see § 3.1 to further clarify on the 3D mapping into the 3D mesh, including: “Towards this goal, we collected data by driving over the same scene multiple times. On average, a static scene is created from 3 passes.”, followed by § 3.2 for the “3D Reconstruction of Objects for Simulation”: “To create realistic scenes, we also need to simulate dynamic objects, such as vehicles, cyclists, and pedestrians. Similar to our maps in Sec. 3.1, we leverage the real world to construct dynamic objects, where we can encode complicated physical phenomena not accounted for by ray casting via the recorded geometry and intensity metadata.”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings from Kanakagiri in view of CARLA on using CARLA for simulating AVs with the teachings from Manivasagam on the “assets creation” stage. The motivation to combine would have been that “This helps us simulate the complex world more realistically compared to employing virtual worlds designed by artists” and “We believe one reason for this domain gap is that the artist-generated environments are not diverse enough” (Manivasagam, as cited above).
Claim(s) 13-14 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kanakagiri, Abhishek. Development of a virtual simulation environment for autonomous driving using digital twins. Diss. Technische Hochschule Ingolstadt, 2021 in view of CARLA, User Guide, for version 0.9.10 (dated Sept. 25, 2020 per revision log) of CARLA, URL: carla(dot)readthedocs(dot)io/en/0(dot)9(dot)10/ and in view of Orangemittens, “How to Reduce Mesh Poly Count”, Forum Post on Sims 4 studio forum, March 23rd, 2015.
Regarding Claim 13.
While Kanakagiri in view of CARLA does not explicitly teach the following, it is taught when Kanakagiri in view of CARLA is taken in further view of Orangemittens:
The apparatus of claim 12, wherein the smaller number of the at least one vertices, edges, faces, or polygons is based on a configuration parameter.
Kanakagiri, as discussed above for its use of CARLA’s raycast LIDAR, in view of CARLA describing how it was set up, in particular the CARLA section “Raycast colliders” as discussed above, step 2.1: “Open a 3D modelling software and, using the original mesh as reference, model a low density mesh that stays reliable to the original”
Wherein Kanakagiri used “Blender” as well as “Unreal Editor” - § 3.3 of Kanakagiri, as further detailed in § 4.2.1 including: “Vehicle Skeletal Mesh is a feature in the FBX import pipeline that allows you to transfer animated meshes from 3D modeling software like Blender to Unreal Editor in a seamless manner…. To include physics and vehicle dynamics constraints into the model, the blender ready 3D vehicle model is imported into Carla Unreal Editor 4.24.”, then see § 4.2.2
But Kanakagiri, in view of CARLA, does not explicitly teach that the low-poly mesh is based on a configuration parameter; rather, they merely do this using 3D modeling software, e.g. Blender. Kanakagiri, in view of CARLA and in further view of Orangemittens, teaches this though.
Orangemittens teaches in “Blender 2.70” [note Kanakagiri is already using Blender for their modeling software and that the meshes are imported from Blender, wherein CARLA as cited above teaches that the meshes are imported from a 3D modeling software, e.g. Blender] that “This tutorial will show you how to use Blender's Decimate feature to reduce poly count in your mesh. It will not show you how to create or map the mesh. Although this tutorial uses an object as the example Blender's decimate feature works the same way on any mesh whether it's an object or a CAS item.” – and see instructions: “1. Open your mesh in Blender and right mouse click to select it. 2. Click the Object Modifiers tab (looks like a small wrench). 3. Click Add Modifier. 4. Choose Decimate from the modifiers menu. 5. Lower the ratio by sliding your mouse to the left in this box. Alternatively you can enter the ratio you want [example of a configuration parameter]. You will notice the face count and the Verts | Face counts going down as you reduce the ratio. You will also notice that your mesh begins to become deformed the further down you take the number. ***Note: the second highest LOD is seen at a fairly close distance by people with their graphics settings set low so don't reduce the poly so much that the mesh is really deformed for that LOD. Use EA's .package set up as your guide.*** 6. Once the count is lowered to your satisfaction click the Apply button. This must be done in Object Mode. If you don't click the Apply button the change you made will not take effect. 7. If your object has more than one mesh repeat these steps for the other mesh. You can even reduce the poly count of the shadow plane a small amount if it has many vertices in it. Don't forget to click the Apply button. 8. Click File and Save As. Name the new .blend something to set it apart from your high LOD mesh. 9. Back in Studio set the LOD menu to LOD 1 (Medium) and click the Import Mesh button.”
To further clarify, see the GUI as annotated by Orangemittens at step 5, reproduced below.
[media_image8.png – greyscale screenshot of Blender’s Decimate modifier panel, as annotated by Orangemittens]
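For illustration only, the GUI workflow quoted above can equivalently be expressed through Blender’s Python API (bpy). The following minimal sketch is the examiner’s illustration, not code from Orangemittens or Kanakagiri, and the ratio value shown is hypothetical; it is provided to show that the amount of reduction is a single configuration parameter (the Decimate modifier’s “ratio”):
import bpy

# Configuration parameter: fraction of the original faces to keep (hypothetical value)
TARGET_RATIO = 0.5

obj = bpy.context.active_object                              # step 1: the selected high-poly mesh
print('faces before:', len(obj.data.polygons))

mod = obj.modifiers.new(name='Decimate', type='DECIMATE')    # steps 2-4: add the Decimate modifier
mod.ratio = TARGET_RATIO                                     # step 5: "enter the ratio you want"

bpy.ops.object.modifier_apply(modifier=mod.name)             # step 6: Apply, in Object Mode (recent Blender versions)
print('faces after:', len(obj.data.polygons))

# steps 8-9 / CARLA step 2.2: save the reduced mesh, e.g. export as FBX for import elsewhere
bpy.ops.export_scene.fbx(filepath='low_poly.fbx', use_selection=True)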
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings from Kanakagiri in view of CARLA, on a system which used the CARLA simulator with Blender modeling software and generates low-poly meshes with the modeling software (i.e. Blender/Unreal Editor, in the present combination), with the teachings from Orangemittens, a tutorial on “How to Reduce Mesh Poly Count” in “Blender”. The KSR rationale of combining known elements using known methods applies, i.e. this merely requires that the user, in order to create the low-poly mesh for the mesh collider used by the raycast LIDAR, go to the same software Kanakagiri used for generating the initial meshes and use a feature found in that software’s GUI (note Orangemittens provides a series of screenshots to show how simple this is to do).
A POSITA would also have been motivated to do this because it is very simple to do using the same software packages already in use in Kanakagiri, wherein CARLA already states “Open a 3D modelling software [e.g. Blender] and, using the original mesh as reference, model a low density mesh that stays reliable to the original” at step 2.1 for generating a low density mesh, followed by “2.2 Save the new mesh as FBX.” – i.e. Kanakagiri in view of CARLA already at least suggests the act of making the low-poly mesh using a 3D modeling software.
Regarding Claim 14.
Rejected under a similar rationale as claim 13 above, wherein this claim merely requires the “Ratio” to be set in the GUI of Blender so as to reduce it below 30% - to clarify, in Orangemittens step 5: “Alternatively you can enter the ratio you want. You will notice the face count and the Verts | Face counts going down as you reduce the ratio. You will also notice that your mesh begins to become deformed the further down you take the number.”
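In terms of the illustrative bpy sketch above (again the examiner’s hypothetical illustration, not code from the cited references), the claimed sub-30% reduction simply corresponds to setting that same configuration parameter below 0.30, e.g.:
import bpy

obj = bpy.context.active_object
mod = obj.modifiers.new(name='Decimate', type='DECIMATE')
mod.ratio = 0.25   # below 0.30, i.e. the decimated mesh keeps under 30% of the original faces
bpy.ops.object.modifier_apply(modifier=mod.name)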
Regarding Claim 19.
Rejected under a similar rationale as claims 13-14 above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Hu, Yuchao, and Wei Meng. "ROSUnitySim: Development and experimentation of a real-time simulator for multi-unmanned aerial vehicle local planning." Simulation 92.10 (2016): 931-944. Abstract, and § 2 ¶ 1 noting the use of “Unity3D” (the game engine) then see § 2.1.1: “In order to improve the confidence level of the simulation system, 3D virtual environments need to be modeled as realistically as possible. There are two parts to consider when modeling objects in the simulator. The first is mesh, which determines the visual shape of the object: The more meshes used, the more accurate the depiction of the object. But meshes require computer resources, so a balance should be considered based on all requirements. For visualization purposes, the corresponding textures will be added to the surface of the mesh. Figure 3(a) shows the tree’s meshes (in blue) and the rendered results with textures in Unity3D. The other is a collider, which is used for raycasting, collision detection and other physic simulations. Just like the mesh, the detail of the collider should be modeled according to requirements. Reducing the detail level of the mesh or the collider may be an alternative choice if Unity3D takes up too much computer resource”
Ang, Jun Wei Dickson, et al. "Big data scenarios simulator for deep learning algorithm evaluation for autonomous vehicle." GLOBECOM 2020 - 2020 IEEE Global Communications Conference. IEEE, 2020. Abstract and §§ I-II, including figs. 2-8.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID A. HOPKINS whose telephone number is (571) 272-0537. The examiner can normally be reached Monday to Friday, 10 AM to 7 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ryan Pitaro can be reached at (571) 272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/David A Hopkins/ Primary Examiner, Art Unit 2188