Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is in response to the application filed on 4/17/2024. Claims 1-11 are pending.
Information Disclosure Statement
The information disclosure statement submitted on 4/17/2024 has been considered by the Examiner and made of record in the application.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis – Step 1
Claims 1-5 are directed to a method of generating a destination for an emergency response of an autonomous vehicle (i.e., a process). Therefore, claims 1-5 are within at least one of the four statutory categories.
Claims 6-10 are directed to a system for generating a destination for an emergency response of an autonomous vehicle (i.e., a machine). Therefore, claims 6-10 are within at least one of the four statutory categories.
Claim 11 is directed to an autonomous driving system for controlling an autonomous vehicle (i.e., a machine). Therefore, claim 11 is within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I
Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Independent claim 1 includes limitations that recite an abstract idea (mental process) and will be used as a representative claim for the remainder of the 101 rejections. Claims 1, 6, and 11 recite:
generating forward perception information based on data collected from a sensor mounted on the autonomous vehicle; setting a destination generation area based on the forward perception information; generating a candidate destination in the destination generation area based on the forward perception information and a current heading range of the autonomous vehicle; and when the candidate destination is provided as a plurality of candidate destinations, selecting one destination from among the candidate destinations based on a maximum vertical movement distance of each of the candidate destinations.
The examiner submits that the foregoing bolded limitation(s) constitute a “mental process” because, under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind. For example, “…generating forward perception information…” in the context of this claim encompasses a person observing collected data and then writing down, or simply registering, what was seen, whether with pen and paper or purely mentally. As further examples, “…setting a destination generation area…”, “…generating a candidate destination in the destination generation area…”, and “…selecting one destination from among the candidate destinations…” in the context of this claim encompass a person designating destination points on a map, with pen and paper or purely mentally. Lastly, “when the candidate destination is provided as a plurality of candidate destinations” merely continues the mental process of designating a destination point; instead of being performed for a single route, it is performed for a plurality of routes, and thus remains something that can be done with pen and paper or in the mind. Essentially, the claimed system is just a way of updating a vehicle route. Accordingly, the claim recites at least one abstract idea.
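Purely as an illustration of the examiner's characterization of the recited steps, the following is a minimal Python sketch; the names, data structure, and selection rule are hypothetical, drawn only from the claim language quoted above, and do not represent the applicant's disclosed implementation.

    # Hypothetical sketch of the recited steps (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        x: float        # lateral offset from the vehicle centerline (m)
        y: float        # forward ("vertical") movement distance to the point (m)
        heading: float  # heading required to reach the point (deg)

    def generate_candidates(area_points, heading_min, heading_max):
        # Keep only points in the destination generation area whose required
        # heading falls within the vehicle's current heading range.
        return [p for p in area_points if heading_min <= p.heading <= heading_max]

    def select_destination(candidates):
        # When a plurality of candidates exists, pick the one with the maximum
        # vertical movement distance, per the final claim limitation.
        return max(candidates, key=lambda c: c.y)

    points = [Candidate(-1.0, 12.0, -5.0), Candidate(0.5, 20.0, 2.0), Candidate(2.0, 8.0, 15.0)]
    print(select_destination(generate_candidates(points, -10.0, 10.0)))
    # -> Candidate(x=0.5, y=20.0, heading=2.0)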
101 Analysis – Step 2A, Prong II
Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements that merely use a computer to implement an abstract idea, add insignificant extra-solution activity, or generally link use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):
generating forward perception information based on data collected from a sensor mounted on the autonomous vehicle; setting a destination generation area based on the forward perception information; generating a candidate destination in the destination generation area based on the forward perception information and a current heading range of the autonomous vehicle; and when the candidate destination is provided as a plurality of candidate destinations, selecting one destination from among the candidate destinations based on a maximum vertical movement distance of each of the candidate destinations.
For the following reason(s), the examiner submits that the above identified additional limitations do not integrate the above-noted abstract idea into a practical application.
Regarding the additional limitations “based on data collected from a sensor mounted on the autonomous vehicle”, “based on the forward perception information”, “based on the forward perception information and a current heading range of the autonomous vehicle”, and “based on a maximum vertical movement distance of each of the candidate destinations”, the examiner submits that these limitations are insignificant extra-solution activities that merely use a computer (the “system”) to perform the mental process. Each of the above-cited limitations simply further defines the mental process, explaining at what point data is collected relative to the generating and selecting steps, and is recited at a high level of generality (i.e., as a general means of gathering vehicle data for use in the updating step), amounting to mere data gathering, which is a form of insignificant extra-solution activity. The system is recited at a high level of generality and merely automates the route generation step.
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond
generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis – Step 2B
Regarding Step 2B of the 2019 PEG, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for reasons similar to those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above, regarding the additional limitations “based on data collected from a sensor mounted on the autonomous vehicle”, “based on the forward perception information”, “based on the forward perception information and a current heading range of the autonomous vehicle”, and “based on a maximum vertical movement distance of each of the candidate destinations”, the examiner submits that these limitations are insignificant extra-solution activities.
Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine whether it is more than what is well-understood, routine, conventional activity in the field. The additional “…based on…” limitations are well-understood, routine, and conventional activities because the specification recites that the components are all conventional computer components mounted on the vehicle, and the specification does not provide any indication that the system is anything other than a conventional computer within a vehicle. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner.
Dependent claims 2-5 and 7-10 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. Claims 2 and 7 recite “…excluding lane information from the forward perception information”, which fails under Step 2A Prong One as a mental process: the method and system, respectively, are merely determining which data points should be included in the dataset, something a human mind is capable of doing with pen and paper, and so this does not render claims 2 and 7 patent eligible. Claims 3 and 8 recite “…perceiving an object…”, which fails under Step 2A Prong One as a mental process: the method and system, respectively, are merely using the sense of perception, something a human mind is capable of doing, and so this does not render claims 3 and 8 patent eligible. Claims 4 and 9 recite “…generating of the candidate destination…”, which fails under Step 2A Prong One as a mental process: the method and system, respectively, are merely creating a map and acting upon a direction to be taken via the created map, something a human mind is capable of doing with pen and paper, and so this does not render claims 4 and 9 patent eligible. Claims 5 and 10 recite “…selecting of the destination…”, which fails under Step 2A Prong One as a mental process: the method and system, respectively, are merely making mental judgments about taking a route, something a human mind is capable of doing with pen and paper, and so this does not render claims 5 and 10 patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 4, 6, 8, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Kumano et al. (US 11,667,281 B2) in view of Floor (US 2023/0339505 A1).
Regarding Claim 1, Kumano teaches A method of generating a destination for an emergency response of an autonomous vehicle, (Pg. 1 – Abstract – “A vehicle control method includes recognizing an object, generating a target trajectory of a vehicle, and automatically controlling driving of the vehicle on the basis of the target trajectory, calculating a region between a first virtual line, which passes through a reference point using the vehicle as a reference and a first point present in the vicinity of an outer edge of the object, and a second virtual line, which passes through the reference point and a second point present in the vicinity of the outer edge of the object, as a region through which the vehicle should avoid to travel” (equates to A method of generating a destination for an emergency response of an autonomous vehicle, as the quote shows the vehicle detecting an object, such that an emergency is identified, wherein a route is generated to avoid said emergency via a trajectory within which a destination is contained)) the method comprising: generating forward perception information based on data collected from a sensor mounted on the autonomous vehicle; (Pg. 19 – Col. 1 – lines 25-27 – “For example, when a side in front of the host vehicle M is imaged, the camera 10 is attached to an upper section of a front windshield” & See Also Pg. 19 – Col. 4 – lines 53-57 – “The object recognition device 16 recognizes a position, a type, a speed, or the like, of the object by performing sensor fusion processing with respect to the detection result by some or all of the camera 10, the radar device 12, and the LIDAR 14.” (equates to the method comprising: generating forward perception information based on data collected from a sensor mounted on the autonomous vehicle, as the first quote shows the front side, or forward direction, of the vehicle being sensed via a camera, and the second quote shows the forward perception information being generated via a sensor fusion result that includes the camera data)) setting a destination generation area based on the forward perception information; (Pg. 7 – Fig. 8 & See Also Pg. 8 – Fig. 9 & See Also Pg. 9 – Fig. 10 & See Also Pg. 21 – Col. 8 – lines 40-43 – “The risk region calculating part 144 calculates a risk region potentially distributed or present around the object recognized by the recognition part 130 (hereinafter, referred to as a risk region RA).” & See Also Pg. 21 – Col. 7 – lines 2-4 – “input from the camera 10, the radar device 12, and the LIDAR 14 via the object recognition device 16. The object recognized by the recognition part 130 includes,” & See Also Pg. 23 – Col. 12 – lines 25-27 – “trajectory generating part 146 inputs a vector or a tensor representing the risk region RA to each of the plurality of DNN models MDL” & See Also Pg. 23 – Col. 12 – lines 36-37 – “is a view showing an example of the target trajectory TR output from a certain DNN model MDL1” (equates to setting a destination generation area based on the forward perception information, as the figures and the combination of quotes show how the previously mapped forward perception information is used to generate a risk area; by inputting the risk area into a deep neural network, the trajectory needed for the vehicle to mitigate any detected obstacle can be extrapolated, and thus, as seen in Fig. 8, a trajectory and a destination area are generated to move the vehicle through the risk area)) generating a candidate destination in the destination generation area based on the forward perception (Pg. 7 – Fig. 8 & See Also Pg. 8 – Fig. 9 & See Also Pg. 9 – Fig. 10 & See Also Pg. 21 – Col. 8 – lines 40-43 – “The risk region calculating part 144 calculates a risk region potentially distributed or present around the object recognized by the recognition part 130 (hereinafter, referred to as a risk region RA).” & See Also Pg. 21 – Col. 7 – lines 2-4 – “input from the camera 10, the radar device 12, and the LIDAR 14 via the object recognition device 16. The object recognized by the recognition part 130 includes,” & See Also Pg. 23 – Col. 12 – lines 25-27 – “trajectory generating part 146 inputs a vector or a tensor representing the risk region RA to each of the plurality of DNN models MDL” & See Also Pg. 23 – Col. 12 – lines 36-37 – “is a view showing an example of the target trajectory TR output from a certain DNN model MDL1” (equates to generating a candidate destination in the destination generation area based on the forward perception information and a current heading range of the autonomous vehicle, as Fig. 10 and the last quote show how a trajectory is generated via a risk area assessment, and thus a candidate destination is generated based on the forward perception)) and when the candidate destination is provided as a plurality of candidate destinations, (Pg. 13 – Fig. 14 & See Also Pg. 25 – Col. 16 – lines 1-3 – “That is, as shown, the total four target trajectories TR referred to as TR1, TR2, TR3 and TR4 are generated.” (equates to and when the candidate destination is provided as a plurality of candidate destinations, as the quote and figure show trajectories being generated that have different candidate destinations)) selecting one destination from among the candidate destinations based on a maximum vertical movement distance of each of the candidate destinations. (Pg. 4 – Fig. 3 & See Also Pg. 6 – Fig. 7 & See Also Pg. 13 – Fig. 14 & See Also Pg. 22 – Col. 10 – lines 10-18 – “showing a variation in the risk potential p in the X direction at a certain coordinate y4. The coordinate y4 is intermediate coordinates between y1 and y2, and the preceding vehicle m1 is present at the coordinate y4. For this reason, the risk potential p is highest at the coordinates (x3, y4), the risk potential p at the coordinates (x2, y4) farther from the preceding vehicle m1 than the coordinates (x3, y4) is lower than the risk potential at the coordinates (x3, y4),” & See Also Pg. 23 – Col. 12 – lines 19-22 – “generates one or a plurality of target trajectories TR on the basis of the output result of the DNN models MDL1 to which the risk region RA is input.” & See Also Pg. 10 – Fig. 11 – S110 – “SELECT OPTIMAL TARGET TRAJECTORY FROM REMAINING TARGET TRAJECTORIES” (equates to selecting one destination from among the candidate destinations based on a maximum vertical movement distance of each of the candidate destinations, as the figures show the maximum vertical distance the vehicle would travel within a destination area, with a risk value assigned to each location; the trajectory that is then selected is based on the maximum vertical distance the vehicle travels within the destination area and its associated risk value))
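At a high level, the Kumano passages cited above describe computing a risk potential around recognized objects, generating candidate trajectories, excluding trajectories that cross high-risk regions, and selecting an optimal remaining trajectory (Fig. 11, S110). The following is a minimal Python sketch of that selection logic; the grid representation, threshold, and scoring are assumptions for illustration and are not Kumano's actual implementation.

    # Hypothetical sketch of risk-region-based trajectory selection.
    def trajectory_risk(trajectory, risk_potential):
        # Sum the risk potential p over each (x, y) cell the trajectory crosses.
        return sum(risk_potential.get(cell, 0.0) for cell in trajectory)

    def select_optimal(trajectories, risk_potential, threshold):
        # Exclude trajectories crossing any high-risk cell, then select the
        # optimal remaining trajectory, here scored by lowest total risk.
        remaining = [t for t in trajectories
                     if all(risk_potential.get(cell, 0.0) < threshold for cell in t)]
        return min(remaining, key=lambda t: trajectory_risk(t, risk_potential))

    risk = {(3, 4): 0.9, (2, 4): 0.4}   # e.g., highest near the preceding vehicle
    tr1 = [(3, 3), (3, 4)]              # crosses the high-risk cell -> excluded
    tr2 = [(2, 3), (2, 4)]
    print(select_optimal([tr1, tr2], risk, 0.8))  # -> [(2, 3), (2, 4)]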
Yet Kumano fails to teach “…information and a current heading range of the autonomous vehicle”.
Floor teaches “…information and a current heading range of the autonomous vehicle” (Pg. 24 – [0093] – “constrains a vehicle heading to within a predetermined range of headings with respect to a given point on the reference path 704”). It would have been an advantageous addition to the method disclosed by Kumano to include a current heading range of the autonomous vehicle, as this allows a heading, and specifically a range of headings, to be considered in generating a destination, thus ensuring that an actual destination can be reached by the vehicle based on the prescribed range.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to include a current heading range of the autonomous vehicle, as this ensures the destination being generated is consistent with a comfortable steering maneuver, as prescribed by the heading range, providing a feeling of safety for the driver and passengers of the vehicle.
Regarding Claim 3, Kumano-Floor teaches (Kumano teaches the following limitations:) The method of claim 1, further comprising: perceiving an object in front of the autonomous vehicle based on the forward perception information (Pg. 21 – Col. 7 – lines 2-6 – “input from the camera 10, the radar device 12, and the LIDAR 14 via the object recognition device 16. The object recognized by the recognition part 130 includes, for example, a bicycle, a motorcycle, a four-wheeled automobile, a pedestrian” & See Also Pg. 19 – Col. 1 – lines 25-27 – “For example, when a side in front of the host vehicle M is imaged, the camera 10 is attached to an upper section of a front windshield” (equates to perceiving an object in front of the autonomous vehicle based on the forward perception information, as the recognition part 130 performs the perceiving from the forward perception data of the camera and the sensor fusion result)) and determining a location and movement direction of the object; (Pg. 19 – Col. 4 – lines 37-45 – “The radar device 12 radiates radio waves such as millimeter waves or the like to surroundings of the host vehicle M, and simultaneously, detects the radio waves (reflected waves) reflected by the object to detect a position (a distance and an azimuth) of at least the object. The radar device 12 is attached to an arbitrary place of the host vehicle M. The radar device 12 may detect a position and a speed of the object using a frequency modulated continuous wave (FM-CW) method” (equates to and determining a location and movement direction of the object, as the radar is seen to detect the location and direction of the object, as well as its speed; thus a movement direction is attained via the detected speed and direction)) and setting a risk area based on the location and movement direction of the object and excluding the risk area from the destination generation area. (Pg. 7 – Fig. 8 & See Also Pg. 8 – Fig. 9 & See Also Pg. 9 – Fig. 10 & See Also Pg. 21 – Col. 8 – lines 40-43 – “The risk region calculating part 144 calculates a risk region potentially distributed or present around the object recognized by the recognition part 130 (hereinafter, referred to as a risk region RA).” & See Also Pg. 21 – Col. 7 – lines 2-4 – “input from the camera 10, the radar device 12, and the LIDAR 14 via the object recognition device 16. The object recognized by the recognition part 130 includes,” & See Also Pg. 23 – Col. 12 – lines 25-27 – “trajectory generating part 146 inputs a vector or a tensor representing the risk region RA to each of the plurality of DNN models MDL” & See Also Pg. 23 – Col. 12 – lines 36-37 – “is a view showing an example of the target trajectory TR output from a certain DNN model MDL1” (equates to and setting a risk area based on the location and movement direction of the object and excluding the risk area from the destination generation area, as the first quote shows the incorporation of the radar data, which is previously mapped to the gathered location and movement direction information; this information is used for a risk assessment for the vehicle, and a trajectory is generated that minimizes the risk, thus excluding the risk area from the trajectory calculation))
Regarding Claim 4, Kumano-Floor teaches (Kumano teaches the following limitations:) The method of claim 3, wherein, in the generating of the candidate destination, a point that is reached only after passing through the risk area is excluded from the candidate destination. (Pg. 13 – Fig. 14 & See Also Pg. 25 – Col. 16 – lines 35-41 – “FIG. 14 is a view showing an example of the excluded target trajectory TR. In the example shown, in the four target trajectories TR, TR1 is present inside the traveling avoidance region AA1 and TR4 is present inside the traveling avoidance region AA3. In this case, the target trajectory generating part 146 excludes the target trajectory TR1 and TR4.” (equates to in the generating of the candidate destination, a point that is reached only after passing through the risk area is excluded from the candidate destination, as the quote shows trajectories TR1 and TR4 passing through a risk zone and being excluded because the host vehicle would strike the surrounding vehicles if those trajectories were taken; thus the candidate destination that would be reached via either trajectory is cancelled based on the deemed risk area))
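A minimal Python sketch of the exclusion described above follows; the representation of each candidate as a point plus the path cells traversed to reach it is an assumption for illustration only.

    # Hypothetical sketch: exclude any candidate point that is reached only
    # after passing through the risk area.
    def exclude_risky_candidates(candidates, risk_area):
        # candidates: list of (point, path_cells) pairs; risk_area: set of cells.
        return [point for point, path_cells in candidates
                if not any(cell in risk_area for cell in path_cells)]

    risk_area = {(1, 2)}
    candidates = [((1, 3), [(1, 1), (1, 2), (1, 3)]),  # path crosses the risk area
                  ((0, 3), [(0, 1), (0, 2), (0, 3)])]
    print(exclude_risky_candidates(candidates, risk_area))  # -> [(0, 3)]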
Regarding Claim 6, Kumano teaches A system for generating a destination for an emergency response of an autonomous vehicle, (Pg. 1 – Title – “VEHICLE CONTROL METHOD, VEHICLE CONTROL DEVICE, AND STORAGE MEDIUM” & See Also Pg. 1 – Abstract – “A vehicle control method includes recognizing an object, generating a target trajectory of a vehicle, and automatically controlling driving of the vehicle on the basis of the target trajectory, calculating a region between a first virtual line, which passes through a reference point using the vehicle as a reference and a first point present in the vicinity of an outer edge of the object, and a second virtual line, which passes through the reference point and a second point present in the vicinity of the outer edge of the object, as a region through which the vehicle should avoid to travel” (equates to A system for generating a destination for an emergency response of an autonomous vehicle, as the quote shows the vehicle detecting an object, such that an emergency is identified, wherein a route is generated to avoid said emergency via a trajectory within which a destination is contained; the title shows the reference's ability to act as a system)) the system comprising: a memory configured to store computer-readable instructions; (Pg. 20 – Col. 6 – lines 32-34 – “The program may be previously stored in a storage device (a storage device including a non-transient storage medium) such as an HDD, a flash memory” (equates to the system comprising: a memory configured to store computer-readable instructions, as the quote shows the program cited throughout this Office Action, which can be contained in a memory)) and at least one processor configured to execute the instructions, (Pg. 20 – Col. 6 – lines 23-26 – “The first controller 120 and the second controller 160 are realized by executing a program (software) using a hardware processor such as a central processing unit (CPU),”) wherein the at least one processor is configured to execute the instructions to: generate forward perception information based on data collected from a sensor mounted on the autonomous vehicle; (Pg. 19 – Col. 1 – lines 25-27 – “For example, when a side in front of the host vehicle M is imaged, the camera 10 is attached to an upper section of a front windshield” & See Also Pg. 19 – Col. 4 – lines 53-57 – “The object recognition device 16 recognizes a position, a type, a speed, or the like, of the object by performing sensor fusion processing with respect to the detection result by some or all of the camera 10, the radar device 12, and the LIDAR 14.” (equates to generate forward perception information based on data collected from a sensor mounted on the autonomous vehicle, as the first quote shows the front side, or forward direction, of the vehicle being sensed via a camera, and the second quote shows the forward perception information being generated via a sensor fusion result that includes the camera data)) set a destination generation area based on the forward perception information; (Pg. 7 – Fig. 8 & See Also Pg. 8 – Fig. 9 & See Also Pg. 9 – Fig. 10 & See Also Pg. 21 – Col. 8 – lines 40-43 – “The risk region calculating part 144 calculates a risk region potentially distributed or present around the object recognized by the recognition part 130 (hereinafter, referred to as a risk region RA).” & See Also Pg. 21 – Col. 7 – lines 2-4 – “input from the camera 10, the radar device 12, and the LIDAR 14 via the object recognition device 16. The object recognized by the recognition part 130 includes,” & See Also Pg. 23 – Col. 12 – lines 25-27 – “trajectory generating part 146 inputs a vector or a tensor representing the risk region RA to each of the plurality of DNN models MDL” & See Also Pg. 23 – Col. 12 – lines 36-37 – “is a view showing an example of the target trajectory TR output from a certain DNN model MDL1” (equates to set a destination generation area based on the forward perception information, as the figures and the combination of quotes show how the previously mapped forward perception information is used to generate a risk area; by inputting the risk area into a deep neural network, the trajectory needed for the vehicle to mitigate any detected obstacle can be extrapolated, and thus, as seen in Fig. 8, a trajectory and a destination area are generated to move the vehicle through the risk area)) generate a candidate destination in the destination generation area based on the forward perception information (Pg. 7 – Fig. 8 & See Also Pg. 8 – Fig. 9 & See Also Pg. 9 – Fig. 10 & See Also Pg. 21 – Col. 8 – lines 40-43 – “The risk region calculating part 144 calculates a risk region potentially distributed or present around the object recognized by the recognition part 130 (hereinafter, referred to as a risk region RA).” & See Also Pg. 21 – Col. 7 – lines 2-4 – “input from the camera 10, the radar device 12, and the LIDAR 14 via the object recognition device 16. The object recognized by the recognition part 130 includes,” & See Also Pg. 23 – Col. 12 – lines 25-27 – “trajectory generating part 146 inputs a vector or a tensor representing the risk region RA to each of the plurality of DNN models MDL” & See Also Pg. 23 – Col. 12 – lines 36-37 – “is a view showing an example of the target trajectory TR output from a certain DNN model MDL1” (equates to generate a candidate destination in the destination generation area based on the forward perception information and a current heading range of the autonomous vehicle, as Fig. 10 and the last quote show how a trajectory is generated via a risk area assessment, and thus a candidate destination is generated based on the forward perception)) and when the candidate destination is provided as a plurality of candidate destinations, (Pg. 13 – Fig. 14 & See Also Pg. 25 – Col. 16 – lines 1-3 – “That is, as shown, the total four target trajectories TR referred to as TR1, TR2, TR3 and TR4 are generated.” (equates to and when the candidate destination is provided as a plurality of candidate destinations, as the quote and figure show trajectories being generated that have different candidate destinations)) select one destination from among the candidate destinations based on a maximum vertical movement distance of each of the candidate destinations. (Pg. 4 – Fig. 3 & See Also Pg. 6 – Fig. 7 & See Also Pg. 13 – Fig. 14 & See Also Pg. 22 – Col. 10 – lines 10-18 – “showing a variation in the risk potential p in the X direction at a certain coordinate y4. The coordinate y4 is intermediate coordinates between y1 and y2, and the preceding vehicle m1 is present at the coordinate y4. For this reason, the risk potential p is highest at the coordinates (x3, y4), the risk potential p at the coordinates (x2, y4) farther from the preceding vehicle m1 than the coordinates (x3, y4) is lower than the risk potential at the coordinates (x3, y4),” & See Also Pg. 23 – Col. 12 – lines 19-22 – “generates one or a plurality of target trajectories TR on the basis of the output result of the DNN models MDL1 to which the risk region RA is input.” & See Also Pg. 10 – Fig. 11 – S110 – “SELECT OPTIMAL TARGET TRAJECTORY FROM REMAINING TARGET TRAJECTORIES” (equates to select one destination from among the candidate destinations based on a maximum vertical movement distance of each of the candidate destinations, as the figures show the maximum vertical distance the vehicle would travel within a destination area, with a risk value assigned to each location; the trajectory that is then selected is based on the maximum vertical distance the vehicle travels within the destination area and its associated risk value))
Yet Kumano fails to teach “…information and a current heading range of the autonomous vehicle”.
Floor teaches “…information and a current heading range of the autonomous vehicle” (Pg. 24 – [0093] – “constrains a vehicle heading to within a predetermined range of headings with respect to a given point on the reference path 704”). It would have been an advantageous addition to the system disclosed by Kumano to include a current heading range of the autonomous vehicle, as this allows a heading, and specifically a range of headings, to be considered in generating a destination, thus ensuring that an actual destination can be reached by the vehicle based on the prescribed range.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to include a current heading range of the autonomous vehicle, as this ensures the destination being generated is consistent with a comfortable steering maneuver, as prescribed by the heading range, providing a feeling of safety for the driver and passengers of the vehicle.
Regarding Claim 8, Kumano-Floor teaches (Kumano discloses the following limitations:) The system of claim 6, wherein the at least one processor perceives an object in front of the autonomous vehicle based on the forward perception information, (Pg. 27 – Col. 19 – lines 22-24 – “A vehicle control device is configured to include… at least one processor…” & See Also Pg. 21 – Col. 7 – lines 2-6 – “input from the camera 10, the radar device 12, and the LIDAR 14 via the object recognition device 16. The object recognized by the recognition part 130 includes, for example, a bicycle, a motorcycle, a four-wheeled automobile, a pedestrian” & See Also Pg. 19 – Col. 1 – lines 25-27 – “For example, when a side in front of the host vehicle M is imaged, the camera 10 is attached to an upper section of a front windshield” (equates to wherein the at least one processor perceives an object in front of the autonomous vehicle based on the forward perception information, as the first quote shows the device of the cited art containing a processor, in which the recognition part 130 performs the perceiving from the forward perception data of the camera and the sensor fusion result)) determines a location and movement direction of the object, (Pg. 19 – Col. 4 – lines 37-45 – “The radar device 12 radiates radio waves such as millimeter waves or the like to surroundings of the host vehicle M, and simultaneously, detects the radio waves (reflected waves) reflected by the object to detect a position (a distance and an azimuth) of at least the object. The radar device 12 is attached to an arbitrary place of the host vehicle M. The radar device 12 may detect a position and a speed of the object using a frequency modulated continuous wave (FM-CW) method” (equates to determines a location and movement direction of the object, as the radar is seen to detect the location and direction of the object, as well as its speed; thus a movement direction is attained via the detected speed and direction)) sets a risk area based on the location and movement direction of the object, and excludes the risk area from the destination generation area. (Pg. 7 – Fig. 8 & See Also Pg. 8 – Fig. 9 & See Also Pg. 9 – Fig. 10 & See Also Pg. 21 – Col. 8 – lines 40-43 – “The risk region calculating part 144 calculates a risk region potentially distributed or present around the object recognized by the recognition part 130 (hereinafter, referred to as a risk region RA).” & See Also Pg. 21 – Col. 7 – lines 2-4 – “input from the camera 10, the radar device 12, and the LIDAR 14 via the object recognition device 16. The object recognized by the recognition part 130 includes,” & See Also Pg. 23 – Col. 12 – lines 25-27 – “trajectory generating part 146 inputs a vector or a tensor representing the risk region RA to each of the plurality of DNN models MDL” & See Also Pg. 23 – Col. 12 – lines 36-37 – “is a view showing an example of the target trajectory TR output from a certain DNN model MDL1” (equates to sets a risk area based on the location and movement direction of the object, and excludes the risk area from the destination generation area, as the first quote shows the incorporation of the radar data, which is previously mapped to the gathered location and movement direction information; this information is used for a risk assessment for the vehicle, and a trajectory is generated that minimizes the risk, thus excluding the risk area from the trajectory calculation))
Regarding Claim 9, Kumano-Floor teaches (Kumano discloses the following:) The system of claim 8, wherein the at least one processor excludes a point that is reached only after passing through the risk area from the candidate destination. (Pg. 13 – Fig. 14 & See Also Pg. 25 – Col. 16 – lines 35-41 – “FIG. 14 is a view showing an example of the excluded target trajectory TR. In the example shown, in the four target trajectories TR, TR1 is present inside the traveling avoidance region AA1 and TR4 is present inside the traveling avoidance region AA3. In this case, the target trajectory generating part 146 excludes the target trajectory TR1 and TR4.” (equates to wherein the at least one processor excludes a point that is reached only after passing through the risk area from the candidate destination, as the quote shows trajectories TR1 and TR4 passing through a risk zone and being excluded because the host vehicle would strike the surrounding vehicles if those trajectories were taken; thus the candidate destination that would be reached via either trajectory is cancelled based on the deemed risk area))
Claims 2 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Kumano-Floor, as previously mapped, and in further view of Park (KR102841665B).
Regarding Claim 2, Kumano-Floor teaches (Kumano teaches the following limitations:) The method of claim 1, from the forward perception information. (Pg. 19 – Col. 1 – lines 25-27 – “For example, when a side in front of the host vehicle M is imaged, the camera 10 is attached to an upper section of a front windshield” & See Also Pg. 19 – Col. 4 – lines 53-57 – “The object recognition device 16 recognizes a position, a type, a speed, or the like, of the object by performing sensor fusion processing with respect to the detection result by some or all of the camera 10, the radar device 12, and the LIDAR 14.”)
Yet Kumano-Floor fails to teach further comprising excluding lane information.
Park teaches further comprising excluding lane information (Pg. 20 – [0152] – “The topology data may be understood as data about road information from which information about a lane is excluded”). It would have been an advantageous addition to the method disclosed by Kumano-Floor to include excluding lane information, as this allows the incoming data to rely solely upon the detection of objects, without concern for the path the vehicle must take with respect to lane markers, since the risk is avoided simply by detecting and maneuvering around it based on the decided destination.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to include excluding lane information, as this ensures the vehicle is concerned only with traveling around a detected risk and not with driving within a lane to do so, thus allowing more options to avoid a designated risk area.
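To illustrate the kind of filtering implied by Park's topology data, the following is a minimal Python sketch assuming perception output keyed by feature type; the key names are hypothetical and appear in neither reference.

    # Hypothetical sketch: drop lane-related entries from the forward
    # perception information so planning relies on object detections only.
    def exclude_lane_information(perception):
        return {k: v for k, v in perception.items() if not k.startswith("lane")}

    perception = {"objects": ["vehicle", "pedestrian"],
                  "lane_markings": [0.1, 0.2],
                  "lane_topology": ["merge_left"]}
    print(exclude_lane_information(perception))
    # -> {'objects': ['vehicle', 'pedestrian']}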
Regarding Claim 7, Kumano-Floor teaches (Kumano teaches the following limitations:) The system of claim 6, wherein the at least one processor (Pg. 20 – Col. 6 – lines 24-25 – “…are realized by executing a program (software) using a hardware processor…”) from the forward perception information. (Pg. 19 – Col. 1 – lines 25-27 – “For example, when a side in front of the host vehicle M is imaged, the camera 10 is attached to an upper section of a front windshield” & See Also Pg. 19 – Col. 4 – lines 53-57 – “The object recognition device 16 recognizes a position, a type, a speed, or the like, of the object by performing sensor fusion processing with respect to the detection result by some or all of the camera 10, the radar device 12, and the LIDAR 14.”)
Yet Kumano-Floor fails to teach excludes lane information.
Park teaches excludes lane information (Pg. 20 – [0152] – “The topology data may be understood as data about road information from which information about a lane is excluded”). It would have been an advantageous addition to the system disclosed by Kumano-Floor to include excludes lane information, as this allows the incoming data to rely solely upon the detection of objects, without concern for the path the vehicle must take with respect to lane markers, since the risk is avoided simply by detecting and maneuvering around it based on the decided destination.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to include excludes lane information, as this ensures the vehicle is concerned only with traveling around a detected risk and not with driving within a lane to do so, thus allowing more options to avoid a designated risk area.
Claims 5 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Kumano-Floor, as previously mapped, and in further view of SHARMA BANJADE (US 2022/0388505 A1).
Regarding Claim 5, Kumano-Floor teaches (Kumano teaches the following limitations:) The method of claim 1, wherein, in the selecting of the destination, (Pg. 10 – Fig. 11 – S110 – “SELECT OPTIMAL TARGET TRAJECTORY FROM REMAINING TARGET TRAJECTORIES” & See Also Pg. 25 – Col. 16 – lines 49-50 – “select the target trajectory TR with a higher evaluation as an optimal target trajectory TR” (equates to wherein, in the selecting of the destination, as the quote shows the trajectory being selected, wherein a trajectory has a destination based on the path it specifies)) when the candidate destination is provided as the plurality of candidate destinations, (Pg. 13 – Fig. 14 & See Also Pg. 25 – Col. 16 – lines 1-3 – “That is, as shown, the total four target trajectories TR referred to as TR1, TR2, TR3 and TR4 are generated.” (equates to when the candidate destination is provided as the plurality of candidate destinations, as the quote and figure show trajectories being generated that have different candidate destinations)) one destination is selected from among the candidate destinations (Pg. 10 – Fig. 11 – S110 – “SELECT OPTIMAL TARGET TRAJECTORY FROM REMAINING TARGET TRAJECTORIES” & See Also Pg. 25 – Col. 16 – lines 49-50 – “select the target trajectory TR with a higher evaluation as an optimal target trajectory TR”).
Yet Kumano-Floor fails to teach one destination is selected from among the candidate destinations based on a heading value and the maximum vertical movement distance maintained to reach each of the candidate destinations.
SHARMA BANJADE teaches based on a heading value and the maximum vertical movement distance maintained to reach each of the candidate destinations. (Pg. 14 – Fig. 12 & See Also Pg. 67 – [0362] – “For example, the use of the device (e.g., VRU device 117) navigation system, which provides assistance to the user (e.g., VRU 116) to select the best trajectory for reaching its planned destination” & See Also Pg. 41 – [0060] – “The LoD is the estimated distance of the VRU 116 from the ego-vehicle and VRU 116 along the direction of heading as shown by scenario 400a. The MSLoD is the minimum longitudinal separation of the VRU 116 from the ego-V-ITS-S 110 and VRU 116 for considered to be safe.” & See Also Pg. 42 – [0077] – “The dead reckoning module 822 is configurable or operable to determine or estimate the VRU 116 position, location, speed, heading/angular-direction (approach…” (equates to based on a heading value and the maximum vertical movement distance maintained to reach each of the candidate destinations, as the first quote shows the unit 116 being able to calculate a trajectory of the vehicle, the second quote shows the separation between the vehicle and the risk-assessed area being defined by a longitudinal distance, such that a maximum vertical movement distance is maintained, and the last quote shows how the heading of the vehicle is calculated; thus both the heading and the vertical distance calculated by the unit 116 can be used in the trajectory calculation, or destination determination, of the same unit)) It would have been an advantageous addition to the system disclosed by Kumano-Floor to include based on a heading value and the maximum vertical movement distance maintained to reach each of the candidate destinations, as this limitation allows additional vehicle metrics between the vehicle and the risk-deemed area to be considered, allowing a safer destination to be selected because it is based on real-time data of vehicular travel.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to include based on a heading value and the maximum vertical movement distance maintained to reach each of the candidate destinations, as more real-time vehicle data allows a more data-driven approach to risk mitigation.
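A minimal Python sketch of such a combined criterion follows, assuming a simple weighted trade-off between the maximum vertical movement distance and deviation from a maintained heading value; the weight and scoring are illustrative only and appear in neither reference.

    # Hypothetical sketch: select among candidates using both the maximum
    # vertical movement distance and the heading value maintained to reach each.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        y: float        # maximum vertical (longitudinal) movement distance (m)
        heading: float  # heading value maintained to reach the point (deg)

    def select_destination(candidates, current_heading, w=0.5):
        # Reward vertical movement distance; penalize heading deviation.
        return max(candidates, key=lambda c: c.y - w * abs(c.heading - current_heading))

    print(select_destination([Candidate(20.0, 8.0), Candidate(18.0, 1.0)], 0.0))
    # -> Candidate(y=18.0, heading=1.0): score 17.5 beats 16.0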
Regarding Claim 10, Kumano-Floor teaches (Kumano teaches the following limitations:) The system of claim 6, wherein, when there are the plurality of candidate destinations, (Pg. 13 – Fig. 14 & See Also Pg. 25 – Col. 16 – lines 1-3 – “That is, as shown, the total four target trajectories TR referred to as TR1, TR2, TR3 and TR4 are generated.” (equates to when there are the plurality of candidate destinations, as the quote and figure show trajectories being generated that have different candidate destinations)) the at least one processor selects one destination from among the candidate destinations (Pg. 10 – Fig. 11 – S110 – “SELECT OPTIMAL TARGET TRAJECTORY FROM REMAINING TARGET TRAJECTORIES” & See Also Pg. 25 – Col. 16 – lines 49-50 – “select the target trajectory TR with a higher evaluation as an optimal target trajectory TR”).
Yet Kumano-Floor fails to teach based on the maximum vertical movement distance and a heading value maintained to reach each of the candidate destinations.
SHARMA BANJADE teaches based on the maximum vertical movement distance and a heading value maintained to reach each of the candidate destinations. (Pg. 14 – Fig. 12 & See Also Pg. 67 – [0362] – “For example, the use of the device (e.g., VRU device 117) navigation system, which provides assistance to the user (e.g., VRU 116) to select the best trajectory for reaching its planned destination” & See Also Pg. 41 – [0060] – “The LoD is the estimated distance of the VRU 116 from the ego-vehicle and VRU 116 along the direction of heading as shown by scenario 400a. The MSLoD is the minimum longitudinal separation of the VRU 116 from the ego-V-ITS-S 110 and VRU 116 for considered to be safe.” & See Also Pg. 42 – [0077] – “The dead reckoning module 822 is configurable or operable to determine or estimate the VRU 116 position, location, speed, heading/angular-direction (approach…” (equates to based on the maximum vertical movement distance and a heading value maintained to reach each of the candidate destinations, as the first quote shows the unit 116 being able to calculate a trajectory of the vehicle, the second quote shows the separation between the vehicle and the risk-assessed area being defined by a longitudinal distance, such that a maximum vertical movement distance is maintained, and the last quote shows how the heading of the vehicle is calculated; thus both the heading and the vertical distance calculated by the unit 116 can be used in the trajectory calculation, or destination determination, of the same unit)) It would have been an advantageous addition to the system disclosed by Kumano-Floor to include based on the maximum vertical movement distance and a heading value maintained to reach each of the candidate destinations, as this limitation allows additional vehicle metrics between the vehicle and the risk-deemed area to be considered, allowing a safer destination to be selected because it is based on real-time data of vehicular travel.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to include based on the maximum vertical movement distance and a heading value maintained to reach each of the candidate destinations, as more real-time vehicle data allows a more data-driven approach to risk mitigation.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Kumano-Floor, as previously mapped, and in further view of Fausten (DE102023200871A1).
Regarding Claim 11, Kumano teaches An autonomous driving system for controlling an autonomous vehicle, (Pg. 19 – Col. 4 – line 5 – “The vehicle system 1 includes…” & See Also Pg. 19 – Col. 3 – lines 48-49 – “The vehicle control device of the embodiment is applied to, for example, an automatic traveling vehicle” & See Also (equates to An autonomous driving sys