Detailed Office Action
Status of Claims
This Office Action is in response to Applicant’s amendments and remarks filed 11/03/2025. Applicant has amended claims 1, 6-8, and 19; cancelled claims 12-18; and newly added claims 21-23. Claims 1-11 and 19-23 are presently pending and are presented for examination.
Response to Amendment
The amendment filed 11/03/2025 has been entered. Claims 1-11 and 19-23 remain pending in the application.
Reply to Applicant’s Remarks
Applicant’s remarks filed 11/03/2025 have been fully considered and are addressed as follows:
Claim Rejections Under 35 U.S.C. 101:
Applicant’s amendments to the claims filed 11/03/2025 have not overcome the 35 U.S.C. 101 rejections previously set forth. Regarding Applicant’s argument that “sensing, by a camera, visual data with respect to a surface of the road, transmitting, by the end computing device, to the roadside computing device, a mode change request, transmitting, by the roadside computing device, a notification to multiple end computing devices in the proximity of the roadside computing device, and inputting the visual data to a machine learning (ML) model are not mental processes and cannot be carried out mentally”, the Examiner respectfully disagrees. While the limitations recite the method being carried out by a camera, an end computing device, and a roadside computing device, the act of sensing road surface data can be practically performed in the human mind. It is important to note that “Claims can recite a mental process even if they are claimed as being performed on a computer” (see at least MPEP 2106.04(a)(2)(III)(C)). Further, the acts of transmitting and inputting are merely the sending and receiving of data, which are insignificant extra-solution activities.
Therefore, because the claims recite only mental processes and insignificant extra-solution activities, there are no additional elements that can integrate the abstract idea into a practical application. Further, the claims cannot provide an improvement to the technology, as an improved abstract idea is still an abstract idea (see MPEP 2106.05(a), Section II: “However, it is important to keep in mind that an improvement in the abstract idea…is not an improvement in technology”).
Please see detailed rejection below.
Claim Rejections Under 35 U.S.C. 103:
Applicant’s arguments, see Arguments/Remarks, filed 11/03/2025, with regard to the rejections of Claims 1 and 19 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of newly found prior art reference(s).
Claim Objections
Claims 1 and 19 are objected to because of the following informalities: the limitation “the camera is installed associated with the end computing device” appears to contain a minor informality, potentially due to a missing word between “installed” and “associated”. As currently drafted, the limitation remains difficult to read and understand. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-11 and 19-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims’ subject matter eligibility will follow the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50-57 (January 7, 2019) (“2019 PEG”).
101 Analysis - With respect to Claim 1
Claims 1 and 19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis - Step 1:
Claim 1 is directed to a method, which falls within the statutory category of a process. Claim 19 is directed to a system, which falls within the statutory category of a machine. Therefore, Claims 1 and 19 are within at least one of the four statutory categories.
101 Analysis- Step 2A Prong One:
Regarding Prong One of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection.
Claim 1 recites, inter alia:
“A method of detecting road anomalies, the method comprising:
by a camera included in an edge-computing-based system, sensing visual data with respect to a surface of the road, wherein the system comprises the camera, an end computing device installed on a vehicle travelling on a road, and a roadside computing device installed on the road, the system is configured to perform road anomaly detection in either a first mode in which the end computing device identifies road anomalies or a second mode in which the roadside computing device identifies road anomalies, the camera is installed associated with the end computing device, and the visual data is a live feed captured using the camera
upon a first instruction received by the roadside computing device, determining that the system performs road anomaly detection in the first mode;
upon a first criterion being met, transmitting, by the end computing device, to the roadside computing device, a mode change request to switch the system from the first mode to the second mode;
upon a second criterion being met, transmitting, by the end computing device, to the roadside computing device, a mode change request to switch the system from the second mode to the first mode; and
transmitting, by the roadside computing device, a notification to multiple end computing devices in the proximity of the roadside computing device, wherein the notification is an anomaly-specific notification that includes at least one of a message indicating a presence of the road anomaly, an image of the road anomaly, a location of the road anomaly, or a severity of the road anomaly
wherein depending on whether the system performs road anomaly detection in the first mode or the second mode, either the end computing device or the roadside computing device identifies road anomalies by:
inputting the visual data to a machine learning (ML) model trained to detect and classify a road anomaly, wherein the road anomaly includes a pothole, longitudinal cracks, transverse cracks, or alligator cracks on the road
determining multiple features of the detected road anomaly, wherein the features include a size of the road anomaly
assessing a severity of the road anomaly based on the features”
The examiner submits that the foregoing bolded limitation(s) constitute a “mental process” because under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind.
For example, “sensing”, “detecting”, “determining”, and “assessing”, in the context of this claim, all encompass a person looking at available data and forming a simple judgment (determination, analysis, comparison, etc.) either manually or using a pen and paper. Accordingly, the claim recites at least one abstract idea. The examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the ‘basic tools of scientific and technological work’ that are open to all.’" 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 ("‘[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work’" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).
As drafted, the above claims, under their broadest reasonable interpretation, cover mental processes performed in the human mind (including an observation, evaluation, judgment, or opinion) that are merely implemented via generic computer components. Accordingly, the claims recite an abstract idea.
Step 2A Prong Two Analysis:
Regarding Prong Two of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application”.
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):
Claim 1 recites, inter alia:
“A method of detecting road anomalies, the method comprising:
by a camera included in an edge-computing-based system, sensing visual data with respect to a surface of the road, wherein the system comprises the camera, an end computing device installed on a vehicle travelling on a road, and a roadside computing device installed on the road, the system is configured to perform road anomaly detection in either a first mode in which the end computing device identifies road anomalies or a second mode in which the roadside computing device identifies road anomalies, the camera is installed associated with the end computing device, and the visual data is a live feed captured using the camera
upon a first instruction received by the roadside computing device, determining that the system performs road anomaly detection in the first mode;
upon a first criterion being met, transmitting, by the end computing device, to the roadside computing device, a mode change request to switch the system from the first mode to the second mode;
upon a second criterion being met, transmitting, by the end computing device, to the roadside computing device, a mode change request to switch the system from the second mode to the first mode; and
transmitting, by the roadside computing device, a notification to multiple end computing devices in the proximity of the roadside computing device, wherein the notification is an anomaly-specific notification that includes at least one of a message indicating a presence of the road anomaly, an image of the road anomaly, a location of the road anomaly, or a severity of the road anomaly
wherein depending on whether the system performs road anomaly detection in the first mode or the second mode, either the end computing device or the roadside computing device identifies road anomalies by:
inputting the visual data to a machine learning (ML) model trained to detect and classify a road anomaly, wherein the road anomaly includes a pothole, longitudinal cracks, transverse cracks, or alligator cracks on the road
determining multiple features of the detected road anomaly, wherein the features include a size of the road anomaly
assessing a severity of the road anomaly based on the features”
For the following reason(s), the examiner submits that the above identified additional limitations do not integrate the above-noted abstract idea into a practical application.
Regarding the additional limitations of “an end computing device…”, “a camera installed…”, and “a roadside computing device…”, these limitations merely describe how to generally “apply” the otherwise mental judgments in a generic or general-purpose vehicle control environment. See Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. at 223 (“[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention.”). The device(s) and processor(s) are recited at a high level of generality and merely automate the steps.
Regarding the additional limitations of “transmitting…” and “inputting…”, these limitations merely describe the sending and receiving of data, which is insignificant extra-solution activity. See MPEP § 2106.05(g).
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing that is not already present when looking at the elements taken individually. Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Step 2B Analysis:
The claims do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using generic computer components to perform the abstract idea amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Further, the acts of transmitting data and inputting data amount to no more than merely sending and receiving information and thus constitute insignificant extra-solution activity. The claims are not patent eligible.
Regarding dependent claims 2-11 and 20-23, no claim adds a further limitation that integrates the claimed invention into a practical application; the dependent claims merely add further mental processes, mathematical concepts, and extra-solution activities and are thus not patent eligible.
Therefore, Claims 1-11 and 19-23 are ineligible under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 8-11, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Higuchi et al. (US 20220053308 A1) in view of Young et al. (US 20240200974 A1), hereafter referred to as Higuchi and Young, respectively.
Regarding Claim 1, Higuchi teaches a method of detecting road anomalies (see at least Higuchi [¶ 195] The sensor data 195 includes digital data that describes images or other measurements of the physical environment such as the conditions, objects, and other vehicles present in the roadway environment. Examples of objects include pedestrians, animals, traffic signs, traffic lights, potholes, etc.),
the method comprising:
by a camera included in an edge-computing-based system, sensing visual data with respect to a surface of the road (see at least Higuchi [¶ 24, 192, 50] the perception system provide numerous benefits including, among other things, solving the variable computational ability problem by providing functionality which beneficially enables vehicles to offload responsibility for executing an environmental perception analysis to another computing entity (e.g., an edge server, another vehicle, or a vehicular micro cloud) ...the sensor set 126 may include one or more sensors that are operable to measure the physical environment outside of the ego vehicle 123. For example, the sensor set 126 may include cameras, lidar, radar, sonar and other sensors that record one or more physical characteristics of the physical environment that is proximate to the ego vehicle 123.... Examples of objects include one or of the following: other automobiles, road surfaces)
wherein the system comprises the camera, an end computing device installed on a vehicle travelling on a road, and a roadside computing device installed on the road (see at least Higuchi [¶ 192, 16, 49, 241] the sensor set 126 may include one or more sensors that are operable to measure the physical environment outside of the ego vehicle 123… the sensor set 126 may include cameras, lidar, radar, sonar and other sensors that record one or more physical characteristics of the physical environment that is proximate to the ego vehicle 123...An automated driving system includes a sufficient number of ADAS systems so that the vehicle which includes these ADAS systems is rendered autonomous by the benefit of the functionality received by the operation of the ADAS systems by a processor of the vehicle...The roadway environment may include one or more of the following example elements: an ego vehicle; N remote vehicles; an edge server; and a roadside unit....the ego vehicle 123, the remote vehicle, and the roadway device 151 are located in a roadway environment 140. The roadway environment is a portion of the real-world that includes a roadway)
the system is configured to perform road anomaly detection in either a first mode in which the end computing device identifies road anomalies or a second mode in which the roadside computing device identifies road anomalies (see at least Higuchi [¶ 156-158] Step 4: The task schedule determines where to execute the environmental perception analysis. In some embodiments, if the ego vehicle is selected at step 4, then the ego vehicle executes the environmental perception analysis. In some embodiments, if the edge server is selected at step 4, then the ego vehicle transmits the sensor data to the edge server and the edge server executes the environmental perception analysis and sends the analysis data to the ego vehicle and other vehicles in the vicinity)
the camera is installed associated with the end computing device, and the visual data is a live feed captured using the camera (see at least Higuchi [¶ 197] the sensor data 195 includes, among other things, one or more of the following: lidar data (i.e., depth information) recorded by an ego vehicle; or camera data (i.e., image information) recorded by the ego vehicle)
upon a first instruction received by the roadside computing device, determining that the system performs road anomaly detection in the first mode; upon a first criterion being met, transmitting, by the end computing device, to the roadside computing device, a mode change request to switch the system from the first mode to the second mode; upon a second criterion being met, transmitting, by the end computing device, to the roadside computing device, a mode change request to switch the system from the second mode to the first mode (see at least Higuchi [¶ 130, 156-158] Determine that the ego vehicle should offload responsibility for executing an environmental perception analysis to another computing entity (i.e., a “selected computing entity”)…this determination is made based on whether the ego vehicle has a clear sensor field of view and how the performance data and the network data compare to the conditions data…Step 4: The task schedule determines where to execute the environmental perception analysis. In some embodiments, if the ego vehicle is selected at step 4, then the ego vehicle executes the environmental perception analysis. In some embodiments, if the edge server is selected at step 4, then the ego vehicle transmits the sensor data to the edge server and the edge server executes the environmental perception analysis and sends the analysis data to the ego vehicle and other vehicles in the vicinity). The invention disclosed in Higuchi discusses a determination of whether the ego vehicle should process environmental data itself or offload it to an edge server based on criteria such as the sensor’s view condition and performance data; this is analogous to operating in two modes based on at least two types of criteria.
transmitting, by the roadside computing device, a notification to multiple end computing devices in the proximity of the roadside computing device, wherein the notification is an anomaly-specific notification that includes at least one of a message indicating a presence of the road anomaly, an image of the road anomaly, a location of the road anomaly, or a severity of the road anomaly (see at least Higuchi [¶ 158, 50, 196, 210] if the edge server is selected at step 4, then the ego vehicle transmits the sensor data to the edge server and the edge server executes the environmental perception analysis and sends the analysis data to the ego vehicle and other vehicles in the vicinity...the roadway environment 140 includes objects...The physical environment may include a roadway region, parking lot, or parking garage that is proximate to the ego vehicle 123. The sensor data may describe measurable aspects of the physical environment...the perception system 199 include code and routines that are operable, when executed by the processor 125, to cause the processor to: analyze (1) GPS data describing the geographic location of the ego vehicle 123 and (2) sensor data describing the range separating the ego vehicle 123 from an object and a heading for this range; and determine, based on this analysis, GPS data describing the location of the object).
However, Higuchi does not explicitly teach wherein depending on whether the system performs road anomaly detection in the first mode or the second mode, either the end computing device or the roadside computing device identifies road anomalies by:
inputting the visual data to a machine learning (ML) model trained to detect and classify a road anomaly, wherein the road anomaly includes a pothole, longitudinal cracks, transverse cracks, or alligator cracks on the road
determining multiple features of the detected road anomaly, wherein the features include a size of the road anomaly
assessing a severity of the road anomaly based on the features.
Young, in the same field as the endeavor, teaches wherein depending on whether the system performs road anomaly detection in the first mode or the second mode, either the end computing device or the roadside computing device identifies road anomalies by:
inputting the visual data to a machine learning (ML) model trained to detect and classify a road anomaly, wherein the road anomaly includes a pothole, longitudinal cracks, transverse cracks, or alligator cracks on the road (see at least Young [¶ 6, 30] causing the apparatus to identify, within the data, the visual indication of the road surface anomaly includes causing the apparatus to: process data from the image sensor using a machine learning model, and identify, within the data from the image sensor, the visual indication of the road surface anomaly using the machine learning model.....the collection of image data from sensors of a vehicle and to extract relevant images indicative of road surface anomalies for localization, map building, and vehicle control…The irregularities may vary by lane and even position within a lane, such as a pothole, a crack, bump (e.g., road upheave), or may be consistent within an entire road segment, such as a seam that reaches across all lanes of a road)
determining multiple features of the detected road anomaly, wherein the features include a size of the road anomaly (see at least Young [¶ 30-31, 55] The irregularities may vary by lane and even position within a lane, such as a pothole, a crack, bump…road surface anomaly identification and storage may include the creation of road surface anomaly features from images and geo-referencing them to a map…the anomaly may be relatively severe…FIG. 4 illustrates a road segment 300 with an anomaly 302 in the form of a dip in the road surface. An oil track 304 is formed past the dip in the direction of travel 310, and a second oil track 306 is also formed. This second, smaller oil track 306 is a result of vehicles bouncing after the anomaly 302. This can be caused by a larger anomaly)
assessing a severity of the road anomaly based on the features (see at least Young [¶ 66] The presence of road surface anomalies can be provided to municipalities or other entity responsible for road maintenance. The severity of the road surface anomaly can be identified through visual analysis).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Higuchi to contain a system for inputting the visual data to a machine learning (ML) model trained to detect and classify a road anomaly, determining multiple features of the detected road anomaly, wherein the features include a size of the road anomaly, and wherein the road anomaly includes a pothole, longitudinal cracks, transverse cracks, or alligator cracks on the road, and assessing a severity of the road anomaly based on the features, with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the safety of the travelling vehicle as discussed in Young (see at least Young [¶ 26] Autonomous vehicles leverage sensor information relating to roads to determine safe regions of a road to drive and to evaluate their surroundings as they traverse a road segment).
Regarding Claim 8, Higuchi in view of Young teaches all limitations of Claim 1 as set forth above. Higuchi further teaches storing road anomaly data in an electronic storage device wherein the road anomaly data includes at least one of an image of the road anomaly, a type of the road anomaly, a location of the road anomaly, or a severity of the road anomaly (see at least Higuchi [¶ 38, 223, 196] a data storage task includes a processor storing digital data in a memory of a connected vehicle…The memory 127 may include a non-transitory storage medium. The memory 127 may store instructions or data that may be executed by the processor 125…any other tangible object that is present in the real-world and proximate to the ego vehicle 123 or otherwise measurable by the sensors of the sensor set 126 or whose presence is determinable from the digital data stored on the memory 127).
Regarding Claim 9, Higuchi in view of Young teaches all limitations of Claim 1 as set forth above. However, Higuchi does not explicitly teach obtaining route data from a first end computing device of the end computing devices, wherein the route data is indicative of a route to be travelled by a first vehicle associated with the first end computing device
comparing the route data with road anomaly data stored in an electronic storage device to determine a set of road anomalies along the route of the first vehicle
transmitting information regarding the set of road anomalies to the first end computing device.
Young, in the same field as the endeavor, teaches obtaining route data from a first end computing device of the end computing devices, wherein the route data is indicative of a route to be travelled by a first vehicle associated with the first end computing device (see at least Young [¶ 40, 46] The apparatus 20 may support a mapping or navigation application so as to present maps or otherwise provide navigation or driver assistance...geographic data may be compiled (such as into a platform specification format (PSF) format) to organize and/or configure the data for performing navigation-related functions and/or services, such as route calculation, route guidance)
comparing the route data with road anomaly data stored in an electronic storage device to determine a set of road anomalies along the route of the first vehicle (see at least Young [¶ 67] Road surface anomalies in map data can optionally be used to provide route guidance to avoid such anomalies. An entire road may be avoided if feasible for a route if the road surface anomalies are sufficiently bad. Optionally, lane level guidance can be provided to a driver or an autonomous vehicle to avoid road surface anomalies. Certain vehicles may wish to avoid any road surface anomaly that is above a predetermined threshold of severity)
transmitting information regarding the set of road anomalies to the first end computing device (see at least Young [¶ 10] receiving data from an image sensor associated with a vehicle traveling along a road segment; identifying, within the data, a visual indication of a road surface anomaly, where the visual indication is a result of the road surface anomaly; and causing at least one of: an indication of the road surface anomaly to be provided to a user of the vehicle).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Higuchi to contain a system for obtaining route data from a first end computing device of the end computing devices, wherein the route data is indicative of a route to be travelled by a first vehicle associated with the first end computing device, comparing the route data with road anomaly data stored in an electronic storage device to determine a set of road anomalies along the route of the first vehicle, and transmitting information regarding the set of road anomalies to the first end computing device, with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the safety of the vehicle travelling along a particular route as discussed in Young (see at least Young [¶ 67] Road surface anomalies in map data can optionally be used to provide route guidance to avoid such anomalies. An entire road may be avoided if feasible for a route if the road surface anomalies are sufficiently bad).
Regarding Claim 10, Higuchi in view of Young teaches all limitations of Claim 1 as set forth above. However, Higuchi does not explicitly teach wherein transmitting the notification comprises: transmitting the notification based on the severity of the road anomaly matching a specified criterion.
Young, in the same field as the endeavor, teaches wherein transmitting the notification comprises: transmitting the notification based on the severity of the road anomaly matching a specified criterion (see at least Young [¶ 38, 67] the apparatus 20 may use road surface anomaly information to present information to a user via the user interface 28 such as a warning regarding a pothole or the like so the user can take any necessary actions…Road surface anomalies in map data can optionally be used to provide route guidance to avoid such anomalies. An entire road may be avoided if feasible for a route if the road surface anomalies are sufficiently bad…Certain vehicles may wish to avoid any road surface anomaly that is above a predetermined threshold of severity).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Higuchi such that transmitting the notification comprises: transmitting the notification based on the severity of the road anomaly matching a specified criterion, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the safety of the vehicle travelling along a particular route, as discussed in Young (see at least Young [¶ 67] Road surface anomalies in map data can optionally be used to provide route guidance to avoid such anomalies. An entire road may be avoided if feasible for a route if the road surface anomalies are sufficiently bad).
Regarding Claim 11, Higuchi in view of Young teaches all limitations of Claim 1 as set forth above. Higuchi further teaches wherein determining the features includes determining a depth of the road anomaly (see at least Higuchi [¶ 196-197] The sensor data may describe measurable aspects of the physical environment. In some embodiments, the physical environment is the roadway environment 140. As such, in some embodiments, the roadway environment 140 includes one or more of the following:… the objects present in the physical environment proximate to the ego vehicle 123…the sensor data 195 includes, among other things, one or more of the following: lidar data (i.e., depth information) recorded by an ego vehicle).
Regarding Claim 19, Higuchi teaches an edge-based computing system for detecting road anomalies (see at least Higuchi [¶ 195, 24, 192] The sensor data 195 includes digital data that describes images or other measurements of the physical environment such as the conditions, objects, and other vehicles present in the roadway environment. Examples of objects include pedestrians, animals, traffic signs, traffic lights, potholes, etc…the perception system provide numerous benefits including, among other things, solving the variable computational ability problem by providing functionality which beneficially enables vehicles to offload responsibility for executing an environmental perception analysis to another computing entity (e.g., an edge server, another vehicle, or a vehicular micro cloud))
comprising:
an end computing device installed on a vehicle travelling on a road; a camera associated with the end computing device; and a roadside computing device installed on the road (see at least Higuchi [¶ 192, 16, 49, 241] the sensor set 126 may include one or more sensors that are operable to measure the physical environment outside of the ego vehicle 123… the sensor set 126 may include cameras, lidar, radar, sonar and other sensors that record one or more physical characteristics of the physical environment that is proximate to the ego vehicle 123...An automated driving system includes a sufficient number of ADAS systems so that the vehicle which includes these ADAS systems is rendered autonomous by the benefit of the functionality received by the operation of the ADAS systems by a processor of the vehicle...The roadway environment may include one or more of the following example elements: an ego vehicle; N remote vehicles; an edge server; and a roadside unit....the ego vehicle 123, the remote vehicle, and the roadway device 151 are located in a roadway environment 140. The roadway environment is a portion of the real-world that includes a roadway)
wherein the system is configured to perform road anomaly detection in either a first mode in which the end computing device identifies road anomalies or a second mode in which the roadside computing device identifies road anomalies (see at least Higuchi [¶ 156-158] Step 4: The task schedule determines where to execute the environmental perception analysis. In some embodiments, if the ego vehicle is selected at step 4, then the ego vehicle executes the environmental perception analysis. In some embodiments, if the edge server is selected at step 4, then the ego vehicle transmits the sensor data to the edge server and the edge server executes the environmental perception analysis and sends the analysis data to the ego vehicle and other vehicles in the vicinity)
by performing a method of:
by the camera, sensing visual data with respect to a surface of the road, wherein the visual data is a live feed captured using the camera (see at least Higuchi [¶ 197, 50] the sensor data 195 includes, among other things, one or more of the following: lidar data (i.e., depth information) recorded by an ego vehicle; or camera data (i.e., image information) recorded by the ego vehicle… the roadway environment 140 includes objects. Examples of objects include one or more of the following: other automobiles, road surfaces)
upon a first instruction received by the roadside computing device, determining that the system performs road anomaly detection in the first mode; upon a first criterion being met, transmitting, by the end computing device, to the roadside computing device, a mode change request to switch the system from the first mode to the second mode; upon a second criterion being met, transmitting, by the end computing device, to the roadside computing device, a mode change request to switch the system from the second mode to the first mode (see at least Higuchi [¶ 130, 156-158] Determine that the ego vehicle should offload responsibility for executing an environmental perception analysis to another computing entity (i.e., a “selected computing entity”)…this determination is made based on whether the ego vehicle has a clear sensor field of view and how the performance data and the network data compare to the conditions data…Step 4: The task schedule determines where to execute the environmental perception analysis. In some embodiments, if the ego vehicle is selected at step 4, then the ego vehicle executes the environmental perception analysis. In some embodiments, if the edge server is selected at step 4, then the ego vehicle transmits the sensor data to the edge server and the edge server executes the environmental perception analysis and sends the analysis data to the ego vehicle and other vehicles in the vicinity). The invention disclosed in Higuchi discusses a determination of whether the ego vehicle should process environmental data itself or offload it to an edge server based on criteria such as the sensor’s view condition and performance data; this is analogous to operating in two modes based on at least two types of criteria.
transmitting, by the roadside computing device, a notification to multiple end computing devices in the proximity of the roadside computing device, wherein the notification is an anomaly-specific notification that includes at least one of a message indicating a presence of the road anomaly, an image of the road anomaly, a location of the road anomaly, or a severity of the road anomaly (see at least Higuchi [¶ 158, 50, 196, 210] if the edge server is selected at step 4, then the ego vehicle transmits the sensor data to the edge server and the edge server executes the environmental perception analysis and sends the analysis data to the ego vehicle and other vehicles in the vicinity...the roadway environment 140 includes objects...The physical environment may include a roadway region, parking lot, or parking garage that is proximate to the ego vehicle 123. The sensor data may describe measurable aspects of the physical environment...the perception system 199 include code and routines that are operable, when executed by the processor 125, to cause the processor to: analyze (1) GPS data describing the geographic location of the ego vehicle 123 and (2) sensor data describing the range separating the ego vehicle 123 from an object and a heading for this range; and determine, based on this analysis, GPS data describing the location of the object).
However, Higuchi does not explicitly teach wherein depending on whether the system performs road anomaly detection in the first mode or the second mode, either the end computing device or the roadside computing device identifies road anomalies by:
inputting the visual data to a machine learning (ML) model trained to detect and classify a road anomaly, wherein the road anomaly includes a pothole, longitudinal cracks, transverse cracks, or alligator cracks on the road;
determining multiple features of the detected road anomaly, wherein the features include a size of the road anomaly; and
assessing a severity of the road anomaly based on the features.
Young, in the same field of endeavor, teaches wherein depending on whether the system performs road anomaly detection in the first mode or the second mode, either the end computing device or the roadside computing device identifies road anomalies by:
inputting the visual data to a machine learning (ML) model trained to detect and classify a road anomaly, wherein the road anomaly includes a pothole, longitudinal cracks, transverse cracks, or alligator cracks on the road (see at least Young [¶ 6, 30] causing the apparatus to identify, within the data, the visual indication of the road surface anomaly includes causing the apparatus to: process data from the image sensor using a machine learning model, and identify, within the data from the image sensor, the visual indication of the road surface anomaly using the machine learning model…the collection of image data from sensors of a vehicle and to extract relevant images indicative of road surface anomalies for localization, map building, and vehicle control…The irregularities may vary by lane and even position within a lane, such as a pothole, a crack, bump (e.g., road upheave), or may be consistent within an entire road segment, such as a seam that reaches across all lanes of a road)
determining multiple features of the detected road anomaly, wherein the features include a size of the road anomaly (see at least Young [¶ 30-31, 55] The irregularities may vary by lane and even position within a lane, such as a pothole, a crack, bump…road surface anomaly identification and storage may include the creation of road surface anomaly features from images and geo-referencing them to a map…the anomaly may be relatively severe…FIG. 4 illustrates a road segment 300 with an anomaly 302 in the form of a dip in the road surface. An oil track 304 is formed past the dip in the direction of travel 310, and a second oil track 306 is also formed. This second, smaller oil track 306 is a result of vehicles bouncing after the anomaly 302. This can be caused by a larger anomaly)
assessing a severity of the road anomaly based on the features (see at least Young [¶ 66] The presence of road surface anomalies can be provided to municipalities or other entity responsible for road maintenance. The severity of the road surface anomaly can be identified through visual analysis).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Higuchi to contain a system for inputting the visual data to a machine learning (ML) model trained to detect and classify a road anomaly, wherein the road anomaly includes a pothole, longitudinal cracks, transverse cracks, or alligator cracks on the road, determining multiple features of the detected road anomaly, wherein the features include a size of the road anomaly, and assessing a severity of the road anomaly based on the features, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the safety of the travelling vehicle, as discussed in Young (see at least Young [¶ 26] Autonomous vehicles leverage sensor information relating to roads to determine safe regions of a road to drive and to evaluate their surroundings as they traverse a road segment).
Regarding Claim 21, Higuchi in view of Young teaches all limitations of Claim 1 as set forth above. Higuchi further teaches upon a second instruction received by the roadside computing device, determining that the system performs road anomaly detection in the second mode (see at least Higuchi [¶ 130, 156-158] Determine that the ego vehicle should offload responsibility for executing an environmental perception analysis to another computing entity (i.e., a “selected computing entity”)…this determination is made based on whether the ego vehicle has a clear sensor field of view and how the performance data and the network data compare to the conditions data…Step 4: The task schedule determines where to execute the environmental perception analysis. In some embodiments, if the ego vehicle is selected at step 4, then the ego vehicle executes the environmental perception analysis. In some embodiments, if the edge server is selected at step 4, then the ego vehicle transmits the sensor data to the edge server and the edge server executes the environmental perception analysis and sends the analysis data to the ego vehicle and other vehicles in the vicinity).
Regarding Claim 22, Higuchi in view of Young teaches all limitations of Claim 1 as set forth above. Higuchi further teaches wherein the first criterion includes one or more of: an amount of the visual data being larger than a predetermined threshold, a time required for the end computing device to process the visual data being longer than a predetermined threshold, or a data storage space required for the end computing device to process the visual data being larger than a data storage space available to the end computing device (see at least Higuchi [¶ 7, 131, 133] The method where the conditions are selected from a group that includes: the sensor data indicates that the sensor has an unclear field of view; the processor of the ego vehicle has a computing power that is less than the different endpoint; the sensor of the ego vehicle has a perception ability that is less than the different endpoint; and the network data indicates that offloading the responsibility would satisfy a latency threshold. The method where the steps for offloading the responsibility are scheduled by a task scheduler of the ego vehicle based on a set of criteria…an ego vehicle will offload responsibility for executing the environmental perception analysis if: the ego vehicle has a clear sensor field of view; inadequate computing power (as indicated by the relative performance data of all the candidate endpoints)…step 7 is executed by the task scheduler based on the following example decision criteria:…computation latency for the server to complete the environmental perception analysis).
Regarding Claim 23, Higuchi in view of Young teaches all limitations of Claim 1 as set forth above. While Higuchi does not explicitly teach wherein the second criterion includes: an amount of the visual data being smaller than or equal to a predetermined threshold, a time required for the end computing device to process the visual data being shorter than or equal to a predetermined threshold, and a data storage space required for the end computing device to process the visual data being smaller than or equal to a data storage space available to the end computing device, Higuchi does teach wherein the ego vehicle computes the environmental perception analysis itself if the ego-vehicle has adequate computing power and will offload it if the ego-vehicle’s computing power is inadequate (see at least Higuchi [¶ 131] an ego vehicle will offload responsibility for executing the environmental perception analysis if: the ego vehicle has a clear sensor field of view; inadequate computing power (as indicated by the relative performance data of all the candidate endpoints)).
Therefore, because an amount of data being smaller than or equal to a threshold, a time required to compute a task being shorter than a threshold, and a required storage space being smaller than a threshold are all common and well-known conditions for determining whether a task can be properly handled by a computing entity and its computing power, it would have been obvious to one of ordinary skill in the art to combine all three as a condition to be met in order to allow the weaker of two computing entities to perform the required task.
Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Higuchi et al (US 20220053308 A1) in view of Young et al (US 20240200974 A1) and Chung et al (KR 20220075999 A). Hereafter referred to as Higuchi, Young, and Chung respectively.
Regarding Claim 2, Higuchi in view of Young teaches all limitations of Claim 1 as set forth above. However, Higuchi does not explicitly teach wherein the anomaly-specific notification includes the severity, and the severity is determined based on the size of the road anomaly.
Young, in the same field of endeavor, teaches wherein the anomaly-specific notification includes the severity, and the severity is determined based on the size of the road anomaly (see at least Young [¶ 38, 67] the apparatus 20 may use road surface anomaly information to present information to a user via the user interface 28 such as a warning regarding a pothole or the like so the user can take any necessary actions (if they are driving the vehicle), or to advise the user of any upcoming movement of the vehicle due to road surface anomalies…Road surface anomalies in map data can optionally be used to provide route guidance to avoid such anomalies. An entire road may be avoided if feasible for a route if the road surface anomalies are sufficiently bad. Optionally, lane level guidance can be provided to a driver or an autonomous vehicle to avoid road surface anomalies. Certain vehicles may wish to avoid any road surface anomaly that is above a predetermined threshold of severity).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Higuchi such that the anomaly-specific notification includes the severity, and the severity is determined based on the size of the road anomaly, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the safety of the travelling vehicle, as discussed in Young (see at least Young [¶ 26] Autonomous vehicles leverage sensor information relating to roads to determine safe regions of a road to drive and to evaluate their surroundings as they traverse a road segment).
Further, Higuchi does not explicitly teach wherein the size is determined based on dimensions of a smallest bounding box that encloses a portion of a frame of the visual data having the road anomaly.
Chung, in the same field of endeavor, teaches wherein the size is determined based on dimensions of a smallest bounding box that encloses a portion of a frame of the visual data having the road anomaly (see at least Chung [English Translation pg.4 para.1] The pothole classification unit 150 detects the characteristics of the pothole in the model generated through the loss function. For example, the pothole classifying unit 150 may compare the size of the extracted candidate area and the surrounding area of the bounding box, and detect the size of the pothole through the coordinates of the bounding box. In addition, the pothole classifying unit 150 may detect the length, height, area, etc. of the pothole).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Higuchi such that the size is determined based on dimensions of a smallest bounding box that encloses a portion of a frame of the visual data having the road anomaly, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the object detection capabilities of the system by utilizing a method that is well known and used in the art.
Regarding Claim 3, Higuchi in view of Young and Chung teaches all limitations of Claim 2 as set forth above. However, Higuchi does not explicitly teach wherein the severity is determined to be of a first value when an area of the bounding box exceeds a specified threshold area and of a second value when the area does not exceed the specified threshold area, wherein the first value is indicative of a more severe road anomaly than the second value.
Chung, in the same field of endeavor, teaches wherein the severity is determined to be of a first value when an area of the bounding box exceeds a specified threshold area and of a second value when the area does not exceed the specified threshold area, wherein the first value is indicative of a more severe road anomaly than the second value (see at least Chung [English Translation pg.2 para.9, pg.4 para.4] The pre-processing unit 110 performs pre-processing on the received road damage image. Specifically, the pre-processing unit 110 performs a pre-processing process of detecting and removing objects excluding the pothole area from the road damage image…If the length, height, and area of the detected pothole are larger than the preset threshold value, the pothole classification unit 150 determines that it is a pothole and outputs the classification result as 1, otherwise it is determined that it is not a pothole and the classification result is set to 0).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Higuchi in view of Young such that the severity is determined to be of a first value when an area of the bounding box exceeds a specified threshold area and of a second value when the area does not exceed the specified threshold area, wherein the first value is indicative of a more severe road anomaly than the second value, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of filtering out smaller road anomalies that can be ignored when they are not larger than a predetermined threshold; this may in turn improve both the computing efficiency of the system and the driving of the driver in the travelling vehicle by not needing to process or send a warning for every instance of a road anomaly.
Further, Higuchi does not explicitly teach wherein the specified threshold area is determined as a percentage of an area of the frame.
However, Chung teaches using a bounding box and area thresholds of the bounding boxes to identify road anomalies (see at least Chung [English Translation pg.2 para.9, pg.4 para.4]).
Therefore, the combination of Higuchi, Young, and Chung discloses the claimed invention except for wherein the specified threshold area is determined as a percentage of an area of the frame. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included such a method, since it has been held to be within the general skill of a worker in the art to select a specified threshold area determined as a percentage of an area of the frame based on its suitability for the intended use as a matter of design choice.
Claims 4 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Higuchi et al (US 20220053308 A1) in view of Young et al (US 20240200974 A1), Kimura et al (US 20240152876 A1), Shih (US 20170330043 A1), and Uliyar et al (US 10275669 B2). Hereafter referred to as Higuchi, Young, Kimura, Shih, and Uliyar respectively.
Regarding Claim 4 and Claim 20, Higuchi in view of Young teaches all limitations of the method of Claim 1 and the system of Claim 19 as set forth above. However, Higuchi does not explicitly teach determining a count of road anomalies detected in a sequence of frames of the visual data,
wherein determining the count includes:
determining a frame span interval (FSI) based on frames per second of the camera and speed of the vehicle,
determining a specified number of frames to be skipped from multiple frames of the visual data based on the FSI, wherein the specified number of frames to be skipped when the FSI is greater than a specified threshold is lesser than the specified number of frames to be skipped when the FSI is lesser than the specified threshold,
selecting a next frame of the sequence for detecting the road anomaly based on the specified number of frames to be skipped.
Kimura, in the same field of endeavor, teaches determining a count of road anomalies detected in a sequence of frames of the visual data (see at least Kimura [¶ 40] The deterioration detection unit 23 detects road deterioration based on at least one of the image and the acceleration included in the sensor information. Here, the deterioration detection unit 23 detects, for example, a pothole as road deterioration…The deterioration detection unit 23 may detect road deterioration based on an index indicating road deterioration. In this case, as the index, for example, a cracking rate, a rutting amount, flatness, a maintenance control index (MCI), an international roughness index (IRI), or the like is used. The deterioration detection unit 23 detects the index as road deterioration when the value of the index exceeds a predetermined threshold).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Higuchi to contain a system for determining a count of road anomalies detected in a sequence of frames of the visual data, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the notification system by only notifying the driver when a road is substantially damaged or deteriorated, as discussed in Kimura; this may also improve the driving of the driver by limiting the number of notifications presented during driving, which may be distracting.
Shih, in the same field of endeavor, teaches wherein determining the count includes: determining a frame span interval (FSI) based on frames per second of the camera and speed of the vehicle (see at least Shih [¶ 61] A necessary count for image mapping and a frame interval corresponding to a specific velocity of a vehicle are calculated via the image mapping calculator 9011 and the frame interval calculator 9012. Thus the image mapping calculator 9011 determines a least quantity N.sub.least for image mapping while the frame interval calculator 9012 determines a quantity N and the frame interval based on parameters including at least one of the velocity of the vehicle and a sampling rate of the plurality of images 900, said 30 frames per second among these continuous images. The image mapping module 901 can obtain a velocity value, a length value, a distance value and a sampling rate value. The velocity value, the length value, the distance value and the sampling rate value respectively represent the velocity v, the dash length L, and the distance S and the sampling rate (or a frame rate). For example, the frame interval is determined based on the velocity value, the length value, the distance value and the sampling rate value).
Uliyar, in the same field of endeavor, teaches determining a specified number of frames to be skipped from multiple frames of the visual data based on the FSI, wherein the specified number of frames to be skipped when the FSI is greater than a specified threshold is lesser than the specified number of frames to be skipped when the FSI is lesser than the specified threshold, and selecting a next frame of the sequence for detecting the road anomaly based on the specified number of frames to be skipped (see at least Uliyar [Column 6 Lines 33-44] the predetermined number of neighboring frames may be not necessarily be the consecutive frames in the neighborhood of the frame in which the object is detected in the coarse search, and the frames may be skipped by a second skip factor (S2) between two neighboring frames. However, it noted that the second skip factor (S2) must be smaller than the first skip factor (S1) that was used for selecting the non-consecutive frames during the coarse detection. For instance, in an example, if the value of ‘S1’ is 10, the value of ‘S2’ can be 1 or 2. In one form, the predetermined number of neighboring frames may be selected based on the following expression (1)).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Higuchi to contain a system for determining an FSI based on the frames per second of the camera and the vehicle speed, determining a number of frames to skip, and skipping said frames, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of reducing the complexity of the anomaly detection, as discussed in Uliyar (see at least Uliyar [Column 19, Lines 60-65] optimization at the system level by managing frame assignment to simultaneously executing algorithms, resulting in reduced complexity and memory requirements).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Higuchi et al (US 20220053308 A1) in view of Young et al (US 20240200974 A1), Misawa et al (US 20180023965 A1) and Kimura et al (US 20240152876 A1). Hereafter referred to as Higuchi, Young, Misawa, and Kimura respectively.
Regarding Claim 5, Higuchi in view of Young teaches all limitations of Claim 1 as set forth above. However, Higuchi does not explicitly teach wherein transmitting the notification comprises: determining that a count of road anomalies detected exceeds a specified threshold.
Kimura, in the same field of endeavor, teaches wherein transmitting the notification comprises: determining that a count of road anomalies detected exceeds a specified threshold (see at least Kimura [¶ 40] The deterioration detection unit 23 detects road deterioration based on at least one of the image and the acceleration included in the sensor information. Here, the deterioration detection unit 23 detects, for example, a pothole as road deterioration…The deterioration detection unit 23 may detect road deterioration based on an index indicating road deterioration. In this case, as the index, for example, a cracking rate, a rutting amount, flatness, a maintenance control index (MCI), an international roughness index (IRI), or the like is used. The deterioration detection unit 23 detects the index as road deterioration when the value of the index exceeds a predetermined threshold).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Higuchi such that transmitting the notification comprises: determining that a count of road anomalies detected exceeds a specified threshold, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the notification system by only notifying the driver when a road is substantially damaged or deteriorated, as discussed in Kimura; this may also improve the driving of the driver by limiting the number of notifications presented during driving, which may be distracting.
Further, Higuchi does not explicitly teach sending a general notification to the end computing devices in the proximity of the roadside computing device, wherein the general notification is indicative of a presence of multiple road anomalies in the proximity of the end computing devices.
Misawa, in the same field of endeavor, teaches sending a general notification to the end computing devices in the proximity of the roadside computing device, wherein the general notification is indicative of a presence of multiple road anomalies in the proximity of the end computing devices (see at least Misawa [¶ 106-108] the CPU 21 extracts causality information indicating an anomaly of the anomaly transition node having causality with the anomaly occurrence node from the causality storing database 43, and counts the number of extracted anomalies for respective anomaly transition nodes having the causality.... the CPU 21 uses the number of extracted anomalies having causality to generate an anomaly transition map by mapping the transition of the anomaly. In S340, the CPU 21 transmits the generated anomaly transition map to the roadside unit 3 disposed in the vicinity of the anomaly occurrence node and the roadside unit 3 disposed in the vicinity of the node around the anomaly occurrence node, then halts the anomaly estimation process…Then, the roadside unit 3 that has received the anomaly transition map transmits the received anomaly transition map to the in-vehicle unit 5 installed in a vehicle running in the vicinity of the roadside unit 3).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Higuchi to include sending a general notification to the end computing devices in the proximity of the roadside computing device, wherein the general notification is indicative of a presence of multiple road anomalies in the proximity of the end computing devices, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the safety of the driver by including the quantity of road anomalies in the warning to the driver, thus making the driver aware of the number of hazards that may potentially need to be accounted for during driving.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Higuchi et al (US 20220053308 A1) in view of Young et al (US 20240200974 A1) and Misawa et al (US 20180023965 A1). Hereafter referred to as Higuchi, Young, and Misawa respectively.
Regarding Claim 6, Higuchi in view of Young teaches all limitations of Claim 1 as set forth above. However, Higuchi does not explicitly teach wherein the ML model includes a first ML model implemented at the end computing device.
Young, in the same field of endeavor, teaches wherein the ML model includes a first ML model implemented at the end computing device (see at least Young [¶ 6, 59] causing the apparatus to identify, within the data, the visual indication of the road surface anomaly includes causing the apparatus to: process data from the image sensor using a machine learning model, and identify, within the data from the image sensor, the visual indication of the road surface anomaly using the machine learning model…This image analysis can be performed in real-time as a vehicle is traveling along a road segment).
Further, Higuchi does not explicitly teach wherein the end computing device is configured to, in the first mode, stream portions of the visual data containing the detected road anomaly to the roadside computing device.
Misawa, in the same field of endeavor, teaches wherein the end computing device is configured to, in the first mode, stream portions of the visual data containing the detected road anomaly to the roadside computing device (see at least Misawa [¶ 54, 137] the CPU 21 identifies the type of the anomaly at the anomaly occurrence node by using the image data obtained at the date and time when the anomaly occurred and at the anomaly occurrence node determined in S310….The vehicle behavior data collection section 51 repeatedly collects the in-vehicle unit position information, the driving data, the behavior data, and the image data from the plurality of in-vehicle units 5 via the roadside units 3).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Higuchi such that the end computing device is configured to, in the first mode, stream portions of the visual data containing the detected road anomaly to the roadside computing device, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of reducing the computing load of the vehicle system by transmitting the visual data to the remote roadside device for processing of road anomaly data.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Higuchi et al (US 20220053308 A1) in view of Young et al (US 20240200974 A1), Misawa et al (US 20180023965 A1), and Golestani et al (WO 2023211994 A1). Hereafter referred to as Higuchi, Young, Misawa, and Golestani respectively.
Regarding Claim 7, Higuchi in view of Young teaches all limitations of Claim 1 as set forth above. However, Higuchi does not explicitly teach wherein the ML model further includes a second ML model implemented at the roadside computing device.
Golestani, in the same field of endeavor, teaches wherein the ML model further includes a second ML model implemented at the roadside computing device (see at least Golestani [English Translation pg.23 para.5] Processors of any number of electronic devices (e.g., roadside nodes, modules, servers, vehicles) may execute some or all of the software programming operations, features, or functions of the method 400…the roadside nodes include processors that execute software for performing the same or different data analytics or machine-learning operations on data from various data sources, such as sensors or other modules of the roadside node, other roadside nodes, or vehicles).
Further, Higuchi does not explicitly teach wherein the end computing device is configured to, in the second mode, stream portions of the visual data containing the detected road anomaly to the roadside computing device.
Misawa, in the same field of endeavor, teaches wherein the end computing device is configured to, in the second mode, stream portions of the visual data containing the detected road anomaly to the roadside computing device (see at least Misawa [¶ 54, 137] the CPU 21 identifies the type of the anomaly at the anomaly occurrence node by using the image data obtained at the date and time when the anomaly occurred and at the anomaly occurrence node determined in S310….The vehicle behavior data collection section 51 repeatedly collects the in-vehicle unit position information, the driving data, the behavior data, and the image data from the plurality of in-vehicle units 5 via the roadside units 3).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Higuchi such that the ML model further includes a second ML model implemented at the roadside computing device and the end computing device is configured to, in the second mode, stream portions of the visual data containing the detected road anomaly to the roadside computing device, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of reducing the computing load of the vehicle system by transmitting the visual data to the remote roadside device for processing of road anomaly data.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH A YANOSKA whose telephone number is (703)756-5891. The examiner can normally be reached M-F 9:00am to 5:00pm (Pacific Time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rachid Bendidi can be reached on (571) 272-4896. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSEPH ANDERSON YANOSKA/Examiner, Art Unit 3664
/RACHID BENDIDI/Supervisory Patent Examiner, Art Unit 3664