Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim(s) 1-20 are pending for examination.
This Action is made NON-FINAL.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) are “said computing module is used to…” as recited in claims 1 and 13, “said communication module is configured to…” as recited in claim 19, and “said data module is configured to…” as recited in claim 20.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
Regarding the computing module, the specification states at page 5: “a FAS computing module comprising a database engine server, an identity authentication server, a data management and exchange server, an optimized distribution subsystem server, and/or a web application server.”
Thus, the structure associated with the computing module will be interpreted as at least one processor and memory, as a server requires at least a processor and memory.
Regarding the communication module, the specification states at page 3: “The communication module comprises a terminal communication module. In some embodiments, the terminal communication module supports multiple communication modes, e.g., 3G, 4G, 5G, or 6G cellular communications; GPS; and/or WIFI (IEEE 802.11).”
Thus, the structure associated with the communication module will be interpreted as transceiver circuitry.
Regarding the data module, the specification states at page 5: “a FAS data module comprising a database server.”
Thus, the structure associated with the data module will be interpreted as at least one processor and memory, as a server requires at least a processor and memory.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recites sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim(s) 3 and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
In claim 3, the limitation “wherein said FAS allocates sensing functions, decision-making functions, and/or control functions to said CAH system and/or to said CAH system” does not make sense and appears to contain a typographical error. For purposes of examination, the limitation will be interpreted as “to said CAH system and/or to said CAV system.”
The term “highly reliable multi-channel information” in claim 19 is a relative term which renders the claim indefinite. The term “highly reliable multi-channel information” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. What constitutes “highly reliable” is subjective. At what point is the information not highly reliable? How many data packets must be dropped for the data to no longer be considered reliable, let alone highly reliable?
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claim(s) 1-12 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim(s) 1, 8, 15-16, 19-21, and 24-25 of US patent 12077175.
A table has been created below to compare the claims of the instant application and the claims of US patent 12077175 side by side.
Instant Application 18809933
US Patent 12077175
1. A function allocation system (FAS), comprising: a communication module; a data module; and a computing module, wherein said computing module is used to: analyze a scene according to sensing data; analyze automated driving function requirements for a number of scenes; deploy a function allocation method; and analyze functioning of a connected automated highway (CAH) system and a connected automated vehicle (CAV) system; wherein said function allocation method determines a function allocation of sensing functions, decision-making functions, and/or control functions; and wherein said FAS is configured to allocate sensing functions, decision-making functions, and/or control functions to a CAV system and to a CAH system according to said function allocation.
15. A function allocation system (FAS) comprising: a communication module; a data module; and a computing module, wherein said FAS is configured to perform a function allocation method comprising: analyzing a scene; analyzing system functional demands; analyzing system functional restrictions; determining a function allocation using a function demand-constraint matching algorithm, wherein determining a function allocation using a function demand-constraint matching algorithm comprises: calculating a function of limitation vectors K.sub.A,n,w; calculating a limitation function of the CAH system for a main scene A: F(I).sub.A,n,w; calculating a limitation function of the CAV system for the main scene A: F(V).sub.A,n,w; and providing a function allocation to provide automated driving for CAV in the main scene A according to:
[Formula image: media_image1.png, 46 × 222, greyscale]
allocating sensing functions, decision making functions, and control functions to a connected automated vehicle (CAV) system and to a connected automated highway (CAH) system according to said function allocation.
2. The FAS of claim 1, wherein a connected automated vehicle highway (CAVH) system comprises the CAV system, the CAH system, and the FAS.
25. A CAVH system comprising a connected automated highway (CAH) system, a connected automated vehicle (CAV) system, and a function allocation system (FAS) of claim 15.
3. The FAS of claim 1, wherein said FAS allocates sensing functions, decision-making functions, and/or control functions to said CAH system and/or to said CAH system.
15….. allocating sensing functions, decision making functions, and control functions to a connected automated vehicle (CAV) system and to a connected automated highway (CAH) system according to said function allocation.
4. The FAS of claim 1, wherein said FAS is configured to allocate sensing functions, decision-making functions, and/or control functions to said CAV system having a vehicle intelligence level V and to said CAH system having an infrastructure intelligence level I to provide a system intelligence level S for said CAVH system to manage automated driving.
16. The FAS of claim 15, wherein said FAS is configured to allocate sensing functions, decision-making functions, and control functions to said CAV system having a vehicle intelligence level V and to said CAH system having an infrastructure intelligence level I to provide a system intelligence level S for said CAVH system to manage automated driving.
5. The FAS of claim 1, wherein said CAH system comprises a sensing module, a decision-making module, a control module, and a communication module.
19. The FAS of claim 15, wherein said CAH system comprises a sensing module, a decision making module, a control module, and a communication module.
6. The FAS of claim 1, wherein said CAV system comprises a sensing module, a decision-making module, a control module, and a communication module.
20. The FAS of claim 15, wherein said CAV system comprises a sensing module, a decision-making module, a control module, and a communication module.
7. The FAS of claim 1, wherein said function allocation method comprises analyzing a scene; analyzing system functional demands; analyzing system functional restrictions; and determining the function allocation using a function demand- constraint matching algorithm.
15….. analyzing a scene; analyzing system functional demands; analyzing system functional restrictions; determining a function allocation using a function demand-constraint matching algorithm,…..
8. The FAS of claim 7, wherein analyzing a scene comprises dividing a scene A into multiple sub-scenes {A1, A2, A3, A4}, wherein A1 represents road facility characteristics of a road in the scene A; A2 represents road geometry characteristics of the road in the main scene A; A3 represents traffic flow characteristics of the road in the scene A; and A4 represents weather characteristics of the road in the scene A.
21. The FAS of claim 15, wherein analyzing a scene comprises dividing a main scene A into multiple sub-scenes {A.sub.1, A.sub.2, A.sub.3, A.sub.4}, where A.sub.1 represents road facility characteristics of a road in the main scene; A.sub.2 represents road geometry characteristics of the road in the scene; A.sub.3 represents traffic flow characteristics of the road in the scene; and A.sub.4 represents weather characteristics of the road in the scene.
9. The FAS of claim 7, wherein analyzing system functional demands comprises constructing a required feature set {Bn, Cw}, wherein Bn represents a control level and Cw represents a function feature; and constructing a scene requirement feature set Sm,n,w = {Am, Bn, Cw}, wherein Am represents a sub-scene, Bn represents said control level, and Cw represents said function feature.
1….. wherein analyzing system functional demands comprises constructing a required feature set {B.sub.n, C.sub.w}, where B.sub.n represents a control level and C.sub.w represents a function feature; and constructing a scene requirement feature set S.sub.m,n,w={A.sub.m, B.sub.n, C.sub.w}, where A.sub.m represents a sub-scene, B.sub.n represents said control level, and C.sub.w represents said function feature;…
10. The FAS of claim 7, wherein analyzing system functional restrictions comprises analyzing functional limitations of the CAH system for a sub-scene; constructing a limitation function Im,n,w of the CAH system for said sub-scene; analyzing functional limitations of the CAV system for said sub-scene; and constructing a limitation function Vm,n,w of the CAV system for said sub-scene.
8…. wherein analyzing system functional restrictions comprises analyzing functional limitations of the CAH system for a sub-scene; constructing a limitation function I.sub.m,n,w of the CAH system for said sub-scene; analyzing functional limitations of the CAV system for said sub-scene; and constructing a limitation function V.sub.m,n,w of the CAV system for said sub-scene;….
11. The FAS of claim 7, wherein determining the function allocation using a function demand-constraint matching algorithm comprises: calculating a function of limitation vectors
K.sub.A,n,w; calculating a limitation function of the CAH system for a main scene A: F(I).sub.A,n,w; calculating a limitation function of the CAV system for the main scene A: F(V).sub.A,n,w; and providing a function allocation to provide automated driving for CAV in the main scene A according to:
[Formula image: media_image1.png, 46 × 222, greyscale]
15. A function allocation system (FAS) comprising: a communication module; a data module; and a computing module, wherein said FAS is configured to perform a function allocation method comprising: analyzing a scene; analyzing system functional demands; analyzing system functional restrictions; determining a function allocation using a function demand-constraint matching algorithm, wherein determining a function allocation using a function demand-constraint matching algorithm comprises: calculating a function of limitation vectors K.sub.A,n,w; calculating a limitation function of the CAH system for a main scene A: F(I).sub.A,n,w; calculating a limitation function of the CAV system for the main scene A: F(V).sub.A,n,w; and providing a function allocation to provide automated driving for CAV in the main scene A according to:
[Formula image: media_image1.png, 46 × 222, greyscale]
allocating sensing functions, decision making functions, and control functions to a connected automated vehicle (CAV) system and to a connected automated highway (CAH) system according to said function allocation.
12. The FAS of claim 7, further comprising repeating the function allocation method when the scene changes.
24. The FAS of claim 15, further comprising repeating the function allocation method when a main scene A changes.
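The function demand-constraint matching recited in instant claims 7-11 and patent claims 15 and 21 can be illustrated by a minimal sketch. This is a hypothetical illustration only: the actual allocation rule appears solely in the formula image (media_image1.png) and is not reproduced here, so the threshold comparison below is an assumed stand-in for the claimed comparison of F(I)_{A,n,w} and F(V)_{A,n,w}, and all names are illustrative.

```python
# Hypothetical sketch of the claimed demand-constraint matching.
# The real allocation formula is in media_image1.png and is NOT
# reproduced; the <= comparison below is an assumed stand-in.

# Sub-scenes of a main scene A (instant claim 8 / patent claim 21)
SUB_SCENES = {
    "A1": "road facility characteristics",
    "A2": "road geometry characteristics",
    "A3": "traffic flow characteristics",
    "A4": "weather characteristics",
}

FUNCTIONS = ["sensing", "decision-making", "control"]

def allocate(limit_cah: dict, limit_cav: dict) -> dict:
    """Assign each function to the less-constrained subsystem.

    limit_cah / limit_cav map each function name to a scalar
    limitation value (stand-ins for F(I)_{A,n,w} and F(V)_{A,n,w});
    lower means less constrained.  Ties go to the CAH system here,
    an arbitrary choice since the claimed formula is unavailable.
    """
    return {
        f: ("CAH" if limit_cah[f] <= limit_cav[f] else "CAV")
        for f in FUNCTIONS
    }

# Example: infrastructure less constrained for sensing,
# vehicle less constrained for decision-making and control.
plan = allocate(
    limit_cah={"sensing": 0.2, "decision-making": 0.5, "control": 0.7},
    limit_cav={"sensing": 0.6, "decision-making": 0.4, "control": 0.3},
)
print(plan)
# {'sensing': 'CAH', 'decision-making': 'CAV', 'control': 'CAV'}
```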
Although the claims at issue are not identical, they are not patentably distinct from each other because both inventions are directed to a function allocation system. Claim(s) 1-12 are rejected based on claim(s) 1, 8, 15-16, 19-21, and 24-25 of US patent 12077175. Minor differences can be seen and are noted in the table above; however, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of US patent 12077175 to produce the system of the instant application.
This is a nonstatutory double patenting rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-5, 7-12, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ran et al. (US 20190096238 A1; hereinafter known as Ran) in view of Altintas et al. (US 20200019445 A1, hereinafter known as Altintas).
Regarding Claim 1, Ran teaches A function allocation system (FAS) comprising:
{Para [0114] “In some embodiments, the IRIS contains high performance computation capability to allocate computation power to realize sensing, prediction, planning and decision making, and control, specifically, at three levels”
}
a communication module;
a data module; and
{Para [0070-0072] “In some embodiments, the RSU has one or more module configurations including, but not limited to: a. Sensing module for driving environment detection; b. Communication module for communication with vehicles, TCUs and cloud via wired or wireless media; c. Data processing module that processes the data from the sensing and communication module;”
}
a computing module,
{Para [0160] “FIG. 13 shows an exemplary IRIS Computation Flowchart. 1301: Data Collected From RSU, including but not limited to image data, video data, radar data, On-board unit data. 1302: Data Allocation Module, allocating computation resources for various data processing. 1303 Computation Resources Module for actual data processing.”
}
wherein said computing module is used to: analyze a scene according to sensing data;
{Para [0111] “In some embodiments, the systems and methods provide a virtual traffic light control function. In some such embodiments, a cloud-based traffic light control system, characterized by including sensors in road side such as sensing devices, control devices and communication devices. In some embodiments, the sensing components of RSUs are provided on the roads (e.g, intersections) for detecting road vehicle traffic, for sensing devices associated with the cloud system over a network connection, and for uploading information to the cloud system. The cloud system analyzes the sensed information and sends information to vehicles through communication devices.”
Detecting road traffic and analyzing it can be considered analyzing a scene.
}
deploy a function allocation method;
wherein said function allocation method determines a function allocation of sensing functions, decision-making functions, and/or control functions;
and wherein said FAS is configured to allocate sensing functions, decision-making functions, and/or control functions to a CAV system and to a CAH system according to said function allocation.
{Para [0027] “c. The vehicle OBUs, as above, collect vehicle generated data, such as vehicle movement and condition and send to RSUs, and receive inputs from the RSUs. Based on the inputs from RSU, OBU facilitates vehicle control. When the vehicle control system fails, the OBU may take over in a short time period to stop the vehicle safely.”
Where having vehicle control performed selectively by the vehicle OBU and the RSU can be considered allocating the control function between the CAV system and the CAH system.
Sensing functions and decision-making functions are also allocated as discussed in Para [0114] “In some embodiments, the IRIS contains high performance computation capability to allocate computation power to realize sensing, prediction, planning and decision making, and control”
}
Ran does not teach: analyze automated driving function requirements for a number of scenes; analyze functioning of a connected automated highway (CAH) system and a connected automated vehicle (CAV) system.
However, Altintas teaches analyze automated driving function requirements for a number of scenes;
{Para [0007] “In general, another innovative aspect of the subject matter described in this disclosure may be embodied in systems comprising: one or more processors; one or more memories storing instructions that, when executed by the one or more processors, cause the system to: receive a computational task; determine a processing resource requirement of the computational task; determine, from vehicles on a road segment, candidate participant vehicles proximally located relative to one another on the road segment at a first timestamp; determine vehicle movement data of the candidate participant vehicles; determine available processing resources of a candidate temporal vehicular virtual server (TVVS) at the first timestamp, the candidate TVVS comprising the candidate participant vehicles at the first timestamp; estimate available processing resources of the candidate TVVS at a second timestamp based on the vehicle movement data of the candidate participant vehicles, the second timestamp being subsequent to the first timestamp; determine that the computational task is executable on the candidate TVVS based on the processing resource requirement of the computation task, the available processing resources of the candidate TVVS at the first timestamp, and the estimated available processing resources of the candidate TVVS at the second timestamp; responsive to determining that the computational task is executable on the candidate TVVS, instruct the candidate participant vehicles to form a TVVS; and assign the computational task to the TVVS to perform an execution of the computational task.”
}
analyze functioning of a connected automated highway (CAH) system and a connected automated vehicle (CAV) system;
{Para [0087] “In block 314, the local management server 107 may process the task request notification. In particular, in the local management server 107, the message processor 202 may analyze the task request notification to extract the task metadata and/or the task description of the computational task. In block 316, the local management server 107 may determine the computing entity to execute the computational task. In some embodiments, the local management server 107 may assign the computational task to an individual vehicle platform 103 or a stationary computing server (e.g., the centralized server 101, the local management server 107 itself, another local server 107, etc.) to perform the task execution. In particular, in the local management server 107, the task manager 206 may retrieve the resource availability data of the individual vehicle platforms 103, the centralized servers 101, the local management server 107, and/or other local servers 107 from the data store 126. As discussed elsewhere herein, the resource availability data may describe the available processing resources of these computing entities. In some embodiments, the task manager 206 may determine the computing entity executing the computational task from these computing entities based on their available processing resources. For example, the computing entity executing the computational task may be the computing entity that has the available processing resources satisfying the processing resource requirement of the computational task indicated in the task metadata.”
}
allocating sensing functions, decision-making functions,
{computational tasks can be allocated to vehicles or servers as discussed para [0048] “In some embodiments, the computational task may be a unit of processing workload to be assigned to and/or executed by a computing entity of the system 100 (e.g., the centralized servers 101, the local servers 107, the TVVSs 109, the vehicle platforms 103, etc.)….. Non-limiting examples of the processing operations include, but are not limited to, data sensing operations, data processing operations, data storing operations, data communicating operations, etc. In some embodiments, each processing operation may itself be considered a computational task and may be performed by different computing entities. Other types of processing operation are also possible and contemplated.”
These servers can be roadside infrastructure as discussed in Para [0057] “The local server 107 includes a hardware and/or virtual server that includes a processor, a memory, and network communication capabilities (e.g., a communication unit). In some embodiments, the local server 107 may manage and/or perform the execution of computational tasks for the computing entities of the system 100. In some embodiments, the local server 107 may have a coverage area 192 including a road segment on which the vehicle platforms 103 and the participant vehicles of multiple TVVSs 109 travel. These vehicle platforms 103 and/or these participant vehicles of the TVVSs 109 may have their computational tasks being managed and/or executed by the local server 107. In some embodiments, the local server 107 may also manage and/or execute the computational tasks for other local servers 107 and the centralized servers 101. In some embodiments, the local server 107 may be implemented as a computing infrastructure located on the roadside of the road segment (e.g., a roadside unit). In some embodiments, the local server 107 may be may be implemented as a stationary computing server located within a predefined distance from the corresponding coverage area 192 (e.g., 30 km).”
}
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ran to incorporate the teachings of Altintas to determine functional requirements and allocate to computing resources based on these because as is discussed in para [0011] of Altintas “The novel technology for managing computational tasks in the vehicle context presented in this disclosure is particularly advantageous in a number of respects. For example, the technology described herein is capable of monitoring and estimating the available processing resources of the computing entities (e.g., temporal vehicular virtual servers), and reassigning a computational task from a current computing entity executing the computational task to another computing entity if necessary. As the computational task is reassigned before the temporal vehicular virtual server currently executing the computational task dissolves and/or before the available processing resources of the current computing entity become insufficient to complete the computational task, the present technology can avoid redundantly assigning the computational task to multiple computing entities of the computing system. As a result, the utilization of the processing resources provided by the computing system can be optimized without increasing the risk of task failure or degrading the quality of service. As a further example, the present technology is capable of assigning the computational tasks to the computing entities based on communication profile of the communication tasks. Therefore, each computational task can be executed by a computing entity that minimizes the communication resources required for receiving and distributing relevant data of the communication task, and/or minimizes the communication bandwidth over communication networks that incur monetary costs for data transmission.”
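The task-assignment logic Altintas describes in para [0087] (assigning a computational task to a computing entity whose available processing resources satisfy the task's requirement) can be sketched as follows. All names and numeric values below are hypothetical illustrations, not taken from the reference.

```python
# Minimal sketch of Altintas's task assignment (para [0087]):
# assign a task to a computing entity whose available processing
# resources satisfy the task's requirement.  Entity names and
# resource units are hypothetical.

from typing import Optional

def assign_task(required: float, entities: dict) -> Optional[str]:
    """Return the name of the first entity whose available
    resources meet `required`, or None if none qualifies."""
    for name, available in entities.items():
        if available >= required:
            return name
    return None

# Candidate entities: vehicle platforms, local (roadside) servers,
# and centralized servers, as in Altintas's system 100.
entities = {
    "vehicle_platform_103a": 2.0,   # available compute units
    "local_server_107": 8.0,
    "centralized_server_101": 32.0,
}
print(assign_task(required=5.0, entities=entities))
# local_server_107
```

A real implementation would also weigh estimated future availability (Altintas's first/second timestamp estimates) rather than a single snapshot; this sketch keeps only the requirement-satisfaction step.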
Regarding Claim 2, Ran in view of Altintas teaches The FAS of claim 1. Altintas further teaches wherein a connected automated vehicle highway (CAVH) system comprises the CAV system, the CAH system, and the FAS.
{Para [0025] “FIG. 1A is a block diagram of an example system 100 for managing computational tasks. As shown, the system 100 includes one or more centralized servers 101a . . . 101n, one or more local servers 107a . . . 107n, one or more vehicle platforms 103a . . . 103n, and one or more temporal vehicular virtual servers (TVVS) 109a . . . 109n. As depicted, each TVVS 109 may comprise one or more vehicle platforms 103 connected to each other to form a vehicle cluster, the vehicle platforms 103 in the vehicle cluster may contribute their available processing resources to collaboratively perform the functionalities of the temporal vehicular virtual server (TVVS) 109. The centralized servers 101 and the local servers 107 may be stationary computing servers and may be coupled for electronic communication via a network 105. The TVVS 109 may be communicatively coupled to the network 105 as reflected by signal line 156, communicatively coupled to the local servers 107 as reflected by signal line 160, and communicatively coupled to other TVVSs 109 as reflected by signal line 170. The vehicle platforms 103 may be communicatively coupled to the TVVSs 109 as reflected by signal line 164, communicatively coupled to local servers 107 as reflected by signal line 168, and communicatively coupled to other components of the system 100 via the network 105 through the signal line 166.”
}
Regarding Claim 3, Ran in view of Altintas teaches The FAS of claim 1. Ran further teaches wherein a connected automated vehicle highway (CAVH) system comprises the CAV system, the CAH system, and the FAS.
{ Para [0027] “c. The vehicle OBUs, as above, collect vehicle generated data, such as vehicle movement and condition and send to RSUs, and receive inputs from the RSUs. Based on the inputs from RSU, OBU facilitates vehicle control. When the vehicle control system fails, the OBU may take over in a short time period to stop the vehicle safely.”
Having vehicle control performed selectively by the vehicle OBU and the RSU can be considered allocating the control function between the CAV and the CAVH.
Sensing functions and decision-making functions are also allocated as discussed in Para [0114] “In some embodiments, the IRIS contains high performance computation capability to allocate computation power to realize sensing, prediction, planning and decision making, and control”
}
Regarding Claim 4, Ran in view of Altintas teaches The FAS of claim 1. Ran further teaches wherein said FAS is configured to allocate sensing functions, decision-making functions, and control functions to said CAV system having a vehicle intelligence level V and to said CAH system having an infrastructure intelligence level I to provide a system intelligence level S for said CAVH system to manage automated driving.
{Para [0027] “c. The vehicle OBUs, as above, collect vehicle generated data, such as vehicle movement and condition and send to RSUs, and receive inputs from the RSUs. Based on the inputs from RSU, OBU facilitates vehicle control. When the vehicle control system fails, the OBU may take over in a short time period to stop the vehicle safely.”
Having vehicle control performed selectively by the vehicle OBU and the RSU can be considered allocating the control function between the CAV and the CAVH.
Sensing functions and decision-making functions are also allocated as discussed in Para [0114] “In some embodiments, the IRIS contains high performance computation capability to allocate computation power to realize sensing, prediction, planning and decision making, and control”
Because both the CAV and the CAVH have computational capabilities, they can be said to have a vehicle intelligence level and an infrastructure intelligence level, respectively. Additionally, since they work collaboratively by passing data between each other, there is a combined intelligence that can be considered a system intelligence level.
}
Regarding Claim 5, Ran in view of Altintas teaches The FAS of claim 1. Ran further teaches wherein said CAH system comprises a sensing module, a decision-making module, a control module; and a communication module.
{Para [0070-0072] “In some embodiments, the RSU has one or more module configurations including, but not limited to: a. Sensing module for driving environment detection; b. Communication module for communication with vehicles, TCUs and cloud via wired or wireless media;”
[0026] “TCU/TCC and traffic operation centers provides short-term and long-term transportation behavior prediction and management, planning and decision making, and collecting/processing transportation information with or without cloud information and computing services;”
[0103] “Traffic Control Unit (TCU), realizes real-time vehicle control and data processing functionality, that are highly automated based on preinstalled algorithms.”
TCC stands for Traffic Control Center and is shown outside the vehicle in FIG. 2; it is therefore part of the infrastructure. As these functions are performed one on the TCC and one on the TCU, they can be considered to be performed on separate modules.
}
Regarding Claim 7, Ran in view of Altintas teaches The FAS of claim 1. Altintas further teaches wherein said function allocation method comprises analyzing a scene;
{Para [0007] “In general, another innovative aspect of the subject matter described in this disclosure may be embodied in systems comprising: one or more processors; one or more memories storing instructions that, when executed by the one or more processors, cause the system to: receive a computational task; determine a processing resource requirement of the computational task; determine, from vehicles on a road segment, candidate participant vehicles proximally located relative to one another on the road segment at a first timestamp; determine vehicle movement data of the candidate participant vehicles; determine available processing resources of a candidate temporal vehicular virtual server (TVVS) at the first timestamp, the candidate TVVS comprising the candidate participant vehicles at the first timestamp; estimate available processing resources of the candidate TVVS at a second timestamp based on the vehicle movement data of the candidate participant vehicles, the second timestamp being subsequent to the first timestamp; determine that the computational task is executable on the candidate TVVS based on the processing resource requirement of the computation task, the available processing resources of the candidate TVVS at the first timestamp, and the estimated available processing resources of the candidate TVVS at the second timestamp; responsive to determining that the computational task is executable on the candidate TVVS, instruct the candidate participant vehicles to form a TVVS; and assign the computational task to the TVVS to perform an execution of the computational task.”
}
Altintas further teaches analyzing system functional demands;
{Para [0007], reproduced in full above with respect to analyzing a scene, where determining the processing resource requirement of the computational task corresponds to analyzing system functional demands.
}
analyzing system functional restrictions;
{Para [0007], reproduced in full above with respect to analyzing a scene, where determining and estimating the available processing resources of the candidate TVVS at the first and second timestamps corresponds to analyzing system functional restrictions.
}
determining a function allocation using a function demand-constraint matching algorithm;
{Para [0007], reproduced in full above with respect to analyzing a scene, where determining that the computational task is executable on the candidate TVVS based on the processing resource requirement of the task and the available processing resources at the first and second timestamps corresponds to a function demand-constraint matching algorithm.
}
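The executability determination quoted above can be illustrated with a brief sketch (a hypothetical simplification for illustration only; the names `task_executable`, `task_demand`, `resources_t1`, and `estimated_resources_t2` do not appear in Altintas): a task is assigned to the candidate TVVS only if its processing resource requirement is covered both by the available resources at the first timestamp and by the resources estimated for the second, subsequent timestamp.

```python
def task_executable(task_demand, resources_t1, estimated_resources_t2):
    """Demand-constraint matching per Altintas Para [0007], simplified to
    scalar resource quantities: a computational task is executable on the
    candidate TVVS only if its processing resource requirement is met by
    the available resources at the first timestamp AND by the estimated
    available resources at the second, subsequent timestamp."""
    return task_demand <= resources_t1 and task_demand <= estimated_resources_t2
```

Only when this check passes are the candidate participant vehicles instructed to form a TVVS and the task assigned to it.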
Regarding Claim 8, Ran in view of Altintas teaches The FAS of claim 7. Altintas further teaches wherein analyzing a scene comprises dividing a main scene A into multiple sub-scenes {A1, A2, A3, A4}, where A1 represents the road facility characteristics of a road in the main scene; A2 represents the road geometry characteristics of the road in the scene; A3 represents the traffic flow characteristics of the road in the scene; and A4 represents the weather characteristics of the road in the scene
{Para [0042-0048] “In some embodiments, the sensing functions of an IRIS generate a comprehensive information at real-time, short-term, and long-term scale for transportation behavior prediction and management, planning and decision-making, vehicle control, and other functions. The information includes but is not limited to: [0043] a. Vehicle surrounding, such as: spacing, speed difference, obstacles, lane deviation; [0044] b. Weather, such as: weather conditions and pavement conditions; [0045] c. Vehicle attribute data, such as: speed, location, type, automation level; [0046] d. Traffic state, such as: traffic flow rate, occupancy, average speed; [0047] e. Road information, such as: signal, speed limit; and [0048] f. Incidents collection, such as: occurred crash and congestion.”
Where road signal (assumed to mean a traffic light) is a road facility characteristic, and obstacles can be considered road geometry characteristics. As the information is divided into categories a, b, d, and e, these categories can be considered sub-scenes. Applicant has not described how these sub-scenes are used in this claim limitation or the process of subdividing.
}
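Purely for illustration (the field names below are hypothetical and drawn from neither reference), the claimed division of a main scene A into sub-scenes {A1, A2, A3, A4} could be represented as:

```python
from dataclasses import dataclass, field

@dataclass
class MainScene:
    """Main scene A divided into the four claimed sub-scenes."""
    road_facilities: dict = field(default_factory=dict)  # A1: e.g., signals, speed limits
    road_geometry: dict = field(default_factory=dict)    # A2: e.g., obstacles, lane layout
    traffic_flow: dict = field(default_factory=dict)     # A3: e.g., flow rate, average speed
    weather: dict = field(default_factory=dict)          # A4: e.g., pavement conditions

    def sub_scenes(self):
        # {A1, A2, A3, A4}
        return [self.road_facilities, self.road_geometry,
                self.traffic_flow, self.weather]
```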
Regarding Claim 12, Ran in view of Altintas teaches The FAS of claim 7. Altintas further teaches further comprising repeating the function allocation method when the scene changes.
{Para [0109] “FIGS. 6A and 6B illustrate a flowchart of an example method 600 for reassigning a computational task. As discussed elsewhere herein, the available processing resources of the TVVS 109 may significantly change over time due to the vehicle movement of the participant vehicles. Therefore, a computational task being executed on the TVVS 109 may need to be reassigned to another computing entity as the available processing resources of the TVVS 109 become insufficient to complete the execution of the computational task. In block 602, the local management server 107 may generate and transmit a task execution instruction to a first TVVS 109a. The task execution instruction may instruct the first TVVS 109a to execute a computational task. As discussed elsewhere herein, the task execution instruction may include the task metadata and the task description of the computational task.”
A vehicle position change, e.g., a main scene change, causes reallocation.
}
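The reassignment behavior of method 600 (FIGS. 6A and 6B of Altintas) can be sketched as follows (a hypothetical simplification; `serve_task` and the scalar resource trace are illustrative only and do not appear in the reference):

```python
def serve_task(task_demand, resource_trace):
    """Monitor a task executing on a TVVS.

    resource_trace: the TVVS's available processing resources at successive
    timesteps, which change as the participant vehicles move (i.e., as the
    scene changes). Returns the first timestep at which the resources no
    longer cover the task's requirement, i.e., when the task must be
    reassigned to another computing entity, or None if no reassignment
    is needed (cf. method 600, FIGS. 6A and 6B).
    """
    for t, available in enumerate(resource_trace):
        if available < task_demand:
            return t  # available resources insufficient: reassign the task
    return None
```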
Regarding Claim 14, Ran in view of Altintas teaches The FAS of claim 2. Altintas further teaches configured to provide a collaborative sensing function, a collaborative decision-making function, and/or a collaborative control function to said CAVH system.
{Para [0025] “FIG. 1A is a block diagram of an example system 100 for managing computational tasks. As shown, the system 100 includes one or more centralized servers 101a . . . 101n, one or more local servers 107a . . . 107n, one or more vehicle platforms 103a . . . 103n, and one or more temporal vehicular virtual servers (TVVS) 109a . . . 109n. As depicted, each TVVS 109 may comprise one or more vehicle platforms 103 connected to each other to form a vehicle cluster, the vehicle platforms 103 in the vehicle cluster may contribute their available processing resources to collaboratively perform the functionalities of the temporal vehicular virtual server (TVVS) 109. The centralized servers 101 and the local servers 107 may be stationary computing servers and may be coupled for electronic communication via a network 105. The TVVS 109 may be communicatively coupled to the network 105 as reflected by signal line 156, communicatively coupled to the local servers 107 as reflected by signal line 160, and communicatively coupled to other TVVSs 109 as reflected by signal line 170. The vehicle platforms 103 may be communicatively coupled to the TVVSs 109 as reflected by signal line 164, communicatively coupled to local servers 107 as reflected by signal line 168, and communicatively coupled to other components of the system 100 via the network 105 through the signal line 166.”
}
Regarding Claim 15, Ran in view of Altintas teaches The FAS of claim 1. Altintas further teaches wherein the FAS receives system-level information and environmental sensing data from the CAV system and the CAH system through the communication module, stores the system-level information and the environmental sensing data in the data module, and transmits the system-level information and the environmental sensing data to the computing module.
{Para [0043-0044] “As depicted in FIG. 1B, the TVVS 109 may be communicatively coupled to other vehicle platforms 103 to send and receive data to and from other vehicle platforms 103 via the V2V connections 164. The TVVS 109 may also be communicatively coupled to other TVVSs 109 to send and receive data to and from other TVVSs 109 via the V2V connections 170. In some embodiments, the TVVS 109 may be communicatively coupled to the local servers 107 to send and receive data to and from the local servers 107 via the V2I connections (e.g., the signal line 160). In particular, in some embodiments, the local server 107 may be a computing infrastructure located on the roadside of the road segment on which the participant vehicles of the temporal vehicular virtual server (TVVS) 109 travel. Therefore, the TVVS 109 may establish the V2I connection 160 with the local server 107 to send and receive data to and from the local server 107. In some embodiments, the TVVS 109 may also be communicatively coupled to the network 105 to send and receive data to and from other components of the system 100 via the network 105. For example, the TVVS 109 may send and receive data to and from the centralized servers 101, the local servers 107, other TVVSs 109, etc. via the network 105 through the network connections (e.g., the signal line 156). As discussed elsewhere herein, in some embodiments, the data transmission via the network 105 through the network connections 156 may incur monetary cost.
In some embodiments, the temporal vehicular virtual server (TVVS) 109 may include a virtual processor, a virtual memory, and a virtual communication unit virtualized from the available processing resources contributed by the participant vehicles of the temporal vehicular virtual server (TVVS) 109. In some embodiments, the temporal vehicular virtual server 109 may include a virtual instance 120p of the task managing application 120 and a virtual data store 128 (not shown). In some embodiments, the participant vehicles of the temporal vehicular virtual server (TVVS) 109 may contribute the data storage resource of their vehicle data stores 123 to form the virtual data store 128. Thus, the virtual data store 128 may include a non-transitory storage medium that stores various types of data for access and/or retrieval by the task managing application 120.”
Para [0072] “FIG. 2B is a structure diagram 250 of the task managing application 120 implemented in various computing entities of the system 100 (e.g., temporal vehicular virtual server (TVVS) 109, the local servers 107, and/or the centralized servers 101, etc.). As depicted in FIG. 2B, if the task managing application 120 is implemented in the TVVS 109, the task managing application 120 may be optionally configured to enable the message processor 202, the task executor 208, the resource manager 210, and disable other components of the task managing application 120. As illustrated, the TVVS 109 may include a resource pool 252 managed by the resource manager 210. The processing resources in the resource pool 252 may include various resource components, e.g., computing resource (e.g., number of CPU cycles), data storage resource, memory resource, communication resource, sensing resource (e.g., sensor data captured by the sensors 113 of the participant vehicles included in the TVVS 109), etc. As discussed elsewhere herein, these resource components may be in the form of virtual resource units that can be allocated to various computational tasks to perform the task execution.”
Para [0037] “The sensor(s) 113 includes any type of sensors suitable for the vehicle platform(s) 103. The sensor(s) 113 may be configured to collect any type of signal data suitable to determine characteristics of the vehicle platform 103 and/or its internal and external environments. Non-limiting examples of the sensor(s) 113 include various optical sensors (CCD, CMOS, 2D, 3D, light detection and ranging (LIDAR), cameras, etc.), audio sensors, motion detection sensors, barometers, altimeters, thermocouples, moisture sensors, infrared (IR) sensors, radar sensors, other photo sensors, gyroscopes, accelerometers, speedometers, steering sensors, braking sensors, switches, vehicle indicator sensors, windshield wiper sensors, geo-location sensors (e.g., GPS sensors), orientation sensor, wireless transceivers (e.g., cellular, WiFi™, near-field, etc.), sonar sensors, ultrasonic sensors, touch sensors, proximity sensors, distance sensors, etc. In some embodiments, one or more sensors 113 may include externally facing sensors provided at the front side, rear side, right side, and/or left side of the vehicle platform 103 in order to capture the situational context surrounding the vehicle platform 103. In some embodiments, as the vehicle platforms 103 may connect to one another to form the temporal vehicular virtual server (TVVS) 109, the TVVS 109 may also have the sensing capabilities provided by the sensors 113 of these multiple vehicle platforms 103.”
}
Regarding Claim 16, Ran in view of Altintas teaches The FAS of claim 1. Ran further teaches wherein a vehicle intelligent unit (VIU) is configured to control a vehicle using data received from a roadside intelligent unit (RIU).
{Para [0041] “In some embodiments, a vehicle control module is used to execute control instructions from an RSU for driving tasks such as, car following and lane changing.”
}
Regarding Claim 17, Ran in view of Altintas teaches The FAS of claim 16. Ran further teaches wherein: the CAH system provides traffic management and vehicle guidance strategies for global optimization of traffic, wherein the traffic management and vehicle guidance strategies include lane-level traffic control measures comprising lane management and variable speed limit control; and the CAV system makes decisions in simple emergencies.
{para [0101] “In some embodiments, the TCCs and TCUs, along with the RSUs, may have a hierarchical structure including, but not limited to: [0102] a. Traffic Control Center (TCC) realizes comprehensive traffic operations optimization, data processing and archiving functionality, and provides human operations interfaces. A TCC, based on the coverage area, may be further classified as macroscopic TCC, regional TCC, and corridor TCC; [0103] b. Traffic Control Unit (TCU), realizes real-time vehicle control and data processing functionality, that are highly automated based on preinstalled algorithms. A TCU may be further classified as Segment TCU and point TCUs based on coverage areas; and [0104] c. A network of Road Side Units (RSUs), that receive data flow from connected vehicles, detect traffic conditions, and send targeted instructions to vehicles, wherein the point or segment TCU can be physically combined or integrated with an RSU.”
Para [0027] “The vehicle OBUs, as above, collect vehicle generated data, such as vehicle movement and condition and send to RSUs, and receive inputs from the RSUs. Based on the inputs from RSU, OBU facilitates vehicle control. When the vehicle control system fails, the OBU may take over in a short time period to stop the vehicle safely. In some embodiments, the vehicle OBU contains one or more of the following modules: (1) a communication module, (2) a data collection module and (3) a vehicle control module.”
}
Regarding Claim 18, Ran in view of Altintas teaches The FAS of claim 1. Ran further teaches wherein the VIU is configured to assume control of the vehicle when the vehicle condition and/or traffic condition prevents the automated driving system of the vehicle from driving the vehicle, wherein the vehicle condition and/or traffic condition is an adverse weather condition, a traffic incident, a system failure, and/or a communication failure.
{ para [0101-0104] “In some embodiments, the TCCs and TCUs, along with the RSUs, may have a hierarchical structure including, but not limited to: a. Traffic Control Center (TCC) realizes comprehensive traffic operations optimization, data processing and archiving functionality, and provides human operations interfaces. A TCC, based on the coverage area, may be further classified as macroscopic TCC, regional TCC, and corridor TCC; b. Traffic Control Unit (TCU), realizes real-time vehicle control and data processing functionality, that are highly automated based on preinstalled algorithms. A TCU may be further classified as Segment TCU and point TCUs based on coverage areas; and c. A network of Road Side Units (RSUs), that receive data flow from connected vehicles, detect traffic conditions, and send targeted instructions to vehicles, wherein the point or segment TCU can be physically combined or integrated with an RSU.”
Para [0027] “The vehicle OBUs, as above, collect vehicle generated data, such as vehicle movement and condition and send to RSUs, and receive inputs from the RSUs. Based on the inputs from RSU, OBU facilitates vehicle control. When the vehicle control system fails, the OBU may take over in a short time period to stop the vehicle safely. In some embodiments, the vehicle OBU contains one or more of the following modules: (1) a communication module, (2) a data collection module and (3) a vehicle control module.”
}
Regarding Claim 19, Ran in view of Altintas teaches The FAS of claim 1. Ran further teaches wherein said communication module is configured to provide highly reliable multi-channel information and manage communication of sensing data and/or the function allocation.
{para [0211-0214] “a) communication module which include three communication channels:
Communication with vehicles including DSRC/4G/5G (e.g., MK5 V2X from Cohda Wireless)
Communication with point TCUs including wired/wireless communication (e.g., Optical Fiber from Cablesys)
Communication with cloud including wired/wireless communication with at least 20M total bandwidth”
Para [0028-0040] states that the communication module communicates data from the data collection module. The data collection module collects both internal and external sensor data.
}
Regarding Claim 20, Ran in view of Altintas teaches The FAS of claim 1. Altintas further teaches wherein said data module is configured to store sensing data and/or to fuse sensing data.
{para [0032] “In the context of the vehicle platform 103, the processor may be an electronic control unit (ECU) implemented in the vehicle platform 103 such as a car, although other types of platform are also possible and contemplated. The ECUs may receive and store the sensor data (e.g., the Global Positioning System (GPS) data), the resource data (e.g., the processing capacity data), etc. as vehicle operation data in the vehicle data store 123 for access and/or retrieval by the task managing application 120. In some implementations, the processor(s) 115 may be capable of generating and providing electronic display signals to the input/output device(s), supporting the display of images, capturing and transmitting images, performing complex tasks including various types of task execution monitoring and resource estimation, etc. In some implementations, the processor(s) 115 may be coupled to the memory(ies) 117 via the bus 154 to access data and instructions therefrom and store data therein. The bus 154 may couple the processor(s) 115 to the other components of the vehicle platform(s) 103 including, for example, the sensor(s) 113, the memory(ies) 117, the communication unit(s) 119, and/or the vehicle data store 123.”
}
Claim(s) 6 is rejected under 35 U.S.C. 103 as being unpatentable over Ran et al. (US 20190096238 A1; hereinafter known as Ran) in view of Altintas et al. (US 20200019445 A1, hereinafter known as Altintas) and Zhao et al. (US 20220026224 A1; hereinafter Zhao).
Regarding Claim 6, Ran in view of Altintas teaches The FAS of claim 1. Ran further teaches wherein said CAV system comprises a sensing module;
{Para [0024] “In some embodiments, the vehicle OBU contains one or more of the following modules: (1) a communication module, (2) a data collection module and (3) a vehicle control module. Other modules may also be included.”
Para [0036] “In some embodiments, a data collection module collects data from vehicle installed external and internal sensors and monitors vehicle and human status, including but not limited to one or more of: [0037] a. Vehicle engine status; [0038] b. Vehicle speed; [0039] c. Surrounding objects detected by vehicles; and [0040] d. Human conditions.”
}
Ran in view of Altintas does not teach wherein said CAV system comprises a sensing module; a decision-making module; a control module; and a communication module.
However, Zhao teaches wherein said CAV system comprises a sensing module; a decision-making module;
{para [0032] “Although not illustrated in FIG. 1, the in-vehicle control system 150 and/or the vehicle subsystems 140 may include a navigation subsystem 300 (e.g., as shown in FIG. 3) configured to provide navigation instructions to the plurality of vehicle subsystems 104. Further details regarding the subsystem 300 are provided below.”
Para [0074] “At block 430, the decision module 304 can generate lane level navigation information based on the possible routes 208, 210 received from the navigation module 302. For example, the decision module 304 can select the route having the lowest difficulty value for navigation. The decision module 304 may also use additional route information based on data from vehicle sensors in selecting one of the possible routes 208, 210 for navigation. Example vehicle sensor data which can be incorporated into the selection of one of the possible routes 208, 210 includes data indicative of the real-time conditions of one or more of the lane segments of the roadway. For example, the real-time conditions can include traffic data (e.g., traffic speed, pedestrian traffic, etc.), route data (e.g., lane closure, construction, etc.), obstacle data (e.g., a fallen tree, objects fallen off of other vehicles in the road, etc.), etc. In one example, the cameras of the vehicle sensor subsystem 144 can identify stopped traffic in an upcoming lane segment of one or more of the possible routes 208, 210. The decision module 304 can use this data to supplement the difficulty values for each of the possible routes 208, 210 received from the navigation module 302 to potentially select a different one of the possible routes 208, 210 to avoid the stopped traffic.”
As can be seen in FIG. 3, the decision module 304 is part of the navigation subsystem 300, and the navigation subsystem is part of the vehicle, as discussed in para [0032].
FIG. 1, label 144, shows the vehicle sensor subsystem, i.e., the sensing module.
}
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ran to incorporate the teachings of Zhao to have the decision module be in the CAV because it allows routing to be conducted more effectively (para [0074] “At block 430, the decision module 304 can generate lane level navigation information based on the possible routes 208, 210 received from the navigation module 302. For example, the decision module 304 can select the route having the lowest difficulty value for navigation. The decision module 304 may also use additional route information based on data from vehicle sensors in selecting one of the possible routes 208, 210 for navigation. Example vehicle sensor data which can be incorporated into the selection of one of the possible routes 208, 210 includes data indicative of the real-time conditions of one or more of the lane segments of the roadway. For example, the real-time conditions can include traffic data (e.g., traffic speed, pedestrian traffic, etc.), route data (e.g., lane closure, construction, etc.), obstacle data (e.g., a fallen tree, objects fallen off of other vehicles in the road, etc.), etc. In one example, the cameras of the vehicle sensor subsystem 144 can identify stopped traffic in an upcoming lane segment of one or more of the possible routes 208, 210. The decision module 304 can use this data to supplement the difficulty values for each of the possible routes 208, 210 received from the navigation module 302 to potentially select a different one of the possible routes 208, 210 to avoid the stopped traffic”)
Claim(s) 6 is rejected under 35 U.S.C. 103 as being unpatentable over Ran et al. (US 20190096238 A1; hereinafter known as Ran) in view of Altintas et al. (US 20200019445 A1, hereinafter known as Altintas) and Zhao et al. (US 20220026224 A1; hereinafter known as Zhao).
Regarding Claim 6, Ran in view of Altintas teaches The FAS of claim 1. Ran further teaches wherein said CAV system comprises a sensing module;
{Para [0024] “In some embodiments, the vehicle OBU contains one or more of the following modules: (1) a communication module, (2) a data collection module and (3) a vehicle control module. Other modules may also be included.”
Para [0036] “In some embodiments, a data collection module collects data from vehicle installed external and internal sensors and monitors vehicle and human status, including but not limited to one or more of: [0037] a. Vehicle engine status; [0038] b. Vehicle speed; [0039] c. Surrounding objects detected by vehicles; and [0040] d. Human conditions.”
}
Ran in view of Altintas does not teach wherein said CAV system comprises a sensing module; a decision-making module; a control module; and a communication module.
However, Zhao teaches wherein said CAV system comprises a sensing module; a decision-making module;
{para [0032] “Although not illustrated in FIG. 1, the in-vehicle control system 150 and/or the vehicle subsystems 140 may include a navigation subsystem 300 (e.g., as shown in FIG. 3) configured to provide navigation instructions to the plurality of vehicle subsystems 104. Further details regarding the subsystem 300 are provided below.”
Para [0074] “At block 430, the decision module 304 can generate lane level navigation information based on the possible routes 208, 210 received from the navigation module 302. For example, the decision module 304 can select the route having the lowest difficulty value for navigation. The decision module 304 may also use additional route information based on data from vehicle sensors in selecting one of the possible routes 208, 210 for navigation. Example vehicle sensor data which can be incorporated into the selection of one of the possible routes 208, 210 includes data indicative of the real-time conditions of one or more of the lane segments of the roadway. For example, the real-time conditions can include traffic data (e.g., traffic speed, pedestrian traffic, etc.), route data (e.g., lane closure, construction, etc.), obstacle data (e.g., a fallen tree, objects fallen off of other vehicles in the road, etc.), etc. In one example, the cameras of the vehicle sensor subsystem 144 can identify stopped traffic in an upcoming lane segment of one or more of the possible routes 208, 210. The decision module 304 can use this data to supplement the difficulty values for each of the possible routes 208, 210 received from the navigation module 302 to potentially select a different one of the possible routes 208, 210 to avoid the stopped traffic.”
As can be seen in Fig. 3, the decision module is part of the navigation module. The navigation module is part of the vehicle, as discussed in para [0032].
Fig. 1, label 144, shows the vehicle sensor subsystem.
}
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ran in view of Altintas to incorporate the teachings of Zhao such that the decision module is located in the CAV, because doing so allows routing to be conducted more effectively (Zhao, para [0074], quoted above).
Claim(s) 13 is rejected under 35 U.S.C. 103 as being unpatentable over Ran et al. (US 20190096238 A1; hereinafter known as Ran) in view of Altintas et al. (US 20200019445 A1, hereinafter known as Altintas) and Milton (US 20200017117 A1).
Regarding Claim 13, Ran in view of Altintas teaches The FAS of claim 2.
Ran in view of Altintas does not teach wherein said computing module is configured to calibrate said CAVH system using the sensing data.
However, Milton teaches wherein said computing module is configured to calibrate said CAVH system using the sensing data.
{para [0129] “Some embodiments may perform a top-view layer machine-learning operation by training one or more of the machine-learning systems described above or by using a trained version of one or more of the machine-learning systems described above. For example, an agent executing on the top-view computing layer may perform a training operation by training a machine-learning system using a first set of local computing layer results from a first data center, a second set of local computing layer results from a second data center and a set of geolocations as inputs. In this example, the first set of local computing layer may be based on data from a first vehicle and a second vehicle, the second set of local computing layer results may be based on data from a third vehicle, and the set of geolocations may include the geolocations corresponding to each of the first vehicle, second vehicle, and third vehicle. A top-view computing neural network may then be trained to determine a top-view control-system adjustment value based on at least one of the local computing layer results, vehicle computing layer results, sensor data from a plurality of vehicles, operator profiles, and roadside sensor data.”
Para [0056] “results from the local computing layer computed using the local computing data center 122 may be further sent to a cloud computing application 140 executing on a top-view computing layer. The top-view computing layer may include or have access to one or more processors that execute a top-view computing agent 148, wherein the top-view computing agent 148 may be a top-view computing application that act as an overriding agent by communicating directly with vehicle agents and local computing agents, without intermediary communications between agents on other layers. The top-view computing agent 148 may perform any of the algorithms or tasks described above. In addition, the top-view computing agent 148 may perform region-wide analysis based on the data received from various data centers in a local computing layer. For example, the top-view computing agent 148 may predict outcome probabilities for entire populations of vehicles or vehicles operators across a region being serviced by numerous data centers operating simultaneously in the region. In some embodiments, the top-view computing agent 148 may determine risk values for regions, vehicle categories, or operator profiles. These risk values can then be correlated to specific incidents or vehicle designs to detect infrastructure optimization or vehicle design optimization.”
Training using sensor data can be considered calibrating using sensor data, as it increases the accuracy of the system.
}
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ran in view of Altintas to incorporate the teachings of Milton to calibrate the CAVH system with sensor data because it allows for increased functionality (Milton, para [0018] “The resulting suite of trained machine-learning models may be used to various ends. In some cases, vehicle manufacturers or fleet operators may configure various adjustable attributes of vehicles to make the vehicles more suitable for a given operator, place, time, or combination thereof. In some cases, governmental agencies may take responsive action to re-configure roadways and other infrastructure responsive to outputs of the machine-learning models, particularly at higher levels of the hierarchy. In some cases, fleet operators (e.g., in trucking, ride-share platforms, or delivery services) may adjust routing of configuration of fleet over geographic areas responsive to outputs of the trained models. In some embodiments, geographic information systems may be enriched to encode attributes of road segments, intersections, and other places of interest inferred by the machine-learning models. In some embodiments, parts makers, like tier 1 parts makers, may adjust the configuration or design of parts based on outputs of the machine-learning models. In some cases, insurance companies and road-side service providers may customize offering for consumers and fleet operators based on outputs of the trained models.”).
Allowable Subject Matter
Claims 9-11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER MATTA whose telephone number is (571)272-4296. The examiner can normally be reached Mon - Fri 10:00-6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James Lee can be reached at (571) 270-5965. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.G.M./Examiner, Art Unit 3668
/JAMES J LEE/Supervisory Patent Examiner, Art Unit 3668