DETAILED ACTION
Notice of AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim 41
“means for obtaining a desired destination …”
“means for obtaining operational design domain information…”
“means for generating routing information…”
Corresponding structure in Specification: [00126]-[00128]: “the processor 610”
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-14, 24-27, and 31-42 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
In January 2019 (updated October 2019), the USPTO released new examination guidelines setting forth a two-step inquiry for determining whether a claim is directed to non-statutory subject matter. According to the guidelines, a claim is directed to non-statutory subject matter if:
STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
STEP 2: the claim recites a judicial exception, e.g., an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, it is clear that claim 1 is directed toward non-statutory subject matter, as shown below:
STEP 1: Does the claim fall within one of the statutory categories? Yes. The claim is directed toward a Process, which falls within one of the statutory categories.
STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea? Yes, the claim is directed to an abstract idea.
With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:
Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations;
Example: iv. organizing information and manipulating information through mathematical correlations, Digitech Image Techs., LLC v. Electronics for Imaging, Inc., 758 F.3d 1344, 1350, 111 USPQ2d 1717, 1721 (Fed. Cir. 2014). The patentee in Digitech claimed methods of generating first and second data by taking existing information, manipulating the data using mathematical functions, and organizing this information into a new form. The court explained that such claims were directed to an abstract idea because they described a process of organizing information through mathematical correlations, like Flook's method of calculating using a mathematical formula. 758 F.3d at 1350, 111 USPQ2d at 1721.
Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
Mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).
See claim language below:
A method for generating routing information for a vehicle configured with an advanced driver assistance system, the method comprising:
obtaining a desired destination;
obtaining operational design domain information based at least in part on a geographic area comprising a present location and the desired destination; and
generating routing information based at least in part on collaboration information for one or more driving assistance functions associated with the operational design domain information, the collaboration information comprising indications of physical actions performed by vehicle operators when the one or more driving assistance functions are activated.
The Process in claim 1, specifically the limitations identified above, is a mental process that can be practicably performed in the human mind with the aid of pencil and paper and is, therefore, an abstract idea. It merely consists of generating routing information. This is equivalent to mentally generating, having received a route destination and information regarding which local road segments support autonomous driving, a route to the destination that takes into account past behavior of the driver during autonomous driving. If the driver has been attentive during autonomous driving in the past, the route could maximize the number of included road segments that support autonomous driving; if the driver has been inattentive during autonomous driving in the past, the route could instead choose road segments that result in the fastest route, regardless of support for autonomous driving.
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? No, the claim does not recite additional elements that integrate the judicial exception into a practical application.
With regard to STEP 2A (prong 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:
an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element that applies or uses a judicial exception to affect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:
an additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
an additional element adds insignificant extra-solution activity to the judicial exception; and
an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.
Claim 1 does not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application. The steps of “obtaining a desired destination…” and “obtaining operational design domain information…” are recited at a high level of generality and amount to mere data gathering, which is a form of insignificant extra-solution activity.
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No, the claim does not recite additional elements that amount to significantly more than the judicial exception.
With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, examiners should continue to consider whether an additional element or combination of elements:
adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.
Claim 1 does not recite any specific limitation or combination of limitations that are not well-understood, routine, conventional (WURC) activity in the field.
Mere data communication steps that can be performed entirely on one or more generic computers have also been previously identified by the courts as an abstract idea (i.e., a judicial exception): (A) receiving and/or transmitting data is considered to be well-understood, routine, or conventional at least as evidenced by MPEP § 2106.05(d)(II)(i), "Receiving or transmitting data over a network," and (iv), "Storing and retrieving information in memory"; and (B) comparing the received data to other data is considered to be well-understood, routine, or conventional at least as evidenced by MPEP § 2106.05(d)(II)(ii), "Performing repetitive calculations."
CONCLUSION
Thus, since claim 1 (a) is directed toward an abstract idea, (b) does not recite additional elements that integrate the judicial exception into a practical application, and (c) does not recite additional elements that amount to significantly more than the judicial exception, claim 1 is directed to non-statutory subject matter.
Additionally, Claims 2-14, 24-27, and 31-42:
fall within one of the statutory categories (Claims 2-14: Process; Claims 24-27 and 41-42: Machine),
are directed toward an abstract idea (Mental Process),
do not recite additional elements that integrate the judicial exception into a practical application, and
do not recite additional elements that amount to significantly more than the judicial exception.
Therefore, Claims 2-14, 24-27, and 31-42 are directed to non-statutory subject matter.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 4-5, 7-14, 24-27, 32-33, and 35-42 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Moustafa et al. (US 20220126864 A1).
Regarding Claim 1, Moustafa teaches A method for generating routing information for a vehicle (see at least FIG. 1: vehicle 105; [0166]: “Vehicles … may be provided with varying levels of autonomous driving capabilities”) configured with an advanced driver assistance system (see at least FIG. 2: autonomous driving system 210), the method comprising:
obtaining a desired destination (see at least [0920], FIG. 148: point B);
obtaining operational design domain information (see at least [0919]: “The handoff forecast (HOF) module 14725”; [0919]: “The HOF module can consider road conditions, such as, for example, accidents, overcrowded roads, public events, pedestrians, construction, etc. to determine where and when a handoff from an autonomous driver to a human driver may be needed.”) based at least in part on a geographic area (see at least [0919]: “The HOF module can receive, e.g., local map and route information”) comprising a present location (see at least FIG. 148: Point A) and the desired destination (see at least [0924]: “the HOF module 14725 may determine the handoff locations along the route”); and
generating routing information (see at least [0935]: “If the driver is not ready to take over when prompted, the HOH module 14730 can assess whether there are alternatives to a handoff. This can include, for example, taking an alternate route, …, etc. If there are alternatives, then an alternative can be chosen.”) based at least in part on collaboration information (see at least [0520]: “prior to handing off a vehicle to a human driver, the state of the driver (e.g., fatigue level, level of alertness, emotional condition, or other state) is analyzed to improve safety of the handoff process. Handing off control suddenly to a person who is not ready could prove to be more dangerous than not handing off at all”) for one or more driving assistance functions (see at least [1975]: “autonomous driving”; “determining that autonomous control of the vehicle should cease comprises predicting, using a particular machine learning model, that conditions on an upcoming section of the path plan presents difficulties to autonomous driving during the upcoming section.”) associated with the operational design domain information, the collaboration information comprising indications of physical actions (see at least [0523]: “the one or more cameras or computing systems coupled to the cameras may implement AI algorithms to detect face, eyebrow, or eye movements and extract features to track a level of fatigue and alertness indicated by the detected features.”) performed by vehicle operators when (see at least [1921]: “determining at least one handoff location of an autonomous vehicle to a driver on a route; … receiving information pertaining to a current state of attention of the driver; and determining the expected driver behavior during each of the at least one handoff locations”) the one or more driving assistance functions are activated.
Regarding Claim 2, Moustafa teaches The method of claim 1, wherein the operational design domain information is comprised in map information (see at least [0919]: “The HOF module can receive, e.g., local map and route information with real-time traffic, accident, hazard, and road maintenance updates.”) received from a network resource (see at least [0251]: “cloud-based knowledge reflecting troublesome segments of road may be communicated to … in-vehicle road maps to indicate the trouble segments to drivers and other autonomous vehicles”).
Regarding Claim 4, Moustafa teaches The method of claim 1, wherein at least one of the indications of physical actions performed by vehicle operators is a duration of time required by a vehicle operator to react (see at least [0926]: “To collect the information, the EAO Module 14735 can use the following example criteria at each handoff event along the route: how long it took the driver to respond to a hand off request”) to an alert generated by a vehicle safety system.
Regarding Claim 5, Moustafa teaches The method of claim 1, wherein at least one of the indications of physical actions performed by vehicle operators is a duration of time required by a vehicle operator to react to a silent failure (see at least [0212]: “recommendation system may be utilized to generate alerts for presentation on the vehicle's … graphic displays, such as to … prepare one or more passengers for a handover or pullover event”; [0519]: “autonomous vehicles may be subject to equipment failure … necessitating … pullover of the vehicle.”).
Regarding Claim 7, Moustafa teaches The method of claim 1, wherein at least one of the indications of physical actions performed by vehicle operators is a head posture (see at least [0202]: “sensors positioned within the vehicle may also contribute to the sense phase 605 of the pipeline provide information such as biometrics of the passengers (e.g., … posture”; [0528]: “In particular embodiments, activity labels may be derived from the sensor data by an activity classification model. For example, the model may detect whether the driver is … feeling sick (e.g., … driver shown in image data with head bent down)”) of a vehicle operator.
Regarding Claim 8, Moustafa teaches The method of claim 1, wherein at least one of the indications of physical actions performed by vehicle operators is a level of interaction (see at least [0524]: “sensor data 6804 may include or be based on pressure data collected from tactile or haptic sensors on the steering wheel, accelerator …. In some embodiments, a computing system coupled to such tactile or haptic sensors may implement AI algorithms to analyze such pressure data to track the level of alertness or other physical state of the driver.”) a vehicle operator performs with one or more vehicle systems.
Regarding Claim 9, Moustafa teaches The method of claim 1, wherein the operational design domain information is based at least in part on a road classification (see at least [0982]: “The driver may even prioritize routes where higher levels of autonomy aren't needed, like highway driving (that can be achieved with minimal set of sensors.)”) for at least a portion of the geographic area.
Regarding Claim 10, Moustafa teaches The method of claim 1, wherein the operational design domain information is based at least in part on road design factors (see at least [0920]: “HOF module 14725 may consider road conditions such as … road construction sites to determine where a handoff to the human driver may be needed.”) for at least a portion of the geographic area.
Regarding Claim 11, Moustafa teaches The method of claim 1, wherein generating the routing information comprises receiving the collaboration information (see at least [0913]: “The generic occupant capability (“GOC”) database 14720 can include data related to statistical information of the characteristic of a generic driver similar to the actual driver of the autonomous vehicle.”; [0914]: “Examples of the types of data in the GOC database can include the amount of time it takes for a characteristic driver (e.g., a person having similar characteristics, e.g., age, gender, etc. as the driver) to: respond to a prompt”) from a network resource (see at least [0913]: “the GOC database 14720 can be external to the vehicle and made available to the autonomous vehicle over the cloud.”).
Regarding Claim 12, Moustafa teaches The method of claim 11, wherein the network resource is a network server configured to communicate with a cellular network (see at least [0236]: “An autonomous vehicle system may support communication with a variety of different devices and services through a variety of different communication technologies (e.g., … cellular data, etc.) and may further base offload determinations on the detected communication channel technologies available within an environment and the potential data offload or sharing partners (e.g., connecting to … an edge computer server or cloud service through 5G, etc.).”).
Regarding Claim 13, Moustafa teaches The method of claim 1, wherein the one or more driving assistance functions comprise (see at least [0594]-[0595]: “the car is in L2 mode. If the car once again needs to lower its autonomy level, this time to L1, the driver will need to take over. Therefore, the vehicle may send out a takeover signal”) Keep distance (KD), Speed Keep Assist (SKA), Lane Keep Assist (LKA), Stop at stop sign (SaSS), Stop and go at traffic light (SGTL), Adapt speed and trajectory to road geometry (ASTRG), Lane Change Assist (LCA), Change lane (CL), Hands-free driving option (HFO) (see at least [0198]: “L2 vehicles (e.g., 415) provide driver assistance functionality, which allow the driver to occasionally disengage from physically operating the vehicle, such that both the hands and feet of the driver may disengage periodically from the physical controls of the vehicle.”), Give right of way (GROW), Stop and give right of way (SGROW), Emergency change lane (ECL), Keep lane (KL), and Keep speed (KS), or combinations thereof.
Regarding Claim 14, Moustafa teaches The method of claim 1, further comprising:
obtaining physical action information for an operator of the vehicle with one or more operator monitoring sensors (see at least [0541]: “an example framework may consider the different situations under which it is safer … a human driver to take control of the vehicle …The autonomous vehicle may be equipped with cameras or other internal sensors (e.g., microphones) that may be used to sense the awareness state of the driver (e.g., determine whether the driver is distracted by a phone call, or feeling sleepy/drowsy)”) in the vehicle; and
providing, to a network resource, the physical action information (see at least [0249]: “the vehicle may provide data to … cloud-based systems … describing the conditions which precipitated the handover request (e.g., 1610).”), an indication of at least one driver assistance function (see at least [0978]: “Examples of inputs that can affect the “L” score can include: … the user experience 15140”; [0581]: “in situations where there is a necessary autonomy level change…, a complete record of the level change and data relating to the vehicles movements, planning, autonomy level, etc. can be sent to and stored by the surveillance system 7810.”; FIG. 78: remote surveillance center 7810) that is active at a time the physical action information is obtained, and location (see at least [0693]: “datasets of vehicle data collected by the vehicle are provided to cloud vehicle data system 10120”; “data collected from the vehicle may be formed into datasets, tagged, and provided”; “although techniques exist to enable geographic (geo) tagging in the cloud, it is often performed by a vehicle because image capturing devices may contain global positioning systems and provide real-time information related to the location of subjects.”) information at the time the physical action information is obtained.
Regarding Claim 24, Moustafa teaches An apparatus (see at least FIG. 2: vehicle 105), comprising:
at least one memory (see at least FIG. 2: memory 206);
at least one transceiver (see at least FIG. 2: comm modules 212);
at least one processor (see at least FIG. 2: processors 202) communicatively coupled to the at least one memory (see at least [0318]: “the processor 2803 may be configured to execute or interpret software, scripts, programs, functions, executables, or other instructions stored in the memory 2804.”) and the at least one transceiver (see at least [0173]: “These various processors 202, accelerators 204, memory devices 206, and network communication modules 212, may be interconnected”), and configured to:
obtain a desired destination (see at least [0920], FIG. 148: point B);
obtain operational design domain information (see at least [0919]: “The handoff forecast (HOF) module 14725”; [0919]: “The HOF module can consider road conditions, such as, for example, accidents, overcrowded roads, public events, pedestrians, construction, etc. to determine where and when a handoff from an autonomous driver to a human driver may be needed.”) based at least in part on a geographic area (see at least [0919]: “The HOF module can receive, e.g., local map and route information”) comprising a present location (see at least FIG. 148: Point A) and the desired destination (see at least [0924]: “the HOF module 14725 may determine the handoff locations along the route”); and
generate routing information (see at least [0935]: “If the driver is not ready to take over when prompted, the HOH module 14730 can assess whether there are alternatives to a handoff. This can include, for example, taking an alternate route, …, etc. If there are alternatives, then an alternative can be chosen.”) based at least in part on collaboration information (see at least [0520]: “prior to handing off a vehicle to a human driver, the state of the driver (e.g., fatigue level, level of alertness, emotional condition, or other state) is analyzed to improve safety of the handoff process. Handing off control suddenly to a person who is not ready could prove to be more dangerous than not handing off at all”) for one or more driving assistance functions (see at least [1975]: “autonomous driving”; “determining that autonomous control of the vehicle should cease comprises predicting, using a particular machine learning model, that conditions on an upcoming section of the path plan presents difficulties to autonomous driving during the upcoming section.”) associated with the operational design domain information, the collaboration information comprising indications of physical actions (see at least [0523]: “the one or more cameras or computing systems coupled to the cameras may implement AI algorithms to detect face, eyebrow, or eye movements and extract features to track a level of fatigue and alertness indicated by the detected features.”) performed by vehicle operators (see at least [1921]: “determining at least one handoff location of an autonomous vehicle to a driver on a route; … receiving information pertaining to a current state of attention of the driver; and determining the expected driver behavior during each of the at least one handoff locations”) when the one or more driving assistance functions are activated.
Regarding Claim 25, Moustafa teaches The apparatus of claim 24, wherein the at least one processor is further configured to obtain map information (see at least [0919]: “The HOF module can receive, e.g., local map and route information with real-time traffic, accident, hazard, and road maintenance updates.”) comprising the operational design domain information.
Regarding Claim 26, Moustafa teaches The apparatus of claim 24, wherein the at least one processor is further configured to receive the collaboration information (see at least [0913]: “The generic occupant capability (“GOC”) database 14720 can include data related to statistical information of the characteristic of a generic driver similar to the actual driver of the autonomous vehicle.”; [0914]: “Examples of the types of data in the GOC database can include the amount of time it takes for a characteristic driver (e.g., a person having similar characteristics, e.g., age, gender, etc. as the driver) to: respond to a prompt”) from a network resource (see at least [0913]: “the GOC database 14720 can be external to the vehicle and made available to the autonomous vehicle over the cloud.”).
Regarding Claim 27, Moustafa teaches The apparatus of claim 24,
wherein the at least one processor is further configured to:
obtain physical action information for an operator of the vehicle with one or more operator monitoring sensors (see at least [0541]: “an example framework may consider the different situations under which it is safer … a human driver to take control of the vehicle …The autonomous vehicle may be equipped with cameras or other internal sensors (e.g., microphones) that may be used to sense the awareness state of the driver (e.g., determine whether the driver is distracted by a phone call, or feeling sleepy/drowsy)”) in the vehicle; and
provide the physical action information (see at least [0249]: “the vehicle may provide data to … cloud-based systems … describing the conditions which precipitated the handover request (e.g., 1610).”), an indication of at least one driver assistance function (see at least [0978]: “Examples of inputs that can affect the “L” score can include: … the user experience 15140”; [0581]: “in situations where there is a necessary autonomy level change…, a complete record of the level change and data relating to the vehicles movements, planning, autonomy level, etc. can be sent to and stored by the surveillance system 7810.”; FIG. 78: remote surveillance center 7810), and location (see at least [0693]: “datasets of vehicle data collected by the vehicle are provided to cloud vehicle data system 10120”; “data collected from the vehicle may be formed into datasets, tagged, and provided”; “although techniques exist to enable geographic (geo) tagging in the cloud, it is often performed by a vehicle because image capturing devices may contain global positioning systems and provide real-time information related to the location of subjects.”) information to a network resource.
Regarding Claim 32, Moustafa teaches The apparatus of claim 24, wherein at least one of the indications of physical actions performed by vehicle operators is a duration of time required by a vehicle operator to react (see at least [0926]: “To collect the information, the EAO Module 14735 can use the following example criteria at each handoff event along the route: how long it took the driver to respond to a hand off request”) to an alert generated by a vehicle safety system.
Regarding Claim 33, Moustafa teaches The apparatus of claim 24, wherein at least one of the indications of physical actions performed by vehicle operators is a duration of time required by a vehicle operator to react to a silent failure (see at least [0212]: “recommendation system may be utilized to generate alerts for presentation on the vehicle's … graphic displays, such as to … prepare one or more passengers for a handover or pullover event”; [0519]: “autonomous vehicles may be subject to equipment failure … necessitating … pullover of the vehicle.”).
Regarding Claim 35, Moustafa teaches The apparatus of claim 24, wherein at least one of the indications of physical actions performed by vehicle operators is a head posture (see at least [0202]: “sensors positioned within the vehicle may also contribute to the sense phase 605 of the pipeline provide information such as biometrics of the passengers (e.g., … posture”; [0528]: “In particular embodiments, activity labels may be derived from the sensor data by an activity classification model. For example, the model may detect whether the driver is … feeling sick (e.g., … driver shown in image data with head bent down)”) of a vehicle operator.
Regarding Claim 36, Moustafa teaches The apparatus of claim 24, wherein at least one of the indications of physical actions performed by vehicle operators is a level of interaction (see at least [0524]: “sensor data 6804 may include or be based on pressure data collected from tactile or haptic sensors on the steering wheel, accelerator …. In some embodiments, a computing system coupled to such tactile or haptic sensors may implement AI algorithms to analyze such pressure data to track the level of alertness or other physical state of the driver.”) a vehicle operator performs with one or more vehicle systems.
Regarding Claim 37, Moustafa teaches The apparatus of claim 24,
wherein the operational design domain information is based at least in part on a road classification (see at least [0982]: “The driver may even prioritize routes where higher levels of autonomy aren't needed, like highway driving (that can be achieved with minimal set of sensors.)”) for at least a portion of the geographic area.
Regarding Claim 38, Moustafa teaches The apparatus of claim 24, wherein the operational design domain information is based at least in part on road design factors (see at least [0920]: “HOF module 14725 may consider road conditions such as … road construction sites to determine where a handoff to the human driver may be needed.”) for at least a portion of the geographic area.
Regarding Claim 39, Moustafa teaches The apparatus of claim 24, wherein the one or more driving assistance functions include (see at least [0594]-[0595]: “the car is in L2 mode. If the car once again needs to lower its autonomy level, this time to L1, the driver will need to take over. Therefore, the vehicle may send out a takeover signal”) Keep distance (KD), Speed Keep Assist (SKA), Lane Keep Assist (LKA), Stop at stop sign (SaSS), Stop and go at traffic light (SGTL), Adapt speed and trajectory to road geometry (ASTRG), Lane Change Assist (LCA), Change lane (CL), Hands-free driving option (HFO) (see at least [0198]: “L2 vehicles (e.g., 415) provide driver assistance functionality, which allow the driver to occasionally disengage from physically operating the vehicle, such that both the hands and feet of the driver may disengage periodically from the physical controls of the vehicle.”), Give right of way (GROW), Stop and give right of way (SGROW), Emergency change lane (ECL), Keep lane (KL), and Keep speed (KS), or combinations thereof.
Regarding Claim 40, Moustafa teaches The apparatus of claim 26, wherein the network resource is a network server configured to communicate with a cellular network (see at least [0236]: “An autonomous vehicle system may support communication with a variety of different devices and services through a variety of different communication technologies (e.g., … cellular data, etc.) and may further base offload determinations on the detected communication channel technologies available within an environment and the potential data offload or sharing partners (e.g., connecting to … an edge computer server or cloud service through 5G, etc.).”).
Regarding Claim 41, Moustafa teaches An apparatus for generating routing information for a vehicle (see at least FIG. 1: vehicle 105; [0166]: “Vehicles … may be provided with varying levels of autonomous driving capabilities”) configured with an advanced driver assistance system (see at least FIG. 2: autonomous driving system 210), comprising:
means for obtaining a desired destination (see at least [0920], FIG. 148: point B);
means for obtaining operational design domain information (see at least [0919]: “The handoff forecast (HOF) module 14725”; [0919]: “The HOF module can consider road conditions, such as, for example, accidents, overcrowded roads, public events, pedestrians, construction, etc. to determine where and when a handoff from an autonomous driver to a human driver may be needed.”) based at least in part on a geographic area (see at least [0919]: “The HOF module can receive, e.g., local map and route information”) comprising a present location (see at least FIG. 148: Point A) and the desired destination (see at least [0924]: “the HOF module 14725 may determine the handoff locations along the route”); and
means for generating routing information (see at least [0935]: “If the driver is not ready to take over when prompted, the HOH module 14730 can assess whether there are alternatives to a handoff. This can include, for example, taking an alternate route, …, etc. If there are alternatives, then an alternative can be chosen.”) based at least in part on collaboration information (see at least [0520]: “prior to handing off a vehicle to a human driver, the state of the driver (e.g., fatigue level, level of alertness, emotional condition, or other state) is analyzed to improve safety of the handoff process. Handing off control suddenly to a person who is not ready could prove to be more dangerous than not handing off at all”) for one or more driving assistance functions (see at least [1975]: “autonomous driving”; “determining that autonomous control of the vehicle should cease comprises predicting, using a particular machine learning model, that conditions on an upcoming section of the path plan presents difficulties to autonomous driving during the upcoming section.”) associated with the operational design domain information, the collaboration information comprising indications of physical actions (see at least [0523]: “the one or more cameras or computing systems coupled to the cameras may implement AI algorithms to detect face, eyebrow, or eye movements and extract features to track a level of fatigue and alertness indicated by the detected features.”) performed by vehicle operators when (see at least [1921]: “determining at least one handoff location of an autonomous vehicle to a driver on a route; … receiving information pertaining to a current state of attention of the driver; and determining the expected driver behavior during each of the at least one handoff locations”) the one or more driving assistance functions are activated.
Regarding Claim 42, Moustafa teaches A non-transitory processor-readable storage medium (see at least FIG. 2: memory 206) comprising processor-readable instructions configured to cause one or more processors (see at least FIG. 2: processors 202) to (see at least [0171]: “the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller or processor to perform predetermined operations”) generate routing information for a vehicle (see at least FIG. 2: vehicle 105) configured with an advanced driver assistance system (see at least FIG. 2: autonomous driving system 210), comprising:
code for obtaining a desired destination (see at least [0920], FIG. 148: point B);
code for obtaining operational design domain information (see at least [0919]: “The handoff forecast (HOF) module 14725”; [0919]: “The HOF module can consider road conditions, such as, for example, accidents, overcrowded roads, public events, pedestrians, construction, etc. to determine where and when a handoff from an autonomous driver to a human driver may be needed.”) based at least in part on a geographic area (see at least [0919]: “The HOF module can receive, e.g., local map and route information”) comprising a present location (see at least FIG. 148: Point A) and the desired destination (see at least [0924]: “the HOF module 14725 may determine the handoff locations along the route”); and
code for generating routing information (see at least [0935]: “If the driver is not ready to take over when prompted, the HOH module 14730 can assess whether there are alternatives to a handoff. This can include, for example, taking an alternate route, …, etc. If there are alternatives, then an alternative can be chosen.”) based at least in part on collaboration information (see at least [0520]: “prior to handing off a vehicle to a human driver, the state of the driver (e.g., fatigue level, level of alertness, emotional condition, or other state) is analyzed to improve safety of the handoff process. Handing off control suddenly to a person who is not ready could prove to be more dangerous than not handing off at all”) for one or more driving assistance functions (see at least [1975]: “autonomous driving”; “determining that autonomous control of the vehicle should cease comprises predicting, using a particular machine learning model, that conditions on an upcoming section of the path plan presents difficulties to autonomous driving during the upcoming section.”) associated with the operational design domain information, the collaboration information comprising indications of physical actions (see at least [0523]: “the one or more cameras or computing systems coupled to the cameras may implement AI algorithms to detect face, eyebrow, or eye movements and extract features to track a level of fatigue and alertness indicated by the detected features.”) performed by vehicle operators when (see at least [1921]: “determining at least one handoff location of an autonomous vehicle to a driver on a route; … receiving information pertaining to a current state of attention of the driver; and determining the expected driver behavior during each of the at least one handoff locations”) the one or more driving assistance functions are activated.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 3 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Moustafa et al. (US 20220126864 A1) in view of Telpaz et al. (US 11587461 B2).
Regarding claim 3, Moustafa teaches The method of claim 1, wherein at least one of the indications of physical actions performed by vehicle operators is an eye gaze (see at least [0592]: “the vehicle can confirm driver engagement though the use of certain sensors and monitoring. For example, the vehicle can use gaze monitoring”) of a vehicle operator that is directed.
However, Moustafa does not explicitly teach a duration of time for which the eye gaze is directed to an area other than a road the vehicle is traveling on.
Telpaz teaches wherein at least one of the indications of physical actions performed by vehicle operators is a duration of time (see at least (19) column 6 line 67-column 7 line 2: “The gaze pattern detection, at block 220 (FIG. 2), may be used to determine times when the driver is glancing on-road and off-road.”) for which an eye gaze of a vehicle operator is directed to an area other than a road the vehicle is traveling on.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Moustafa to incorporate the teachings of Telpaz to monitor the duration of a misdirected eye gaze. Doing so would appropriately adjust the time a user is permitted to direct their gaze away from the road during semi-autonomous driving, as recognized by Telpaz in (2) column 1 lines 8-21 and column 3 lines 37-48.
Regarding claim 31, Moustafa teaches The apparatus of claim 24, wherein at least one of the indications of physical actions performed by vehicle operators is an eye gaze (see at least [0592]: “the vehicle can confirm driver engagement though the use of certain sensors and monitoring. For example, the vehicle can use gaze monitoring”) of a vehicle operator that is directed.
However, Moustafa does not explicitly teach a duration of time for which the eye gaze is directed to an area other than a road the vehicle is traveling on.
Telpaz teaches wherein at least one of the indications of physical actions performed by vehicle operators is a duration of time (see at least (19) column 6 line 67-column 7 line 2: “The gaze pattern detection, at block 220 (FIG. 2), may be used to determine times when the driver is glancing on-road and off-road.”) for which an eye gaze of a vehicle operator is directed to an area other than a road the vehicle is traveling on.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Moustafa to incorporate the teachings of Telpaz to monitor the duration of a misdirected eye gaze. Doing so would appropriately adjust the time a user is permitted to direct their gaze away from the road during semi-autonomous driving, as recognized by Telpaz in (2) column 1 lines 8-21 and column 3 lines 37-48.
Claims 6 and 34 are rejected under 35 U.S.C. 103 as being unpatentable over Moustafa et al. (US 20220126864 A1) in view of Peeyush et al. (US 20250056188 A1).
Regarding claim 6, Moustafa teaches The method of claim 1, wherein at least one of the indications of physical actions performed by vehicle operators is a vehicle operator being distracted by a phone (see at least [0541]: “The autonomous vehicle may be equipped with cameras or other internal sensors (e.g., microphones) that may be used to sense the awareness state of the driver (e.g., determine whether the driver is distracted by a phone call”).
However, Moustafa does not explicitly teach a duration of time.
Peeyush teaches a duration of time (see at least [0047]: “When the user 112 interacts with the phone sensor array 114 for periods of time 107 exceeding eight seconds, the driver distraction artificial intelligence 115 can predict a minor distraction”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Moustafa to incorporate the teachings of Peeyush to monitor the amount of time a user is distracted by a phone. Doing so would “improv[e] safety”, as recognized by Peeyush in paragraph [0042].
Regarding claim 34, Moustafa teaches The apparatus of claim 24, wherein at least one of the indications of physical actions performed by vehicle operators is a vehicle operator being distracted by a phone (see at least [0541]: “The autonomous vehicle may be equipped with cameras or other internal sensors (e.g., microphones) that may be used to sense the awareness state of the driver (e.g., determine whether the driver is distracted by a phone call”).
However, Moustafa does not explicitly teach a duration of time.
Peeyush teaches a duration of time (see at least [0047]: “When the user 112 interacts with the phone sensor array 114 for periods of time 107 exceeding eight seconds, the driver distraction artificial intelligence 115 can predict a minor distraction”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Moustafa to incorporate the teachings of Peeyush to monitor the amount of time a user is distracted by a phone. Doing so would “improv[e] safety”, as recognized by Peeyush in paragraph [0042].
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Mehrotra et al. (US 20240294180 A1) teaches a system that tracks gaze cues and gaze duration to indicate a driver's trust level during autonomous driving (see paragraphs [0091]-[0092]).
Safour et al. (US 20200290646 A1) teaches an autonomous driving system that confirms driver steering wheel contact and eye gaze after receiving a driver takeover request (see paragraphs [0077] and [0079]).
Kajima (US 20170368936 A1) teaches an autonomous vehicle system that adjusts timing of a take-over process based on the level of fatigue of a driver (see paragraph [0158]).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEORGE ALCORN whose telephone number is (571) 270-3763. The examiner can normally be reached M-F, 9:30 am – 6:30 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jelani Smith, can be reached at (571) 270-3415. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GEORGE A ALCORN III/Examiner, Art Unit 3662
/JELANI A SMITH/Supervisory Patent Examiner, Art Unit 3662