Prosecution Insights
Last updated: April 19, 2026
Application No. 18/069,840

METHOD AND APPARATUS FOR SHARING DRIVING INTENT BETWEEN AUTONOMOUS AND MANUALLY DRIVEN VEHICLES

Non-Final Office Action (§101, §103)

Filed: Dec 21, 2022
Examiner: KUNTZ, JEWEL A
Art Unit: 3666
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Industry-Academic Cooperation Foundation Korea National University Of Transportation
OA Round: 3 (Non-Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Estimated Time to Grant: 2y 12m
Grant Probability With Interview: 80%

Examiner Intelligence

Career Allow Rate: 72%, above average (49 granted / 68 resolved; +20.1% vs TC avg)
Interview Lift: +7.9% (moderate, roughly +8% lift in grant rate among resolved cases with interview)
Typical Timeline: 2y 12m average prosecution; 35 applications currently pending
Career History: 103 total applications across all art units
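As a sanity check, the headline figures above are internally consistent. A minimal sketch, using only the numbers shown on this page (no other data assumed):

```python
# Career allow rate from the examiner's resolved docket (figures above).
granted, resolved = 49, 68
allow_rate = round(100 * granted / resolved)   # 49/68 rounds to 72 (%)

# Interview lift on this application: 80% predicted grant probability
# with an interview vs. the 72% baseline shown above.
baseline, with_interview = 72, 80
lift = with_interview - baseline               # 8 percentage points
```

The +7.9% career interview lift and the +8-point lift predicted for this application are separate figures that happen to align; the sketch above only reproduces the arithmetic, not the underlying model.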

Statute-Specific Performance

§101: 29.0% (-11.0% vs TC avg)
§103: 52.0% (+12.0% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)

Deltas are relative to Tech Center average estimates. Based on career data from 68 resolved cases.
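Since each delta is the examiner's rate minus the Tech Center average, the implied TC average can be recovered by subtraction. A sketch over the figures shown (illustrative only; the dashboard's estimation method is not known here):

```python
# (examiner allowance rate %, delta vs Tech Center average) per statute,
# taken directly from the table above.
stats = {"101": (29.0, -11.0), "103": (52.0, +12.0),
         "102": (11.8, -28.2), "112": (6.6, -33.4)}

# Implied Tech Center average = examiner rate - delta.
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
# Every statute implies the same 40.0% TC average, so the deltas
# appear to be taken against a single baseline estimate.
```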

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/09/2025 has been entered.

Status of the Claims

Claims 1-2, 5-7 and 10 are currently pending and have been examined. Applicant amended claims 1 and 6.

Response to Arguments/Amendments

The amendment filed October 9, 2025 has been entered. Claims 1-2, 5-7 and 10 are currently pending in the Application. Applicant's arguments regarding the 35 U.S.C. 101 mental process rejection have been fully considered but they are not persuasive. The Examiner has carefully considered Applicant's arguments and respectfully disagrees. Applicant argues that the claims are no longer directed to a mental process because the amended claims recite operations involving hardware elements such as "receiving, via a roadside infra device, driving intent information of surrounding drivers of surrounding vehicles, from the surrounding vehicles; creating a local path of the vehicle, based on the created driving intent information of the driver about the vehicle, and also based on the received driving intent information of the surrounding drivers; and ... 
receiving, via the roadside infra device, cooperative maneuvering messages, from the surrounding vehicles." However, merely reciting generic components such as a "roadside infra device" and generic computer functions such as "receiving" and "creating" does not transform the abstract idea into an improvement in computer functionality or another technology. These elements represent well-understood, routine, and conventional activities in the field of autonomous and cooperative driving. The claim, when considered as a whole, remains directed to creating driving intent information, creating a local path, estimating driver intent, extracting the driving intent information, creating the local path, and creating a cooperative maneuvering message, which constitutes a mental process that could be performed in the human mind or with pen and paper. The additional limitations merely instruct the use of generic computing components to implement the abstract idea and therefore do not integrate the abstract idea into a practical application under Step 2A, Prong Two, of the 2019 Revised Patent Subject Matter Eligibility Guidance.

Moreover, the claim does not recite any additional element or combination of elements that amounts to "significantly more" than the abstract idea itself under Step 2B. The recited components perform their basic functions as conventional data receivers and processors in a predictable manner, and Applicant has not provided evidence that the combination of features yields an unconventional technical solution or an improvement to the functioning of the computer, the control of the autonomous vehicle, or the vehicle itself. Accordingly, the rejection under 35 U.S.C. 101 is maintained.

Applicant's arguments with respect to claims 1-2, 5-7 and 10 regarding the 35 U.S.C. 103 rejection have been fully considered but they are not persuasive. The Examiner has carefully considered Applicant's arguments and respectfully disagrees.
Applicant argues that the cited combination of Bremkens, KATZ, and Zhang fails to teach or suggest "receiving, via a roadside infra device, driving intent information of surrounding drivers of surrounding vehicles, from the surrounding vehicles; creating a local path of the vehicle, based on the created driving intent information of the driver about the vehicle, and also based on the received driving intent information of the surrounding drivers; and ... receiving, via the roadside infra device, cooperative maneuvering messages, from the surrounding vehicles." However, as explained in the rejection below, Zhang teaches that a vehicle's V2X communication system exchanges information with infrastructure such as traffic lights, cameras, and signage (see paragraphs [0014], [0017]), and cooperates with V2V/V2I links for autonomous driving and collision avoidance (see paragraph [0019]). Zhang further describes deriving autonomous driving information such as waypoints, trajectory, and driving mode, framing that information in a message, and sharing the message within the V2X network by way of the V2X sender. These paragraphs describe the claimed "roadside infra device" and "cooperative maneuvering messages" (see paragraphs [0035], [0036]). It would have been obvious to one of ordinary skill in the art to incorporate Zhang's infrastructure-based communication into the cooperative driving framework of Bremkens, as modified by Katz, to improve road safety, driver awareness, and vehicle communication. Accordingly, Applicant's arguments are not persuasive, and the rejection of claims 1, 2, 5-7, and 10 under 35 U.S.C. 103 is maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
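The message flow the Examiner attributes to Zhang above (deriving waypoints, trajectory, and driving mode, framing them in a message, and sharing it over the V2X network) can be pictured with a minimal sketch. All names here (CooperativeManeuverMessage, frame, parse, the field layout) are hypothetical illustrations, not drawn from Zhang, the application, or any V2X standard:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class CooperativeManeuverMessage:
    """Hypothetical framing of the autonomous driving information
    (waypoints, trajectory/intent, driving mode) Zhang is cited for."""
    vehicle_id: str
    driving_mode: str   # e.g. "autonomous" or "manual"
    waypoints: list     # upcoming path points, e.g. [(x, y), ...]
    intent: str         # e.g. "turn_left", "turn_right", "straight"

def frame(msg: CooperativeManeuverMessage) -> str:
    # Serialize for broadcast via a V2X sender (transport not modeled here).
    return json.dumps(asdict(msg))

def parse(raw: str) -> CooperativeManeuverMessage:
    # Receiving side: recover a surrounding driver's intent information.
    return CooperativeManeuverMessage(**json.loads(raw))

raw = frame(CooperativeManeuverMessage("veh-42", "manual", [(0, 0), (5, 2)], "turn_left"))
received = parse(raw)
```

The round trip mirrors the claim language at a structural level only: one vehicle frames its intent, and a receiver extracts that intent from the message.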
Claims 1-2, 5-7 and 10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

In January 2019 (updated October 2019), the USPTO released new examination guidelines setting forth a two-step inquiry for determining whether a claim is directed to non-statutory subject matter. According to the guidelines, a claim is directed to non-statutory subject matter if:

STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

Using the two-step inquiry, it is clear that claims 1 and 6 are directed toward non-statutory subject matter, as shown below.

STEP 1: Do claims 1 and 6 fall within one of the statutory categories? Yes. The claims are directed toward a method including at least one step and an apparatus.

STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea? Yes, the claims are directed to an abstract idea.
With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:

- Mathematical concepts: mathematical relationships, mathematical formulas or equations, mathematical calculations;
- Certain methods of organizing human activity: fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
- Mental processes: concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).

Claim 1. A method of sharing driving intent between autonomous and manually driven vehicles, the method comprising:
acquiring vehicle control information of a vehicle by a driver and driver behavior information of the driver, and a global path of the vehicle on the basis of input by the driver;
creating driving intent information of the driver about the vehicle on the basis of the global path, the vehicle control information, and the driver behavior information;
receiving, via a roadside infra device, driving intent information of surrounding drivers of surrounding vehicles, from the surrounding vehicles;
creating a local path of the vehicle, based on the created driving intent information of the driver about the vehicle, and also based on the received driving intent information of the surrounding drivers; and
outputting the created local path of the vehicle,
wherein the creating the driving intent information comprises: estimating driving intent of the driver, including turning left, turning right, and going straight, based on a direction of the driver's gaze, a position of a head, and a dynamic movement state of the vehicle, through a Long Short-Term Memory (LSTM)-based deep learning model,
wherein the creating the local path comprises:
receiving, via the roadside infra device, cooperative maneuvering messages, from the surrounding vehicles;
extracting the driving intent information of the surrounding drivers of the surrounding vehicles, which influence driving of the vehicle, from the received cooperative maneuvering messages;
creating the local path that avoids a collision or interference with the surrounding vehicles, based on the created driving intent information of the driver about the vehicle, and also based on the received driving intent information of the surrounding drivers;
creating a cooperative maneuvering message based on the created local path; and
transmitting the created cooperative maneuvering message to the surrounding vehicles, and
wherein the outputting the created local path comprises:
outputting location information of the vehicle and the surrounding vehicles and the driving intent information of the vehicle and the surrounding vehicles in a type of a movement path; and
outputting an optimal movement route of the vehicle to the driver to guide the driver to be able to follow the optimal movement path.

The method in claim 1, specifically the limitations emphasized above, is a mental process that can be practicably performed in the human mind and, therefore, an abstract idea. It merely consists of creating driving intent information, creating a local path, estimating driver intent, extracting the driving intent information, creating the local path, and creating a cooperative maneuvering message. This is equivalent to a person mentally viewing the environment, making driving information, making a path, predicting driver intent, deducing driving intent information, making the path, and making a message. Notably, the claim does not positively recite any limitations regarding the execution of the path.

Claim 6.
An apparatus for sharing driving intent between autonomous and manually driven vehicles, the apparatus comprising:
a vehicle control module unit configured to acquire vehicle control information about a vehicle by a driver;
an acquiring unit configured to acquire driver behavior information of the driver;
a control unit configured to: create a global path of the vehicle on the basis of input by the driver; create driving intent information of the driver about the vehicle on the basis of the global path, the vehicle control information, and the driver behavior information; and receive, via a roadside infra device, driving intent information of surrounding drivers of surrounding vehicles, from the surrounding vehicles; create a local path of the vehicle, based on the created driving intent information of the driver about the vehicle, and also based on and driving intent information of surrounding drivers of surrounding vehicles received from surrounding vehicles; and
an output unit configured to output the created local path of the vehicle,
wherein the control unit is further configured to:
estimate driving intent of the driver, including turning left, turning right, and going straight, based on a direction of the driver's gaze, a position of a head, and a dynamic movement state of the vehicle, through a Long Short-Term Memory (LSTM)-based deep learning model;
receive, via the roadside infra device, cooperative maneuvering messages, from the surrounding vehicles;
extract the driving intent information of the surrounding drivers of the surrounding vehicles, which influence driving of the vehicle, from the received cooperative maneuvering messages;
create the local path that avoids a collision or interference with the surrounding vehicles, based on the created driving intent information of the driver about the vehicle, and also based on the received driving intent information of the surrounding drivers;
create a cooperative maneuvering message based on the created local path; and
transmit the created cooperative maneuvering message to the surrounding vehicles, and
wherein the output unit is further configured to:
output location information of the vehicle and the surrounding vehicles and the driving intent information of the vehicle and the surrounding vehicles in a type of a movement path; and
output an optimal movement route of the vehicle to the driver to guide the driver to be able to follow the optimal movement path.

The apparatus in claim 6, specifically the limitations emphasized above, recites a mental process that can be practicably performed in the human mind and, therefore, an abstract idea. It merely consists of creating driving intent information, creating a local path, estimating driver intent, extracting the driving intent information, creating the local path, and creating a cooperative maneuvering message. This is equivalent to a person mentally viewing the environment, making driving information, making a path, predicting driver intent, deducing driving intent information, making the path, and making a message. Notably, the claim does not positively recite any limitations regarding the execution of the path.

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? No, the claims do not recite additional elements that integrate the judicial exception into a practical application.
With regard to STEP 2A (PRONG 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:

- an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
- an additional element applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition;
- an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
- an additional element effects a transformation or reduction of a particular article to a different state or thing; and
- an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:

- an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
- an additional element adds insignificant extra-solution activity to the judicial exception; and
- an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.

In the present case, the additional limitations beyond the above-noted abstract ideas are as follows (where the underlined portions are the "additional limitations" while the bolded portions continue to represent the abstract idea).

Claim 1.
A method of sharing driving intent between autonomous and manually driven vehicles, the method comprising:
acquiring vehicle control information of a vehicle by a driver and driver behavior information of the driver, and a global path of the vehicle on the basis of input by the driver;
creating driving intent information of the driver about the vehicle on the basis of the global path, the vehicle control information, and the driver behavior information;
receiving, via a roadside infra device, driving intent information of surrounding drivers of surrounding vehicles, from the surrounding vehicles;
creating a local path of the vehicle, based on the created driving intent information of the driver about the vehicle, and also based on the received driving intent information of the surrounding drivers; and
outputting the created local path of the vehicle,
wherein the creating the driving intent information comprises: estimating driving intent of the driver, including turning left, turning right, and going straight, based on a direction of the driver's gaze, a position of a head, and a dynamic movement state of the vehicle, through a Long Short-Term Memory (LSTM)-based deep learning model,
wherein the creating the local path comprises:
receiving, via the roadside infra device, cooperative maneuvering messages, from the surrounding vehicles;
extracting the driving intent information of the surrounding drivers of the surrounding vehicles, which influence driving of the vehicle, from the received cooperative maneuvering messages;
creating the local path that avoids a collision or interference with the surrounding vehicles, based on the created driving intent information of the driver about the vehicle, and also based on the received driving intent information of the surrounding drivers;
creating a cooperative maneuvering message based on the created local path; and
transmitting the created cooperative maneuvering message to the surrounding vehicles, and
wherein the outputting the created local path comprises:
outputting location information of the vehicle and the surrounding vehicles and the driving intent information of the vehicle and the surrounding vehicles in a type of a movement path; and
outputting an optimal movement route of the vehicle to the driver to guide the driver to be able to follow the optimal movement path.

Claim 1 does not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application. The step of "acquiring vehicle control information…" is recited at a high level of generality and amounts to mere data gathering, which is a form of insignificant extra-solution activity. Further, the step of "receiving, via a roadside infra device…" is recited at a high level of generality and amounts to mere data gathering, which is a form of insignificant extra-solution activity. Further, the step of "outputting the created local path…" is recited at a high level of generality and amounts to mere post-solution activity, which is a form of extra-solution activity. Further, the step of "receiving, via the roadside infra device, cooperative maneuvering messages…" is recited at a high level of generality and amounts to mere data gathering, which is a form of insignificant extra-solution activity. Further, the step of "transmitting the created cooperative maneuvering message…" is recited at a high level of generality and amounts to mere post-solution activity, which is a form of extra-solution activity. Further, the step of "outputting location information…" is recited at a high level of generality and amounts to mere post-solution activity, which is a form of extra-solution activity. Further, the step of "outputting an optimal movement route…" is recited at a high level of generality and amounts to mere post-solution activity, which is a form of extra-solution activity.
The limitations "…through a Long Short-Term Memory (LSTM)-based deep learning model…" and "…via the roadside infra device…" are claimed generically and are operating in their ordinary capacity such that they do not use the judicial exception in a manner that imposes a meaningful limit on the judicial exception. The Long Short-Term Memory (LSTM)-based deep learning model and roadside infra device merely describe how to generally "apply" the otherwise mental judgments in a generic or general-purpose computing environment. The Long Short-Term Memory (LSTM)-based deep learning model and roadside infra device are recited at a high level of generality and merely automate the acquiring, creating, outputting, estimating, receiving, extracting, and transmitting steps. These limitations can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of a computer. It should be noted that because the courts have made it clear that mere physicality or tangibility of an additional element or elements is not a relevant consideration in the eligibility analysis, the physical nature of these computer components does not affect this analysis. See MPEP 2106.05(I). Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

Claim 6.
An apparatus for sharing driving intent between autonomous and manually driven vehicles, the apparatus comprising:
a vehicle control module unit configured to acquire vehicle control information about a vehicle by a driver;
an acquiring unit configured to acquire driver behavior information of the driver;
a control unit configured to: create a global path of the vehicle on the basis of input by the driver; create driving intent information of the driver about the vehicle on the basis of the global path, the vehicle control information, and the driver behavior information; and receive, via a roadside infra device, driving intent information of surrounding drivers of surrounding vehicles, from the surrounding vehicles; create a local path of the vehicle, based on the created driving intent information of the driver about the vehicle, and also based on and driving intent information of surrounding drivers of surrounding vehicles received from surrounding vehicles; and
an output unit configured to output the created local path of the vehicle,
wherein the control unit is further configured to:
estimate driving intent of the driver, including turning left, turning right, and going straight, based on a direction of the driver's gaze, a position of a head, and a dynamic movement state of the vehicle, through a Long Short-Term Memory (LSTM)-based deep learning model;
receive, via the roadside infra device, cooperative maneuvering messages, from the surrounding vehicles;
extract the driving intent information of the surrounding drivers of the surrounding vehicles, which influence driving of the vehicle, from the received cooperative maneuvering messages;
create the local path that avoids a collision or interference with the surrounding vehicles, based on the created driving intent information of the driver about the vehicle, and also based on the received driving intent information of the surrounding drivers;
create a cooperative maneuvering message based on the created local path; and
transmit the created cooperative maneuvering message to the surrounding vehicles, and
wherein the output unit is further configured to:
output location information of the vehicle and the surrounding vehicles and the driving intent information of the vehicle and the surrounding vehicles in a type of a movement path; and
output an optimal movement route of the vehicle to the driver to guide the driver to be able to follow the optimal movement path.

Claim 6 does not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application. The limitation "receive, via a roadside infra device…" is recited at a high level of generality and amounts to mere data gathering, which is a form of insignificant extra-solution activity. Further, the limitation "receive, via the roadside infra device, cooperative maneuvering messages…" is recited at a high level of generality and amounts to mere data gathering, which is a form of insignificant extra-solution activity. Further, the limitation "transmit the created cooperative maneuvering message…" is recited at a high level of generality and amounts to mere post-solution activity, which is a form of extra-solution activity. Further, the limitation "output location information…" is recited at a high level of generality and amounts to mere post-solution activity, which is a form of extra-solution activity. Further, the limitation "output an optimal movement route…" is recited at a high level of generality and amounts to mere post-solution activity, which is a form of extra-solution activity.
The limitations "…via a roadside infra device…", "…through a Long Short-Term Memory (LSTM)-based deep learning model", "a vehicle control module unit configured to acquire…", "an acquiring unit configured to acquire…", "a control unit configured to…", and "and an output unit configured to output…" are claimed generically and are operating in their ordinary capacity such that they do not use the judicial exception in a manner that imposes a meaningful limit on the judicial exception. The roadside infra device, Long Short-Term Memory (LSTM)-based deep learning model, vehicle control module unit, acquiring unit, control unit, and output unit merely describe how to generally "apply" the otherwise mental judgments in a generic or general-purpose computing environment. The roadside infra device, Long Short-Term Memory (LSTM)-based deep learning model, vehicle control module unit, acquiring unit, control unit, and output unit are recited at a high level of generality and merely automate the acquiring, creating, outputting, estimating, receiving, extracting, and transmitting steps. These limitations can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of a computer. It should be noted that because the courts have made it clear that mere physicality or tangibility of an additional element or elements is not a relevant consideration in the eligibility analysis, the physical nature of these computer components does not affect this analysis. See MPEP 2106.05(I). Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No, the claims do not recite additional elements that amount to significantly more than the judicial exception.
With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, examiners should continue to consider whether an additional element or combination of elements:

- adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
- simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.

Regarding Step 2B of the 2019 PEG, independent claims 1 and 6 do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for reasons similar to those discussed above with respect to determining that the claims do not integrate the abstract idea into a practical application. As discussed above, the additional limitations of "…via a roadside infra device…", "…through a Long Short-Term Memory (LSTM)-based deep learning model…", "a vehicle control module unit configured to acquire…", "an acquiring unit configured to acquire…", "a control unit configured to…", and "and an output unit configured to output…" are merely means to apply the exception and do not amount to "significantly more"; adding the words "apply it" (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer (e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer), as discussed in Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984, is not sufficient to amount to significantly more than the judicial exception.

Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine whether it is more than what is well-understood, routine, conventional activity in the field. The additional limitations of "acquiring vehicle control information…", "receiving, via a roadside infra device…", "outputting the created local path…", "receiving, via a roadside infra device, cooperative maneuvering messages…", "transmitting the created cooperative maneuvering message…", "outputting location information…", "outputting an optimal movement route…", "receive, via a roadside infra device…", "receive, via a roadside infra device, cooperative maneuvering messages…", "transmit the created cooperative maneuvering message…", "output location information…", and "output an optimal movement route…" are well-understood, routine, and conventional activities because the specification does not provide any indication that the acquiring, outputting, receiving, and transmitting steps are performed using anything other than a conventional computer. See also MPEP 2106.05(d)(II) and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), which indicate that mere performance of an action is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). Hence, the claim is not patent eligible.
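For context on the claimed "Long Short-Term Memory (LSTM)-based deep learning model" discussed throughout this rejection: a minimal, untrained sketch of what such a sequence classifier's forward pass could look like. The feature layout (gaze yaw, head yaw, vehicle dynamics) and all names are hypothetical assumptions for illustration; this is not the applicant's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
F, H = 5, 8          # features per timestep, hidden size (arbitrary choices)
CLASSES = ["turn_left", "turn_right", "straight"]

# Random (untrained) parameters: one stacked weight matrix for all four gates.
W = rng.normal(0, 0.1, (4 * H, F + H))      # input/forget/cell/output gates
b = np.zeros(4 * H)
W_out = rng.normal(0, 0.1, (len(CLASSES), H))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_intent(seq):
    """seq: (T, F) array of per-timestep features, e.g. hypothetically
    [gaze_yaw, head_yaw, speed, yaw_rate, steering_angle]."""
    h, c = np.zeros(H), np.zeros(H)
    for x_t in seq:                          # standard LSTM recurrence
        z = W @ np.concatenate([x_t, h]) + b
        i, f, g, o = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # cell state update
        h = sigmoid(o) * np.tanh(c)                    # hidden state
    logits = W_out @ h
    p = np.exp(logits - logits.max())
    return dict(zip(CLASSES, p / p.sum()))   # softmax over the three intents

probs = predict_intent(rng.normal(size=(20, F)))
```

With random weights the output is meaningless; the sketch only shows the structure the claim invokes: a sequence of driver/vehicle observations reduced to a probability over left/right/straight intents.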
CONCLUSION

Thus, since claims 1 and 6 (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, it is clear that claims 1 and 6 are directed towards non-statutory subject matter. Dependent claims 2, 5, 7, and 10 further limit the abstract idea without integrating the abstract idea into a practical application or adding significantly more; for example, the limitation in claim 10 amounts to insignificant extra-solution activity under an analysis similar to that applied to claim 1 above. As such, claims 1-2, 5-7 and 10 are rejected under 35 U.S.C. 101 as being drawn to an abstract idea without significantly more, and thus are ineligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-2, 5-7 and 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bremkens (US 20180244275 A1) in view of KATZ (US 20220203996 A1) and Zhang (US 20220232357 A1).

Regarding Claim 1, Bremkens teaches A method of sharing driving intent between autonomous and manually driven vehicles, the method comprising: acquiring vehicle control information of a vehicle by a driver and driver behavior information of the driver (See at least paragraph [0031], “The computer 110 may operate the respective vehicle 100 in an autonomous, a semi-autonomous mode, or a non-autonomous (or manual) mode”, paragraph [0039], “The HMI 140 may be configured to receive input from a human operator during operation of the vehicle 100. Moreover, an HMI 140 may be configured to display, e.g., via visual and/or audio output, information to the user…In an example non-autonomous mode, the computer 110 may receive a request to change the lane 210, e.g., a turn left signal to indicate an intention of vehicle 100 user to change from a current lane 210a to a target lane 210b” and paragraph [0042], “In another example, the computer 110 may be programmed to determine that a lane change is needed to prevent a collision of the host vehicle 100 with an obstacle on the host vehicle 100 current lane 210. For example, the computer 110 may be programmed to determine that a lane change needed upon determining that a time-to-collision with an obstacle on the current lane 210 of the host vehicle 100 is less than a time to stop the vehicle 100 by braking. 
In this example, the computer 110 could be programed to determine the time-to-collision based on a speed and acceleration of the host vehicle 100, and a distance between the host vehicle 100 and the obstacle.”), and a global path of the vehicle on the basis of input by the driver (See at least paragraph [0041], “The computer 110 may be programmed to receive a destination, e.g., location coordinates, via the HMI 140, and determine a route from a current location of the vehicle 100 to the received destination. The computer 110 may be programmed to operate the vehicle 100 in an autonomous mode from the current location to the received destination based on the determined route.”); creating driving intent information of the driver about the vehicle on the basis of the global path, the vehicle control information, and the driver behavior information (See at least paragraph [0039], “The HMI 140 may be configured to receive input from a human operator during operation of the vehicle 100. Moreover, an HMI 140 may be configured to display, e.g., via visual and/or audio output, information to the user…In an example non-autonomous mode, the computer 110 may receive a request to change the lane 210, e.g., a turn left signal to indicate an intention of vehicle 100 user to change from a current lane 210a to a target lane 210b” and paragraph [0042], “The computer 110 may be programmed to receive a destination, e.g., location coordinates, via the HMI 140, and determine a route from a current location of the vehicle 100 to the received destination. 
The computer 110 may be programmed to operate the vehicle 100 in an autonomous mode from the current location to the received destination based on the determined route…the computer 110 could be programed to determine the time-to-collision based on a speed and acceleration of the host vehicle 100, and a distance between the host vehicle 100 and the obstacle.”); creating a local path of the vehicle, based on the created driving intent information of the driver about the vehicle, and also based on the received driving intent information of the surrounding drivers (See at least paragraph [0074], “In the block 430, the computer 110 determines one or more trajectories 230 based on a location and/or speed of the vehicle 100, a location and/or speed of other vehicles 101 proximate to the vehicle 100, target lane 210, etc.”, paragraph [0075], “Continuing with the process 400 as illustrated in FIG. 4B, next, in a decision block 435, the computer 110 determines if at least one of the possible determined trajectories is not blocked. The computer 110 may determine that a trajectory is blocked upon determining that, without a modification of a second vehicle 101 speed, the vehicles 100, 101 may impact one another, i.e., collide (see FIG. 2B)”, and paragraph [0078], “As discussed with reference to blocks 445, 450, 455, and 460, the computer 110 may be programmed to evaluate one by one each of, e.g., the trajectories 230, 230a, 230b, to determine which of the trajectories 230, 230a, 230b can be unblocked based on a refusal or acceptance reply of the second vehicles 101. 
The computer 110 may be programmed, as discussed above, to select a preferred trajectory, e.g., the trajectory with lowest W, and if the second vehicles 101 associated with that trajectory (i.e., the second vehicles 101 which are instructed to modify their speed) refuses to follow the instruction, the computer 110 may be programmed to select other determined trajectories 230, 230a, 230b.”); and outputting the created local path of the vehicle (See at least paragraph [0080], “Next, in a block 450, the computer 110 sends one or more instructions to the selected second vehicle(s) 101. For example, with reference to FIGS. 2A-2B, the computer 110 may be programed to send an instruction to the second vehicle 101b to reduce the second vehicle 101b speed. The instruction may include a target speed, e.g., 35 km/hr, and an identifier such as vehicle identification number (VIN), license plate number, etc., of the selected second vehicle 101b. Additionally or alternatively, the computer 110 may be programmed to send a target speed pattern such as a pattern shown in FIG. 3 to the selected second vehicle 101b” and paragraph [0089], “In the block 490, the vehicle 100 having been determined to be in the manual mode, the computer 110 prompts the vehicle 100 user to change the lane. For example, the computer 110 may actuate the vehicle 100 HMI 140 to output information, e.g., a blinking green lamp, to indicate that the user may change the lane 210 in the proposed direction such as the direction of received turn signal. The block 490 may be reached when the vehicle 100 is operated in the non-autonomous mode. Thus, the vehicle 100 user may actuate a vehicle 100 actuator 120 to change the lane 210.”). 
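The Bremkens passages cited above describe a simple selection policy: rank candidate trajectories by their weight W, request cooperation from the affected second vehicle when a trajectory is blocked, and fall back to the next candidate on refusal. As a rough illustration only (the type and function names below are hypothetical and not taken from the reference), that policy reduces to a filtered minimum in Python:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Trajectory:
    name: str
    cost_w: float   # Bremkens's weight W; lower is preferred
    blocked: bool   # True if a second vehicle must modify its speed first

def select_trajectory(candidates: List[Trajectory],
                      accepts_request: Callable[[Trajectory], bool]
                      ) -> Optional[Trajectory]:
    """Return the lowest-W trajectory that is either unblocked or whose
    affected second vehicle accepts the cooperation request; None if
    every candidate is blocked and refused."""
    for t in sorted(candidates, key=lambda t: t.cost_w):
        if not t.blocked or accepts_request(t):
            return t
    return None
```

In Bremkens the acceptance check corresponds to the V2V request/reply exchange discussed at blocks 445-460; here it is abstracted as a callback so the fallback loop stands out.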
Bremkens does not explicitly disclose, however, KATZ, in the same field of endeavor, teaches wherein the creating the driving intent information comprises: estimating driving intent of the driver, including turning left, turning right, and going straight, based on a direction of the driver's gaze, a position of a head, and a dynamic movement state of the vehicle, through a Long Short-Term Memory (LSTM)-based deep learning model (See at least paragraph [0070], “the machine learning algorithm may use information related to current or future driving circumstances to determine a required level of control over the vehicle. Current or future driving circumstances, for example, may include one or more road-related parameters or environmental conditions (such as a number of holes in the road and the level of risk the holes introduce), information associated with surrounding vehicles (such as vehicles that are within the driver's sensing capabilities, vehicles that are networked or in other types of communication with one another, vehicles that transmit location information and other data), proximate events taking place on the road (such as a vehicle crossing over a car on the opposite lane), weather conditions, and/or visual hazards”, paragraph [0079], “a deep recurrent long short-term memory (LSTM) network may be used to anticipate a vehicle driver's/operator's behavior, or predict their actions before it happens, based on a collection of sensor data from one or more sensors configured to collect images such as video data, tactile feedback, and location data such as from a global positioning system (GPS)”, paragraph [0084], “Machine learning components can be used to detect the occupancy of a vehicle's cabin, detecting and tracking people and objects, and acts according to their presence, position, pose, identity, age, gender, physical dimensions, state, emotion, health, head pose, gaze, gestures, facial features and expressions”, paragraph [0085], “Machine learning 
components can be used to detect or predict features associated with user's body parts such as hands, user behavior, action, interaction with the environment, interaction with another person, activity, emotional state, emotional responses to: content, event, trigger another person, one or more object, detecting child presence in the car after all adults left the car, monitoring back-seat of a vehicle, identifying aggressive behavior, vandalism, vomiting, physical or mental distress, detecting actions such as smoking, eating and drinking, understanding the intention of the user through their gaze or other body features. In some embodiments, the user's behaviors, actions or attention may be correlated to the user's gaze direction or detected change in gaze direction”, paragraph [0202], “the system may receive second information from, for example, the second sensor and determine whether the individual is authorized based at least in part on the second information. In some embodiments, second information may be associated with the interior of the vehicle. In other embodiments, second information may be associated with the device. Second information may comprise, for example, second sensor data associated with types of sensors disclosed herein, such as a microphone, a light sensor, an infrared sensor, an ultrasonic sensor, a proximity sensor, a reflectivity sensor, a photosensor, an accelerometer, or a pressure sensor. In some embodiments, second information associated with a microphone may include a voice or a sound pattern associated with one or more individuals in the vehicle. In some embodiments, second information may include data associated with the vehicle such as a speed, acceleration, rotation, movement, or operating status of the vehicle…In some embodiments second information associated with the vehicle may include information regarding the presence, behavior, or condition of surrounding vehicles”, and paragraph [0209], “As illustrated in FIG. 
9, an exemplary dynamic or pattern A1-A9 of gaze that is associated with changing a lane is illustrated. A1 represents the location of the driver's gaze when the driver looks ahead, A2 represents the location of the driver's gaze when the driver's gaze changes to the mirror, A3 represents the location of the driver's gaze when the driver is looking back ahead, A4 represents the location of the driver's gaze when the driver is looking to the back mirror, A5 represents the location of the driver's gaze when the driver is looking at the right mirror, A6 represents the location of the driver's gaze when the driver is looking at the car in front of the vehicle, A7 represents the location of the driver's gaze when the driver is again looking ahead, A8 represents the location of the driver's gaze when the driver is looking back at the desired lane, and A9 represents the location of the driver's gaze when the driver is looking back ahead. Together, A1-A9 represents the dynamic or pattern of the driver's change in gaze that is associated with the driver attempting to change lanes on the road…Other dynamics may be associated with the weather, visibility conditions, environmental conditions, or the like. Additionally, or alternatively, dynamics may be associated with the movement or dynamic of movement of other vehicles on the road, the density of vehicles, the speed of other vehicles, the change of speed of other vehicles, the direction or change in direction of other vehicles, or the like.”); extracting the driving intent information of the surrounding drivers of the surrounding vehicles, which influence driving of the vehicle, from the received cooperative maneuvering messages (See at least paragraph [0070], “the machine learning algorithm may use information related to current or future driving circumstances to determine a required level of control over the vehicle. 
Current or future driving circumstances, for example, may include one or more road-related parameters or environmental conditions (such as a number of holes in the road and the level of risk the holes introduce), information associated with surrounding vehicles (such as vehicles that are within the driver's sensing capabilities, vehicles that are networked or in other types of communication with one another, vehicles that transmit location information and other data), proximate events taking place on the road (such as a vehicle crossing over a car on the opposite lane), weather conditions, and/or visual hazards”, paragraph [0202], “the system may receive second information from, for example, the second sensor and determine whether the individual is authorized based at least in part on the second information. In some embodiments, second information may be associated with the interior of the vehicle. In other embodiments, second information may be associated with the device. Second information may comprise, for example, second sensor data associated with types of sensors disclosed herein, such as a microphone, a light sensor, an infrared sensor, an ultrasonic sensor, a proximity sensor, a reflectivity sensor, a photosensor, an accelerometer, or a pressure sensor. In some embodiments, second information associated with a microphone may include a voice or a sound pattern associated with one or more individuals in the vehicle. In some embodiments, second information may include data associated with the vehicle such as a speed, acceleration, rotation, movement, or operating status of the vehicle…In some embodiments second information associated with the vehicle may include information regarding the presence, behavior, or condition of surrounding vehicles”, and paragraph [0209], “As illustrated in FIG. 9, an exemplary dynamic or pattern A1-A9 of gaze that is associated with changing a lane is illustrated. 
A1 represents the location of the driver's gaze when the driver looks ahead, A2 represents the location of the driver's gaze when the driver's gaze changes to the mirror, A3 represents the location of the driver's gaze when the driver is looking back ahead, A4 represents the location of the driver's gaze when the driver is looking to the back mirror, A5 represents the location of the driver's gaze when the driver is looking at the right mirror, A6 represents the location of the driver's gaze when the driver is looking at the car in front of the vehicle, A7 represents the location of the driver's gaze when the driver is again looking ahead, A8 represents the location of the driver's gaze when the driver is looking back at the desired lane, and A9 represents the location of the driver's gaze when the driver is looking back ahead. Together, A1-A9 represents the dynamic or pattern of the driver's change in gaze that is associated with the driver attempting to change lanes on the road…Other dynamics may be associated with the weather, visibility conditions, environmental conditions, or the like. Additionally, or alternatively, dynamics may be associated with the movement or dynamic of movement of other vehicles on the road, the density of vehicles, the speed of other vehicles, the change of speed of other vehicles, the direction or change in direction of other vehicles, or the like.”). Bremkens and KATZ do not explicitly disclose, however, Zhang, in the same field of endeavor, teaches receiving, via a roadside infra device, driving intent information of surrounding drivers of surrounding vehicles, from the surrounding vehicles (See at least paragraph [0014], “FIG. 1 is a block diagram of a vehicle-to-everything (V2X) system 100 wherein vehicle 102 has a vehicle computing system 200 that has the capability to communicate vehicle-to-vehicle (V2V) 104, vehicle-to-pedestrian (V2P) 106, vehicle-to-
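The KATZ-mapped claim element, estimating whether the driver intends to turn left, go straight, or turn right from gaze direction, head position, and vehicle dynamics through an LSTM, can be illustrated with a minimal untrained forward pass. This is a pure-Python sketch under assumed inputs and randomly seeded weights; the feature layout, class labels, and every identifier below are hypothetical and not drawn from the application or the cited references:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyLSTM:
    """Minimal single-layer LSTM forward pass in pure Python (untrained).

    Each time step consumes a feature vector such as
    [gaze_direction, head_yaw, vehicle_yaw_rate] (assumed features).
    """

    def __init__(self, n_in, n_hid, seed=7):
        rng = random.Random(seed)
        # One weight matrix and bias vector per gate: input, forget, cell, output.
        self.W = {g: [[rng.uniform(-0.5, 0.5) for _ in range(n_in + n_hid)]
                      for _ in range(n_hid)] for g in "ifco"}
        self.b = {g: [0.0] * n_hid for g in "ifco"}
        self.n_hid = n_hid

    def step(self, x, h, c):
        z = list(x) + list(h)  # concatenate input with previous hidden state

        def gate(name, act):
            return [act(sum(w * v for w, v in zip(row, z)) + b)
                    for row, b in zip(self.W[name], self.b[name])]

        i, f = gate("i", sigmoid), gate("f", sigmoid)
        g, o = gate("c", math.tanh), gate("o", sigmoid)
        c = [fj * cj + ij * gj for fj, cj, ij, gj in zip(f, c, i, g)]
        h = [oj * math.tanh(cj) for oj, cj in zip(o, c)]
        return h, c

    def run(self, seq):
        h, c = [0.0] * self.n_hid, [0.0] * self.n_hid
        for x in seq:
            h, c = self.step(x, h, c)
        return h  # final hidden state summarizes the sequence

INTENTS = ["turn_left", "go_straight", "turn_right"]

def classify_intent(lstm, head_w, seq):
    """Apply a linear head to the final hidden state and take the argmax."""
    h = lstm.run(seq)
    scores = [sum(w * v for w, v in zip(row, h)) for row in head_w]
    return INTENTS[scores.index(max(scores))]
```

A real system would learn the gate weights and the linear head from labeled driving sequences; with the seeded initialization here the prediction is deterministic but meaningless, so the sketch only demonstrates the data flow from sensed features to a discrete intent.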

Prosecution Timeline

Dec 21, 2022
Application Filed
Sep 06, 2024
Non-Final Rejection — §101, §103
Dec 16, 2024
Response Filed
Apr 01, 2025
Final Rejection — §101, §103
Oct 09, 2025
Request for Continued Examination
Oct 16, 2025
Response after Non-Final Action
Oct 18, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578195
INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD
2y 5m to grant Granted Mar 17, 2026
Patent 12565204
VEHICLE CONTROL DEVICE, VEHICLE CONTROL METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 03, 2026
Patent 12542012
TEST SYSTEM, CONTROL DEVICE, TEST METHOD, AND TEST SYSTEM PROGRAM
2y 5m to grant Granted Feb 03, 2026
Patent 12523490
Systems and Methods for Vehicle Navigation
2y 5m to grant Granted Jan 13, 2026
Patent 12518631
Vehicle Scheduling Method, Electronic Equipment and Storage Medium
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
72%
Grant Probability
80%
With Interview (+7.9%)
2y 12m
Median Time to Grant
High
PTA Risk
Based on 68 resolved cases by this examiner. Grant probability derived from career allow rate.
