Prosecution Insights
Last updated: April 19, 2026
Application No. 18/411,621

SYSTEMS AND METHODS FOR PROVIDING CONTEXTUALLY RELEVANT VEHICLE INSTRUCTIONS

Final Rejection (§101, §102, §103)
Filed: Jan 12, 2024
Examiner: Antoine, Lisa Hope
Art Unit: 3715
Tech Center: 3700 (Mechanical Engineering & Manufacturing)
Assignee: Ford Global Technologies LLC
OA Round: 2 (Final)

Grant Probability: 0% (At Risk)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases: 0 granted / 15 resolved, -70.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Typical Timeline: 3y 2m avg prosecution
Career History: 63 total applications across all art units; 48 currently pending

Statute-Specific Performance

§101: 21.8% (-18.2% vs TC avg)
§103: 49.6% (+9.6% vs TC avg)
§102: 25.6% (-14.4% vs TC avg)
§112: 2.3% (-37.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 15 resolved cases.
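The "vs TC avg" deltas above are internally consistent with a single Tech Center baseline near 40%; a minimal sketch reproducing them (the 40.0% baseline is inferred from the deltas, not stated in the report):

```python
# Sketch: reproducing the "vs TC avg" deltas from the panel above.
# The per-statute rates are taken from the report; the 40.0% Tech Center
# baseline is an inference (every printed delta equals rate - 40.0).

examiner_rates = {"101": 21.8, "103": 49.6, "102": 25.6, "112": 2.3}
TC_AVG = 40.0  # assumed baseline, consistent with all four printed deltas

deltas = {s: round(r - TC_AVG, 1) for s, r in examiner_rates.items()}
for statute, delta in deltas.items():
    print(f"§{statute}: {examiner_rates[statute]:.1f}% ({delta:+.1f}% vs TC avg)")
# -> §101: 21.8% (-18.2% vs TC avg), and so on for §103, §102, §112
```

If the true Tech Center averages differ per statute, only the assumed baseline changes; the arithmetic is the same.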

Office Action

Rejection bases: §101, §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This is a Final Office action in response to communications filed on January 6, 2026. Applicant amended claims 1, 6, 8, 13, and 19-20. Applicant cancelled claims 7 and 18. Claims 1-6, 8-17, and 19-20 remain pending in this application.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-6, 8-17, and 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Does the claimed invention fall inside one of the four statutory categories (process, machine, manufacture, or composition of matter)? Yes, for claims 1-6, 8-17, and 19-20. Claims 1-6 and 8-12 are drawn to a system for outputting vehicle instructions (i.e., a manufacture). Claims 13-17 and 19 are drawn to a method for outputting vehicle instructions (i.e., a process). Claim 20 is drawn to a medium for storing instructions and a processor to execute the vehicle instructions (i.e., a manufacture).

Step 2A - Prong One: Do the claims recite a judicial exception (an abstract idea enumerated in the 2019 PEG, a law of nature, or a natural phenomenon)? Yes, for claims 1-6, 8-17, and 19-20.
Claim 1 recites:

A system to output vehicle instructions, the system comprising: a transceiver configured to receive a user input associated with a vehicle from a user; a memory configured to store (i) a trained machine model, wherein the trained machine model is trained using a training data that comprises a plurality of vehicle component identifiers and a plurality of user command intents and (ii) a three-dimensional (3D) digital vehicle interior and exterior model; and a processor communicatively coupled with the transceiver and the memory, wherein the processor is configured to: obtain the user input from the transceiver; determine a user intent based on the user input; identify a vehicle component associated with the user intent by executing instructions stored in the trained machine model; generate an instructional media content based on the vehicle component and the user intent; and fetch the 3D digital vehicle interior and exterior model from the memory; and display the instructional media content on a display screen by using the 3D digital vehicle interior and exterior model.

These steps amount to a form of mental process and organizing human activity (i.e., an abstract idea) because a human can obtain user input associated with a vehicle, determine user intent based on the user input, identify a vehicle component associated with the user intent, and then determine instructions based on the user intent and the corresponding vehicle component. The Applicant's specification discloses: “There are known instances of users seeking assistance when they need to repair their vehicles or need guidance to operate one or more vehicle components … the users … review the owner manuals or search for “how-to” videos on the Internet to obtain the required assistance.” [0002].
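As a reading aid, the processing chain that the Examiner characterizes as performable mentally can be sketched in Python. Every name below is hypothetical and stands in for a claim element; nothing here comes from the application's actual disclosure:

```python
# Hypothetical sketch of the claim 1 data flow (illustrative only; all
# names are invented, and the "trained model" is a plain dictionary).

from dataclasses import dataclass

@dataclass
class VehicleInstructionSystem:
    trained_model: dict       # stands in for the trained machine model
    vehicle_3d_model: str     # stands in for the 3D interior/exterior model

    def handle(self, user_input: str) -> str:
        intent = self.determine_intent(user_input)             # determine a user intent
        component = self.trained_model.get(intent, "unknown")  # identify a vehicle component
        media = f"instructions: {intent} the {component}"      # generate instructional media content
        return f"[{self.vehicle_3d_model}] {media}"            # display using the 3D model

    def determine_intent(self, user_input: str) -> str:
        # Trivial keyword match standing in for intent determination.
        return "replace" if "replace" in user_input.lower() else "operate"

system = VehicleInstructionSystem(
    trained_model={"replace": "wiper blade", "operate": "sunroof"},
    vehicle_3d_model="3D cabin model",
)
print(system.handle("How do I replace my wipers?"))
# -> [3D cabin model] instructions: replace the wiper blade
```

The point of the rejection is precisely that, absent the generic hardware, each step in this chain is an observation, evaluation, or judgment a person could perform.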
Independent claims 13 and 20 describe nearly identical steps as claim 1 (and therefore recite limitations that fall within the same grouping of abstract ideas), and these claims are therefore determined to recite an abstract idea under the same analysis.

Dependent claims 2-6, 8-12, 14-17, and 19 are directed towards mini-tasks (determining user intent based on analysis type (sentiment, emotion, or age), providing instructions to perform tasks (installing, operating, replacing, repairing, or maintaining a vehicle component), and obtaining a photograph of the vehicle interior and exterior, etc.) for a system to output vehicle instructions. Each claim amounts to a form of collecting, generating, and analyzing information, and therefore falls within the scope of a method of organizing human activity (i.e., an abstract idea). As such, the Examiner concludes that claims 2-6, 8-12, 14-17, and 19 recite an abstract idea.

Step 2A – Prong Two: Do the claims recite additional elements that integrate the exception into a practical application of the exception? No.

In Prong Two of Step 2A, an evaluation is made whether a claim recites any additional element, or combination of additional elements, that integrates the exception into a practical application of that exception. An “additional element” is an element that is recited in the claim in addition to (beyond) the judicial exception (i.e., an element/limitation that sets forth an abstract idea is not an additional element). The phrase “integration into a practical application” is defined as requiring an additional element or a combination of additional elements in the claim to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception.
The requirement to execute the claimed steps/functions using computing devices (independent claims 1, 13, and 20 and dependent claims 2-6, 8-12, 14-17, and 19) is equivalent to adding the words “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer. Similarly, the limitations of transceivers, memory/storage medium, and processors (independent claims 1, 13, and 20 and dependent claims 2-6, 8-12, 14-17, and 19) are recited at a high level of generality and amount to no more than mere instructions to apply the exception using generic computer components. These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application (see MPEP 2106.05(f)).

Use of a computer, processor, memory, or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit); see also Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015). See MPEP 2106.05(f).

Further, the additional limitations beyond the abstract idea identified above serve merely to generally link the use of the judicial exception to a particular technological environment or field of use. Specifically, they serve to limit the application of the abstract idea to a computerized environment (e.g., identifying and displaying) performed by a computing device, processor, memory, etc. This reasoning was demonstrated in Intellectual Ventures I LLC v. Capital One Bank (Fed. Cir. 2015), where the court determined that “an abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment, such as the Internet [or] a computer.” These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application (see MPEP 2106.05(h)).

Dependent claims 2-6, 8-12, 14-17, and 19 fail to include any additional elements. In other words, each of the limitations/elements recited in the respective dependent claims is further part of the abstract idea identified by the Examiner for the respective independent claim (i.e., they are part of the abstract idea recited in each respective claim). The Examiner has therefore determined that the additional elements, or combination of additional elements, do not integrate the abstract idea into a practical application. Accordingly, the claims are directed to an abstract idea.

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? That is, are there any additional elements (features/limitations/steps) recited in the claim beyond the abstract idea? No.

In Step 2B, the claims are analyzed to determine whether any additional element, or combination of additional elements, is sufficient to ensure that the claims amount to significantly more than the judicial exception. This analysis is also termed a search for an “inventive concept.” An “inventive concept” is furnished by an element or combination of elements that is recited in the claim in addition to (beyond) the judicial exception, and is sufficient to ensure that the claim as a whole amounts to significantly more than the judicial exception itself. Alice Corp., 573 U.S. at 217-18, 110 USPQ2d at 1981 (citing Mayo, 566 U.S. at 72-73, 101 USPQ2d at 1966).

As discussed above in “Step 2A – Prong Two”, the identified additional elements in independent claims 1, 13, and 20 and dependent claims 2-6, 8-12, 14-17, and 19 are equivalent to adding the words “apply it” on a generic computer, and/or generally link the use of the judicial exception to a particular technological environment or field of use. Therefore, the claims as a whole do not amount to significantly more than the judicial exception itself.

Viewing the additional limitations in combination also shows that they fail to ensure the claims amount to significantly more than the abstract idea. When considered as an ordered combination, the additional components of the claims add nothing that is not already present when considered separately, and thus simply append the abstract idea with words equivalent to “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer, and/or append the abstract idea with insignificant extra-solution activity associated with the implementation of the judicial exception (e.g., mere data gathering, post-solution activity), and/or simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception.

Dependent claims 2-6, 8-12, 14-17, and 19 fail to include any additional elements. In other words, each of the limitations/elements recited in the respective dependent claims is further part of the abstract idea identified by the Examiner for the respective independent claim (i.e., they are part of the abstract idea recited in each respective claim).

The Examiner has therefore determined that no additional element, or combination of additional elements, is sufficient to ensure the claims amount to significantly more than the abstract idea identified above. Therefore, claims 1-6, 8-17, and 19-20 are not eligible subject matter under 35 USC 101.
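For orientation, the eligibility flow applied above (the Alice/Mayo framework as set out in MPEP 2106) can be sketched as a decision function; the booleans are the Examiner's step-by-step conclusions, not anything a program could actually decide:

```python
# Sketch of the §101 eligibility flow (MPEP 2106). Each flag records a
# human conclusion for the step named in the comment.

def eligible_under_101(statutory_category: bool,       # Step 1
                       recites_exception: bool,        # Step 2A, Prong One
                       practical_application: bool,    # Step 2A, Prong Two
                       significantly_more: bool) -> bool:  # Step 2B
    if not statutory_category:
        return False              # outside the four statutory categories
    if not recites_exception:
        return True               # no judicial exception recited
    if practical_application:
        return True               # exception integrated into a practical application
    return significantly_more     # inventive concept ("significantly more")

# As applied in this action: Step 1 yes, Prong One yes, Prong Two no,
# Step 2B no -> the claims are ineligible.
print(eligible_under_101(True, True, False, False))  # -> False
```

A response therefore only needs to flip one of the two "no" answers (Prong Two or Step 2B) to reach eligibility under this flow.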
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6, 8-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20240198937 A1 (“Benqassmi”) in view of US 20180300882 A1 (“Kim”).

In regards to claim 1, Benqassmi discloses the following limitations with the exception of the underlined limitations.
A system to output vehicle instructions, the system comprising ([0020], “The computer-implemented method may include outputting … instructions for the vehicle”): a transceiver ([0228], “The computing system … may include … a … transceiver”) configured to receive a user input associated with a vehicle from a user ([0096], “The user … may interact with a vehicle … through user input”); a memory configured to store ([0011], “the control circuit may be … configured to store … instructions in an accessible memory on the vehicle”) (i) a trained machine model, wherein the trained machine model is trained using a training data that comprises a plurality of vehicle component identifiers and a plurality of user command intents ([0004], “The machine-learned … model may be configured to identify … user-activity … based on … user-selected settings and … observed conditions associated with the … user-selected settings.”) and (ii) a three-dimensional (3D) digital vehicle interior and exterior model; and a processor communicatively coupled with ([0216]-[0217], “The system … includes … a computing system … and a training computing system … that are communicatively coupled … the computing system … may include … processors”) the transceiver and the memory, wherein the processor is configured to: obtain the user input from the transceiver ([0258]-[0259], “the model … may include … files … loaded into memory and executed by … processors … the communication interfaces … may include … a … transceiver … for communicating data”); determine a user intent based on the user input ([0145], “The … model … may be configured to convert the user-activity … into … vehicle actions … that mimic the patterns of user activity.”); identify a vehicle component associated with the user intent by executing instructions stored in the trained machine model ([0123], “The machine ... model ... may process the ... data ... to generate a user-activity ... 
based on the user interactions with a particular vehicle function ... Using the machine ... model ... may include ...component analysis”); generate an instructional media content based on the vehicle component and the user intent ([0047], “the computing system may request that a user approve the … vehicle action”); and fetch the 3D digital vehicle interior and exterior model from the memory ([0079], “the computing system … may retrieve … the information from the … computer-readable medium” Examiner notes that memory is a type of computer-readable medium.); and display the instructional media content on a display screen by using the 3D digital vehicle interior and exterior model ([0064], “The … device … may include … a … display screen”).

Kim discloses and (ii) a three-dimensional (3D) digital vehicle interior and exterior model ([0029], “the three-dimensional digital model … comprises a digital … car”).

Benqassmi and Kim combined are considered analogous to the claimed invention because they are in the field of machine-learning models for generating vehicle actions and methods for manipulating a three-dimensional digital model.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the applicant’s invention for a system to output vehicle instructions, the system comprising: a transceiver configured to receive a user input associated with a vehicle from a user; a memory configured to store (i) a trained machine model, wherein the trained machine model is trained using a training data that comprises a plurality of vehicle component identifiers and a plurality of user command intents; and a processor communicatively coupled with the transceiver and the memory, wherein the processor is configured to: obtain the user input from the transceiver; determine a user intent based on the user input; identify a vehicle component associated with the user intent by executing instructions stored in the trained machine model; generate an instructional media content based on the vehicle component and the user intent; and fetch the 3D digital vehicle interior and exterior model from the memory; and display the instructional media content on a display screen by using the 3D digital vehicle interior and exterior model, as disclosed by Benqassmi, and (ii) a three-dimensional (3D) digital vehicle interior and exterior model, as disclosed by Kim, to provide a three-dimensional digital car model for a method and system that identifies and manipulates a segment of a three-dimensional digital model based on soft classification.

In regards to claim 2, Benqassmi discloses wherein the user input is a natural language voice command ([0094], “the user … may provide a voice command”).
In regards to claim 3, Benqassmi discloses wherein the memory is further configured to store instructions associated with a natural language processing algorithm ([0220], “The … medium … may … store … instructions … that may be … written in any suitable programming language” Examiner notes that a natural language processing algorithm is implemented using programming languages.), and wherein the processor determines the user intent based on the user input by executing the instructions associated with the natural language processing algorithm ([0257]-[0258], “if the user has provided … authorization … training … may be provided by the computing system … of the user's vehicle … a model … provided to the computing system … may be trained … the … trainer … may include … files … executed by … processors … the … trainer … may include one or more sets of computer-executable instructions”).

In regards to claim 4, Benqassmi discloses wherein the processor is further configured to determine the user intent ([0258], “the model … may include … files … loaded into memory and executed by … processors”) by performing at least one of a sentiment analysis, an emotion analysis and an age analysis of the user input by executing the instructions associated with the natural language processing algorithm ([0103], “vehicle actions may include coordinating … conflict analysis” Examiner notes that conflict analysis is directly related to emotion analysis (e.g., joy and sadness at the same time), and analyzing conflict often involves analyzing the associated emotions.).

In regards to claim 5, Benqassmi discloses wherein the instructional media content is a video content comprising instructions ([0090], “The … system may … provide guidance to the user … via a display device” Examiner notes that a display device can display video content.)
to perform one or more of installing the vehicle component, operating the vehicle component, replacing the vehicle component, repairing the vehicle component, or a vehicle component maintenance ([0069], “The display device may display a variety of content to the user … including information about the vehicle … , prompts for user input, etc.”).

In regards to claim 6, Benqassmi discloses wherein the display screen associated with a vehicle Human-Machine Interface (HMI) or a user device ([0058], “The … platform may …include … the user device”).

In regards to claim 8, Benqassmi discloses the following limitations with the exception of the underlined limitations. wherein the processor is further configured to: determine an optimal view angle associated with the 3D digital vehicle interior and exterior model to display the instructional media content, based on the vehicle component and the user intent ([0090], “The … system may … provide guidance to the user … via a display device” Examiner notes that a display device can display media content.); cause the display screen ([0064], “The … device … may include … a … display screen”) to rotate a default view angle associated with the 3D digital vehicle interior and exterior model being displayed on the display screen to the optimal view angle; and cause the display screen to display the instructional media content responsive to rotating the default view angle to the optimal view angle ([0064], “The … device … may include … a … display screen”).

Kim discloses wherein the processor is further configured to: determine an optimal view angle associated with ([0154], “the digital segmentation system … can comprise … a processor” Examiner notes that results from a digital segmentation system can be used to calculate an optimal view angle.)
the 3D digital vehicle interior and exterior model ([0029], “the three-dimensional digital model … comprises a digital … car”) to rotate a default view angle associated with the 3D digital vehicle interior and exterior model being displayed on the display screen to the optimal view angle ([0154], “the digital segmentation system … can comprise … a processor” Examiner notes that a digital segmentation system can be used to rotate a default view angle.);

Benqassmi and Kim combined are considered analogous to the claimed invention because they are in the field of machine-learning models for generating vehicle actions and methods for manipulating a three-dimensional digital model.

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the applicant’s invention for a system to output vehicle instructions, the system comprising: a transceiver configured to receive a user input associated with a vehicle from a user; a memory configured to store a trained machine model, wherein the trained machine model is trained using a training data that comprises a plurality of vehicle component identifiers and a plurality of user command intents; and a processor communicatively coupled with the transceiver and the memory, wherein the processor is configured to: obtain the user input from the transceiver; determine a user intent based on the user input; identify a vehicle component associated with the user intent by executing instructions stored in the trained machine model; generate an instructional media content based on the vehicle component and the user intent; and output the instructional media content, wherein the processor outputs the instructional media content by displaying the instructional media content on a display screen associated with a vehicle Human-Machine Interface (HMI) or a user device, wherein the memory is further configured to store, fetch the 3D digital vehicle interior and exterior model from the
memory; and display the instructional media content on the display screen by using the 3D digital vehicle interior and exterior model, to display the instructional media content, based on the vehicle component and the user intent; cause the display screen, and cause the display screen to display the instructional media content responsive to rotating the default view angle to the optimal view angle, as disclosed by Benqassmi, a three-dimensional (3D) digital vehicle interior and exterior model, and wherein the processor is further configured to, wherein the processor is further configured to: determine an optimal view angle associated with, the 3D digital vehicle interior and exterior model, to rotate a default view angle associated with the 3D digital vehicle interior and exterior model being displayed on the display screen to the optimal view angle, as disclosed by Kim, to provide a processor and a three-dimensional digital car model for a method and system that identifies and manipulates a segment of a three-dimensional digital model based on soft classification.

In regards to claim 9, Benqassmi discloses wherein the memory is further configured to store ([0011], “the control circuit may be … configured to store … instructions in an accessible memory on the vehicle”) information associated with a user manual of the vehicle, and ([0042], “the computing system may store ... models … associated with a particular vehicle function” Examiner notes that vehicle functions are listed in the users’ manual, which provides instructions for the vehicle's systems, from basic controls to advanced technology.) wherein the processor is further configured to generate ([0258], “the model … may include … files … loaded into memory and executed by … processors”) the instructional media content based on the information associated with the user manual ([0004], “The control circuit may be configured to generate … a user-activity cluster for the vehicle function based on … data.”).
In regards to claim 10, Benqassmi discloses wherein the transceiver is further configured to receive sensor inputs from a vehicle sensor unit, and wherein the sensor inputs comprise inputs associated with at least one of a vehicle operating status, a vehicle speed, a vehicle geolocation, or a weather condition associated with a vehicle surrounding ([0008], “the control circuit may be … configured to receive, via a plurality of sensors …, data indicative of the one or more observed conditions … The observed conditions may include … a temperature when the … user interaction with the vehicle function occurred” Examiner notes that a transceiver communicates with a control circuit and that the observed condition, vehicle surrounding weather conditions, may include temperature.).

In regards to claim 11, Benqassmi discloses wherein the processor is further configured to ([0258], “the model … may include … files … executed by … processors”): obtain the sensor inputs from the transceiver ([0008], “the control circuit may be … configured to receive, via a plurality of sensors …, data” Examiner notes that a transceiver communicates with a control circuit.); and generate the instructional media content ([0172], “The computing system … may output command instructions”) based on the sensor inputs ([0082]-[0083], “The vehicle … may include a positioning system … configured to generate position data … the positioning system … may determine position by using … sensors”).

In regards to claim 12, Benqassmi discloses wherein the system is part of the vehicle ([0073], “The vehicle … may include a … system … that is onboard the vehicle”).
In regards to claim 13, Benqassmi discloses A method to output vehicle instructions, the method comprising ([0020], “The computer-implemented method may include outputting … instructions for the vehicle”): obtaining, by a processor, a user input associated with a vehicle from a user ([0180], “the computing platform … may obtain data indicative of … vehicle actions … generated by … a plurality of different users.” Examiner notes that the computing platform includes processors.); determining, by the processor, a user intent based on the user input ([0145], “The … model … may be configured to convert the user-activity … into … vehicle actions … that mimic the patterns of user activity.” Examiner notes that a model uses processors.); identifying, by the processor, a vehicle component associated with the user intent by executing instructions stored in a trained machine model, wherein the trained machine model is trained using a training data that comprises a plurality of vehicle component identifiers and a plurality of user command intents ([0123], “The machine ... model ... may process the ... data ... to generate a user-activity ... based on the user interactions with a particular vehicle function ... Using the machine ... model ... may include ...component analysis”); generating, by the processor, an instructional media content based on the vehicle component and the user intent ([0123], “The machine ... model ... may process the ... data ... to generate a user-activity ... 
based on the user interactions with a particular vehicle function”); and fetching a three-dimensional (3D) digital vehicle interior and exterior model ([0079], “the computing system … may retrieve … the information from the … computer-readable medium” Examiner notes that memory is a type of computer-readable medium.); and displaying the instructional media content on a display screen by using the 3D digital vehicle interior and exterior model ([0064], “The … device … may include … a … display screen”).

In regards to claim 14, Benqassmi discloses wherein the user input is a natural language voice command ([0094], “the user … may provide a voice command”).

In regards to claim 15, Benqassmi discloses wherein determining the user intent comprises determining the user intent based on the user input by executing instructions associated with ([0257]-[0258], “if the user has provided … authorization … training … may be provided by the computing system … of the user's vehicle … a model … provided to the computing system … may be trained … the … trainer … may include one or more sets of computer-executable instructions”) a natural language processing algorithm ([0220], “The … medium … may … store … instructions … that may be … written in any suitable programming language” Examiner notes that a natural language processing algorithm is implemented using programming languages.).

In regards to claim 16, Benqassmi discloses wherein determining the user intent further comprises determining the user intent by performing at least one of a sentiment analysis, an emotion analysis or an age analysis of the user input by executing the instructions associated with the natural language processing algorithm ([0103], “vehicle actions may include coordinating … conflict analysis” Examiner notes that conflict analysis is directly related to emotion analysis (e.g., joy and sadness at the same time), and that analyzing conflict often involves analyzing the associated emotions.).
In regards to claim 17, Benqassmi discloses wherein outputting the instructional media content comprises displaying the instructional media content ([0090], “The … system may … provide guidance to the user … via a display device” Examiner notes that a display device can display media content.) on a display screen associated with a vehicle Human-Machine Interface (HMI) or a user device ([0058], “The … platform may …include … the user device”).

In regards to claim 19, Benqassmi discloses the following limitations with the exception of the underlined limitations. further comprising: determining an optimal view angle associated with the 3D digital vehicle interior and exterior model to display the instructional media content, based on the vehicle component and the user intent ([0090], “The … system may … provide guidance to the user … via a display device” Examiner notes that a display device can display media content.); causing the display screen ([0064], “The … device … may include … a … display screen”) to rotate a default view angle associated with the 3D digital vehicle interior and exterior model being displayed on the display screen to the optimal view angle; and causing the display screen to display the instructional media content responsive to rotating the default view angle to the optimal view angle ([0064], “The … device … may include … a … display screen”).

Kim discloses further comprising: determining an optimal view angle associated with ([0154], “the digital segmentation system … can comprise … a processor” Examiner notes that results from a digital segmentation system can be used to calculate an optimal view angle.)
the 3D digital vehicle interior and exterior model ([0029], “the three-dimensional digital model … comprises a digital … car”) to rotate a default view angle associated with the 3D digital vehicle interior and exterior model being displayed on the display screen to the optimal view angle ([0154], “the digital segmentation system … can comprise … a processor” Examiner notes that a digital segmentation system can be used to rotate a default view angle.); Benqassmi and Kim combined are considered analogous to the claimed invention because they are in the field of machine-learning models for generating vehicle actions and methods for manipulating a three-dimensional digital model. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the applicant’s invention for a system to output vehicle instructions, the system comprising: a transceiver configured to receive a user input associated with a vehicle from a user; a memory configured to store a trained machine model, wherein the trained machine model is trained using a training data that comprises a plurality of vehicle component identifiers and a plurality of user command intents; and a processor communicatively coupled with the transceiver and the memory, wherein the processor is configured to: obtain the user input from the transceiver; determine a user intent based on the user input; identify a vehicle component associated with the user intent by executing instructions stored in the trained machine model; generate an instructional media content based on the vehicle component and the user intent; and output the instructional media content, wherein the processor outputs the instructional media content by displaying the instructional media content on a display screen associated with a vehicle Human-Machine Interface (HMI) or a user device, wherein the memory is further configured to store, fetch the 3D digital vehicle interior and exterior model from the 
memory; and display the instructional media content on the display screen by using the 3D digital vehicle interior and exterior model, to display the instructional media content, based on the vehicle component and the user intent; causing the display screen, and causing the display screen to display the instructional media content responsive to rotating the default view angle to the optimal view angle, as disclosed by Benqassmi, a three-dimensional (3D) digital vehicle interior and exterior model, and further comprising: determining an optimal view angle associated with the 3D digital vehicle interior and exterior model, to rotate a default view angle associated with the 3D digital vehicle interior and exterior model being displayed on the display screen to the optimal view angle; as disclosed by Kim, to provide a processor and a three-dimensional digital car model for a method and system that identifies and manipulates a segment of a three-dimensional digital model based on soft classification. In regards to claim 20, Benqassmi discloses the following limitations with the exception of the underlined limitations. 
a non-transitory computer-readable storage medium having instructions stored thereupon which ([0020], “The computer-implemented method may include outputting … instructions for the vehicle”), when executed by a processor, cause the processor to: obtain a user input associated with a vehicle from a user ([0180], “the computing platform … may obtain data indicative of … vehicle actions … generated by … a plurality of different users.” Examiner notes that the computing platform includes processors.); determine a user intent based on the user input ([0145], “The … model … may be configured to convert the user-activity … into … vehicle actions … that mimic the patterns of user activity.” Examiner notes that a model uses processors.); identify a vehicle component associated with the user intent by executing instructions stored in a trained machine model, wherein the trained machine model is trained using a training data that comprises a plurality of vehicle component identifiers and a plurality of user command intents ([0123], “The machine ... model ... may process the ... data ... to generate a user-activity ... based on the user interactions with a particular vehicle function ... Using the machine ... model ... may include ...component analysis”); generate an instructional media content based on the vehicle component and the user intent ([0123], “The machine ... model ... may process the ... data ... to generate a user-activity ... based on the user interactions with a particular vehicle function”); fetch a 3D digital vehicle interior and exterior model from a memory of the vehicle ([0079], “the computing system … may retrieve … the information from the … computer-readable medium” Examiner notes that memory is a type of computer-readable medium.); and display the instructional media content on a display screen by using the 3D digital vehicle interior and exterior model ([0064], “The … device … may include … a … display screen”). 
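The claim 20 pipeline mapped above (obtain input, determine intent, identify component, generate media, fetch the 3D model, display) can be summarized in a minimal sketch. All class and variable names are invented for illustration; nothing here is attributed to Benqassmi, Kim, or the application.

```python
# Hypothetical sketch of the claim 20 processing pipeline; every name
# below is invented for illustration and simplified to the extreme.
from dataclasses import dataclass

@dataclass
class VehicleModel3D:
    """Stand-in for the claimed 3D digital vehicle interior/exterior model."""
    name: str

class InstructionProcessor:
    def __init__(self, memory: dict):
        self.memory = memory  # plays the role of the claimed memory

    def determine_intent(self, user_input: str) -> str:
        # A real system would run a trained model; this is a placeholder.
        return "operate" if "how" in user_input.lower() else "repair"

    def identify_component(self, user_input: str) -> str:
        # Placeholder for the claimed trained-machine-model lookup.
        return "liftgate" if "liftgate" in user_input.lower() else "unknown"

    def process(self, user_input: str) -> dict:
        intent = self.determine_intent(user_input)        # determine intent
        component = self.identify_component(user_input)   # identify component
        media = f"How to {intent} the {component}"        # generate media content
        model = self.memory["vehicle_model_3d"]           # fetch 3D model
        return {"media": media, "model": model.name}      # display (stubbed)

proc = InstructionProcessor({"vehicle_model_3d": VehicleModel3D("sedan_full")})
result = proc.process("How do I open the liftgate?")
print(result["media"])  # How to operate the liftgate
```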
Response to Arguments Applicant's arguments filed January 6, 2026 have been fully considered but they are not persuasive. Claims 1-6, 8-17, and 19-20 remain pending in this application. With respect to claim 1, Applicant argues that “amended claim 1 now recites in part, ‘fetch the 3D digital vehicle interior and exterior model from the memory; and display the instructional media content on a display screen by using the 3D digital vehicle interior and exterior model.’ The above-recited steps include actions that cannot be performed within a human mind or even with a pen and a paper” (See AMENDMENT AND RESPONSE TO NON-FINAL OFFICE ACTION, REMARKS, Claim Rejections - 35 U.S.C. § 101, page 7 of 13, paragraph 3); argues that “The claims are not merely ‘do it on a computer’, but rather they tie the method to specific technological components and uses.” (See AMENDMENT AND RESPONSE TO NON-FINAL OFFICE ACTION, REMARKS, Claim Rejections - 35 U.S.C. § 101, page 7 of 13, paragraph 5); and argues that “Applicant submits that the claims provide ‘significantly more’ than an abstract idea.” (See AMENDMENT AND RESPONSE TO NON-FINAL OFFICE ACTION, REMARKS, Claim Rejections - 35 U.S.C. § 101, page 8 of 13, paragraph 2). Examiner acknowledges Applicant’s remarks. 
However, Examiner notes in the 35 USC § 101 rejection, claim 1 recites a system to output vehicle instructions, the system comprising: a transceiver configured to receive a user input associated with a vehicle from a user; a memory configured to store (i) a trained machine model, wherein the trained machine model is trained using a training data that comprises a plurality of vehicle component identifiers and a plurality of user command intents and (ii) a three-dimensional (3D) digital vehicle interior and exterior model; and a processor communicatively coupled with the transceiver and the memory, wherein the processor is configured to: obtain the user input from the transceiver; determine a user intent based on the user input; identify a vehicle component associated with the user intent by executing instructions stored in the trained machine model; generate an instructional media content based on the vehicle component and the user intent; and fetch the 3D digital vehicle interior and exterior model from the memory; and display the instructional media content on a display screen by using the 3D digital vehicle interior and exterior model. With the exception of the added steps (limitations) of claim 1, “and (ii) a three-dimensional (3D) digital vehicle interior and exterior model; fetch the 3D digital vehicle interior and exterior model from the memory; and display the instructional media content on a display screen by using the 3D digital vehicle interior and exterior model”, the remaining steps (limitations) amount to a form of mental process and organizing human activity (i.e., an abstract idea) because a human can obtain user input associated with a vehicle, determine user intent based on the user input, identify a vehicle component associated with the user intent, and then determine instructions based on user intent and the corresponding vehicle component. 
Applicant’s own disclosure states “There are known instances of users seeking assistance when they need to repair their vehicles or need guidance to operate one or more vehicle components … the users … review the owner manuals or search for “how-to” videos on the Internet to obtain the required assistance.” [0002]. Since Examiner views claim 1 in its entirety, the added steps (limitations) do not negate the abstract idea identified in the remaining steps. In prong two of step 2A, an evaluation is made whether a claim recites any additional element, or combination of additional elements, that integrates the exception into a practical application of that exception. An “additional element” is an element that is recited in the claim in addition to (beyond) the judicial exception (i.e., an element/limitation that sets forth an abstract idea is not an additional element). The phrase “integration into a practical application” is defined as requiring an additional element or a combination of additional elements in the claim to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception. The requirement to execute the claimed steps/functions using computing devices (independent claims 1, 13, and 20 and dependent claims 2-6, 8-12, 14-17, and 19) is equivalent to adding the words “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer. Similarly, the limitations of transceivers, memory/storage medium, and processors (independent claims 1, 13, and 20 and dependent claims 2-6, 8-12, 14-17, and 19) are recited at a high level of generality and amount to no more than mere instructions to apply the exception using generic computer components. 
These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application (see MPEP 2106.05(f)). The added steps (limitations) of “and (ii) a three-dimensional (3D) digital vehicle interior and exterior model; fetch the 3D digital vehicle interior and exterior model from the memory; and display the instructional media content on a display screen by using the 3D digital vehicle interior and exterior model” require computing devices to fetch and display and therefore, fail to impose any meaningful limits on practicing the abstract idea. Use of a computer, processor, memory or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Auto, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015) (See MPEP 2106.05(f)). Further, the additional limitations beyond the abstract idea identified above, serve merely to generally link the use of the judicial exception to a particular technological environment or field of use. Specifically, they serve to limit the application of the abstract idea to a computerized environment (e.g., identifying and displaying, etc.) performed by a computing device, processor, and memory, etc. This reasoning was demonstrated in Intellectual Ventures I LLC v. Capital One Bank (Fed. Cir. 
2015), where the court determined “an abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment, such as the Internet [or] a computer.” These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application (see MPEP 2106.05(h)). The Examiner has therefore determined that the additional elements, or combination of additional elements, do not integrate the abstract idea into a practical application. Accordingly, the claims are directed to an abstract idea. In step 2B, the claims are analyzed to determine whether any additional element, or combination of additional elements, are sufficient to ensure that the claims amount to significantly more than the judicial exception. This analysis is also termed a search for an “inventive concept.” An “inventive concept” is furnished by an element or combination of elements that is recited in the claim in addition to (beyond) the judicial exception, and is sufficient to ensure that the claim as a whole amounts to significantly more than the judicial exception itself. Alice Corp., 573 U.S. at 217-18, 110 USPQ2d at 1981 (citing Mayo, 566 U.S. at 72-73, 101 USPQ2d at 1966). The identified additional elements in independent claims 1, 13, and 20 and dependent claims 2-6, 8-12, 14-17, and 19 are equivalent to adding the words “apply it” on a generic computer, and/or generally link the use of the judicial exception to a particular technological environment or field of use. Therefore, the claims as a whole do not amount to significantly more than the judicial exception itself. 
The added steps (limitations) of “and (ii) a three-dimensional (3D) digital vehicle interior and exterior model; fetch the 3D digital vehicle interior and exterior model from the memory; and display the instructional media content on a display screen by using the 3D digital vehicle interior and exterior model” do not furnish an inventive concept, but are merely fetch and display functions that can be performed using memory, a processor, a monitor, etc. (computing devices). Viewing the additional limitations in combination also shows that they fail to ensure the claims amount to significantly more than the abstract idea. When considered as an ordered combination, the additional components of the claims add nothing that is not already present when considered separately, and thus simply append the abstract idea with words equivalent to “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer, and/or append the abstract idea with insignificant extra-solution activity associated with the implementation of the judicial exception (e.g., mere data gathering, post-solution activity), and/or simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception. Dependent claims 2-6, 8-12, 14-17, and 19 fail to include any additional elements. In other words, each of the limitations/elements recited in the respective independent claims is further part of the abstract idea as identified by the Examiner for each respective dependent claim (i.e., they are part of the abstract idea recited in each respective claim). The Examiner has therefore determined that no additional element, or combination of additional claim elements, is sufficient to ensure the claims amount to significantly more than the abstract idea identified above. Therefore, the rejections of claims 1-6, 8-17, and 19-20 under 35 U.S.C. § 101 are maintained. 
With respect to “Claim Rejections - 35 U.S.C. § 102”, Applicant argues “The model recited in claim 1 is fundamentally different, akin to a chatbot or digital assistant that interprets requests (e.g., training data might be pairs of phrases and relevant car components), which is outside the scope of Benqassmi. Therefore, the ‘trained machine model’ element as claimed is not disclosed by Benqassmi.” (See AMENDMENT AND RESPONSE TO NON-FINAL OFFICE ACTION, REMARKS, Claim Rejections - 35 U.S.C. § 102, page 10 of 13, lines 7-11.), “Claim 1 further recites in part, ‘generate an instructional media content based on the vehicle component and the user intent; fetch the 3D digital vehicle interior and exterior model from the memory; and display the instructional media content on a display screen by using the 3D digital vehicle interior and exterior model.’ Benqassmi does not teach generating any media content for a user.” (See AMENDMENT AND RESPONSE TO NON-FINAL OFFICE ACTION, REMARKS, Claim Rejections - 35 U.S.C. § 102, page 10 of 13, paragraph 1), and “Benqassmi does not disclose storing a 3D vehicle interior and exterior model.” (See AMENDMENT AND RESPONSE TO NON-FINAL OFFICE ACTION, REMARKS, Claim Rejections - 35 U.S.C. § 102, page 10 of 13, paragraph 2). Examiner acknowledges Applicant’s remarks. 
Regarding claim 1, Benqassmi discloses a system to output vehicle instructions, the system comprising ([0020], “The computer-implemented method may include outputting … instructions for the vehicle”): a transceiver ([0228], “The computing system … may include … a … transceiver”) configured to receive a user input associated with a vehicle from a user ([0096], “The user … may interact with a vehicle … through user input”); a memory configured to store ([0011], “the control circuit may be … configured to store … instructions in an accessible memory on the vehicle”) (i) a trained machine model, wherein the trained machine model is trained using a training data that comprises a plurality of vehicle component identifiers and a plurality of user command intents ([0004], “The machine-learned … model may be configured to identify … user-activity … based on … user-selected settings and … observed conditions associated with the … user-selected settings.”); and a processor communicatively coupled with ([0216]-[0217], “The system … includes … a computing system … and a training computing system … that are communicatively coupled … the computing system … may include … processors”) the transceiver and the memory, wherein the processor is configured to: obtain the user input from the transceiver ([0258]-[0259], “the model … may include … files … loaded into memory and executed by … processors … the communication interfaces … may include … a … transceiver … for communicating data”); determine a user intent based on the user input ([0145], “The … model … may be configured to convert the user-activity … into … vehicle actions … that mimic the patterns of user activity.”); identify a vehicle component associated with the user intent by executing instructions stored in the trained machine model ([0123], “The machine ... model ... may process the ... data ... to generate a user-activity ... based on the user interactions with a particular vehicle function ... Using the machine ... 
model ... may include ...component analysis”); generate an instructional media content based on the vehicle component and the user intent ([0047], “the computing system may request that a user approve the … vehicle action”); and fetch the 3D digital vehicle interior and exterior model from the memory ([0079], “the computing system … may retrieve … the information from the … computer-readable medium” Examiner notes that memory is a type of computer-readable medium.); and display the instructional media content on a display screen by using the 3D digital vehicle interior and exterior model ([0064], “The … device … may include … a … display screen”) and Kim discloses and (ii) a three-dimensional (3D) digital vehicle interior and exterior model, ([0029], “the three-dimensional digital model … comprises a digital … car”). MPEP § 2111 discusses proper claim interpretation, including giving claims their broadest reasonable interpretation (“BRI”) in light of the specification during examination. Under BRI, the words of a claim must be given their plain meaning unless such meaning is inconsistent with the specification, and it is improper to import claim limitations from the specification into the claim. Applicant’s argument is not persuasive because the BRI is broader than what is argued. Therefore, the rejection of claim 1, as obvious over Benqassmi in view of Kim, is maintained. Consequently, the rejections of independent claims 13 and 20, which are similar to claim 1, and dependent claims 2-6, 8-12, 14-17, and 19 are maintained. With respect to “Claim Rejections - 35 U.S.C. § 103”, Applicant argues that “Kim does not teach automatically determining an “optimal” view angle for user comprehension.” (See AMENDMENT AND RESPONSE TO NON-FINAL OFFICE ACTION, REMARKS, Claim Rejections - 35 U.S.C. 
§ 103, page 12 of 13, paragraph 1), “Simply because one could programmatically manipulate a model in Kim's system doesn't mean Kim taught any method or criteria for doing so.” (See AMENDMENT AND RESPONSE TO NON-FINAL OFFICE ACTION, REMARKS, Claim Rejections - 35 U.S.C. § 103, page 12 of 13, paragraph 1), and “Because neither reference recognizes the problem of selecting a viewpoint for explanatory clarity, the combination fails to render claim 8 obvious for these additional reasons” (See AMENDMENT AND RESPONSE TO NON-FINAL OFFICE ACTION, REMARKS, Claim Rejections - 35 U.S.C. § 103, page 12 of 13, paragraph 1). Examiner acknowledges Applicant’s remarks. Regarding claim 8, Benqassmi discloses to display the instructional media content, based on the vehicle component and the user intent ([0090], “The … system may … provide guidance to the user … via a display device” Examiner notes that a display device can display media content.); cause the display screen ([0064], “The … device … may include … a … display screen”) and cause the display screen to display the instructional media content responsive to rotating the default view angle to the optimal view angle ([0064], “The … device … may include … a … display screen”) and Kim discloses wherein the processor is further configured to: determine an optimal view angle associated with ([0154], “the digital segmentation system … can comprise … a processor” Examiner notes that results from a digital segmentation system can be used to calculate an optimal view angle.) the 3D digital vehicle interior and exterior model ([0029], “the three-dimensional digital model … comprises a digital … car”), to rotate a default view angle associated with the 3D digital vehicle interior and exterior model being displayed on the display screen to the optimal view angle ([0154], “the digital segmentation system … can comprise … a processor” Examiner notes that a digital segmentation system can be used to rotate a default view angle.). 
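The disputed claim 8 limitations (determining an optimal view angle, rotating the default view to it, then displaying the media) can be pictured with a toy geometry sketch. The yaw table and the "face the component" heuristic are assumptions invented purely for illustration, not a disclosure of either cited reference:

```python
# Toy geometry sketch of the claim 8 view-angle limitations; the yaw
# values and the "face the component" heuristic are pure assumptions.
import math

# Hypothetical yaw (degrees) at which each component faces the camera.
COMPONENT_YAW = {"fuel_door": 110.0, "liftgate": 180.0, "hood_latch": 0.0}

def optimal_view_angle(component: str) -> float:
    """Assume the best view simply points the camera at the component."""
    return COMPONENT_YAW[component]

def rotate_view(default_angle: float, target: float, steps: int = 4) -> list:
    """Linearly interpolate yaw from the default view to the target,
    the way a display controller might animate the rotation."""
    return [default_angle + (target - default_angle) * i / steps
            for i in range(1, steps + 1)]

frames = rotate_view(0.0, optimal_view_angle("fuel_door"))
assert math.isclose(frames[-1], 110.0)  # display media once rotation completes
print(frames)  # [27.5, 55.0, 82.5, 110.0]
```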
MPEP § 2111 discusses proper claim interpretation, including giving claims their broadest reasonable interpretation (“BRI”) in light of the specification during examination. Under BRI, the words of a claim must be given their plain meaning unless such meaning is inconsistent with the specification, and it is improper to import claim limitations from the specification into the claim. Applicant’s argument is not persuasive because the BRI is broader than what is argued. Therefore, the rejection of claim 8, as obvious over Benqassmi in view of Kim, is maintained. Consequently, the rejections of claims 7 and 18 are moot in view of their cancellation, and the rejection of dependent claim 19 is maintained. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Lisa Antoine whose telephone number is (571)272-4252. The examiner can normally be reached Monday - Thursday 8:30 am - 6:30 pm ET. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xuan Thai can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. LISA H ANTOINE Examiner Art Unit 3715 /XUAN M THAI/Supervisory Patent Examiner, Art Unit 3715

Prosecution Timeline

Jan 12, 2024
Application Filed
Nov 28, 2025
Non-Final Rejection — §101, §102, §103
Jan 06, 2026
Response Filed
Feb 06, 2026
Final Rejection — §101, §102, §103 (current)

Prosecution Projections

3-4
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 15 resolved cases by this examiner. Grant probability derived from career allow rate.
