DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment and Arguments
Applicant’s arguments, see page 8, filed 12-23-2025, with respect to the rejection of claims 1-20 have been fully considered and are not persuasive. Applicant argues that the claims are directed to “eliminating the cumbersome engineers who need to identify customer complaints by individually review a file such as an Excel file, and gradually improving the accuracy of the analysis result of claim data through updating reference data.” However, the claimed process remains a mental process: a person could read through a piece of text looking for onomatopoeias and/or morphemes, determine from those findings whether the quality of the product is good or bad, and search for onomatopoeias by a particular method. Eliminating a human from the process and improving accuracy do not remove the abstract idea. Therefore, the § 101 rejection of claims 1-20 is maintained.
Applicant’s arguments with respect to claims 1 and 11 have been considered but are not persuasive. Applicant states that “However, nothing in Powell discloses, teaches or suggest that the "words" mentioned in paragraph [0048] of Powell are "onomatopoeias" or "morphemes" of claim 1 (emphasis added). Further, nowhere does Powell disclose, teach or suggest "onomatopoeias" or "morphemes" of claim 1 (emphasis added). Therefore, Powell does not cure Pham's failure of teaching the limitation of claim 1 of "labeling one or more ” However, Powell discloses morphemes (see paragraphs 10, 48, 50, and 64): paragraph 50 describes word segmentation; paragraph 10 shows retrieving related words; paragraph 48 shows the tagging module and the lemma delta module 518; and paragraph 64 shows the description field. These paragraphs (mainly paragraph 48) teach identifying a morphological word (labeling) and transforming the word. With regard to the limitation “wherein the labeling one or more onomatopoeias comprises applying an edit distance algorithm to identify a representative keyword from pre-stored reference data having a shortest edit distance to a sequence of characters of the one or more onomatopoeias,” that limitation is recited in the alternative to the labeling limitation: the labeling limitation recites “one or more onomatopoeias or one or more morphemes,” and the rejection was made on the morphemes alternative. Hence, Applicant’s arguments for claims 1 and 11 are not persuasive, and the rejection is maintained.
Priority
Receipt is acknowledged of the claim of priority to foreign application KR10-2023-0075878, filed 06/14/2023. Copies of the certified papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 U.S.C. 119(a)-(d) and 37 CFR 1.55.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claims 1 and 11, claim 1 recites: A method implemented by one or more hardware processors, the method comprising:
pre-processing input data including text and related to a vehicle;
performing text mining on the pre-processed input data;
labeling one or more onomatopoeias or one or more morphemes extracted from the pre-processed input data according to the text mining; and
determining a quality of the vehicle based on the labeling of the one or more onomatopoeias or the one or more morphemes extracted from the pre-processed input data according to the text mining
wherein the labeling one or more onomatopoeias comprises applying an edit distance algorithm to identify a representative keyword from pre-stored reference data having a shortest edit distance to a sequence of characters of the one or more onomatopoeias.
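For context, the recited edit-distance labeling step corresponds to the classic nearest-keyword lookup using Levenshtein distance. The sketch below is illustrative only and is not drawn from the application or the cited art; the reference keywords are hypothetical examples.

```python
# Illustrative sketch (not the applicant's actual implementation): labeling an
# onomatopoeia with the representative keyword from pre-stored reference data
# that has the shortest edit (Levenshtein) distance to its character sequence.
# The reference keywords below are hypothetical.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def label_onomatopoeia(token: str, reference: list) -> str:
    """Return the reference keyword with the shortest edit distance to token."""
    return min(reference, key=lambda kw: edit_distance(token, kw))

reference_keywords = ["squeak", "rattle", "clunk", "screech"]  # hypothetical
print(label_onomatopoeia("skreech", reference_keywords))  # prints "screech"
```

A misspelled or variant onomatopoeia such as "skreech" is thus mapped to the closest pre-stored representative keyword.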
Claim 11 similarly recites: a memory; and a processor that, when executing computer executable instructions stored in the memory, is configured to perform steps corresponding to those of claim 1.
The limitations of “pre-processing…,” “performing…,” “determining…,” and “generating…,” as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind. For example, a person could read through a piece of text looking for onomatopoeias and/or morphemes, determine from those findings whether the quality of the product is good or bad, and search for onomatopoeias by a particular method.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claims recite only additional elements that are computer components, a “processor” (paragraphs 32 and 37) and a “memory” (paragraphs 32 and 36), recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using the computer components amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.
Claims 2 and 12 additionally recite the method of claim 1, further comprising: after the performing of the text mining, determining the quality of the vehicle based on a default error identified in the pre-processed input data according to the text mining. However, this limitation does not prevent a human from performing the steps mentally as described above: a person could look through the piece of text for a default (known) error code and, based on this known code, determine whether the product’s quality is good or bad. Thus, these claims are directed towards a mental process. As above, no additional limitations are provided that provide a practical application or amount to significantly more than the abstract idea. Therefore, the claims are not patent eligible.
Claims 3 and 13 additionally recite the method of claim 2, wherein the performing of the text mining comprises: extracting the one or more onomatopoeias from the pre-processed input data. However, these limitations encompass a person going through a piece of text, looking for an onomatopoeia, and extracting it or making note of it. Thus, these claims are directed towards a mental process. As above, no additional limitations are provided that provide a practical application or amount to significantly more than the abstract idea. Therefore, the claims are not patent eligible.
Claims 4 and 14 additionally recite the method of claim 3, wherein the performing of the text mining comprises: extracting the one or more morphemes from the pre-processed input data. However, these limitations encompass a person going through a piece of text, looking for a morpheme, and extracting it or making note of it. Thus, these claims are directed towards a mental process. As above, no additional limitations are provided that provide a practical application or amount to significantly more than the abstract idea. Therefore, the claims are not patent eligible.
Claims 5 and 15 additionally recite the method of claim 4, wherein the labeling of the one or more onomatopoeias or the one or more morphemes comprises: identifying one or more similar keywords associated with the one or more onomatopoeias extracted from pre-stored reference data; and labeling the one or more onomatopoeias with a representative keyword corresponding to the one or more similar keywords associated with the one or more onomatopoeias extracted from the pre-stored reference data. However, these limitations encompass a person going through a piece of text searching for onomatopoeias or morphemes and, after finding them, going through another piece of text (the reference data) looking for similar keywords and labeling them. Thus, these claims are directed towards a mental process. As above, no additional limitations are provided that provide a practical application or amount to significantly more than the abstract idea. Therefore, the claims are not patent eligible.
Claims 6 and 16 additionally recite the method of claim 5, wherein the labeling of the one or more onomatopoeias or the one or more morphemes comprises: generating a set of the one or more morphemes extracted from the pre-processed input data by the text mining; identifying one or more similar keywords, related to the one or more morphemes included in the set of the extracted one or more morphemes, in the pre-stored reference data; and labeling the one or more morphemes with a representative keyword corresponding to the one or more similar keywords related to the one or more morphemes included in the set of the extracted one or more morphemes. However, these limitations encompass a person generating a set of the onomatopoeias or morphemes found in the piece of text, identifying similar words in a pre-stored data set, and labeling the onomatopoeias or morphemes accordingly. Thus, the claims are directed towards a mental process. As above, no additional limitations are provided that provide a practical application or amount to significantly more than the abstract idea. Therefore, the claims are not patent eligible.
Claims 7 and 17 additionally recite the method of claim 6, further comprising: after the labeling of the one or more onomatopoeias or the one or more morphemes, updating the pre-stored reference data with the labeled one or more onomatopoeias or the labeled one or more morphemes. However, these limitations encompass a person generating a set of the onomatopoeias or morphemes found in the piece of text, identifying/labeling similar words in a pre-stored data set, and finally updating the pre-stored data using the extracted onomatopoeias or morphemes. Thus, the claims are directed towards a mental process. As above, no additional limitations are provided that provide a practical application or amount to significantly more than the abstract idea. Therefore, the claims are not patent eligible.
Claims 8 and 18 additionally recite the method of claim 2, wherein the determining of the quality of the vehicle based on the default error comprises: determining the quality of the vehicle based on a warning light of the vehicle. However, these limitations encompass a person reading about a product and seeing whether a default error is already indicated by a warning light described in the text. Thus, the claims are directed towards a mental process. As above, no additional limitations are provided that provide a practical application or amount to significantly more than the abstract idea. Therefore, the claims are not patent eligible.
Claims 9 and 19 additionally recite the method of claim 7, wherein the determining of the quality of the vehicle based on the labeling of the one or more onomatopoeias or the one or more morphemes comprises: determining the quality of the vehicle based on the representative keyword labeled on the one or more onomatopoeias associated with noise generated in or by the vehicle. However, these limitations encompass a person looking through the piece of text for an onomatopoeia, e.g., recognizing from the text that a screeching sound occurs when braking. Thus, the claims are directed towards a mental process. As above, no additional limitations are provided that provide a practical application or amount to significantly more than the abstract idea. Therefore, the claims are not patent eligible.
Claims 10 and 20 additionally recite the method of claim 7, wherein the determining of the quality of the vehicle based on the labeling of the one or more onomatopoeias or the one or more morphemes comprises: determining the quality of the vehicle related to an operation mode, performance, and other errors included oil leakage of the vehicle based on the representative keyword labeled on the one or more morphemes. However, these limitations encompass a person looking through the text for indications of an operation mode, performance, and oil leakage, which would be conveyed in the text via onomatopoeias and/or morphemes, and determining the quality of the product by analyzing that text. Thus, the claims are directed towards a mental process. As above, no additional limitations are provided that provide a practical application or amount to significantly more than the abstract idea. Therefore, the claims are not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 8, 11-14, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 11,915,534 B1 (Pham, Phuong) in view of U.S. Patent Application Publication No. 2005/0091031 A1 (Powell, Kevin R.).
Claims 1 and 11
Regarding Claims 1 and 11, Pham teaches
1. A method implemented by one or more hardware processors, the method comprising:
pre-processing input data including text and related to a vehicle;
(Col 5 lines 11-59 "Given that the user may be inexperienced with automotive technology and scan tool functionality, the first communication 12 uttered by the user may not be well formulated. It may, for example, be vague, incomplete, or ambiguous and/or may use unconventional (or incorrect) terminology. Example communications 12 may include “The car won't start” or “What's going on with my brakes?” or may provide even less context such as “What's that clicking sound?” or “What does this light mean?” Even given the capability of parsing these questions into words (e.g., by an automatic speech recognition or speech to text algorithm), these communications 12 lack necessary information that would, in principle, be needed to meaningfully instruct a DAT 120. To this end, after the first communication 12 is received from the user, the diagnostic methodology of FIG. 2 may proceed with processing the communication 12 using a natural language processing (NLP) model 130 to produce NLP model output (step 220). In a readily deployable embodiment of the disclosed subject matter, it is envisioned that a pretrained machine learning model such as a general purpose chatbot or virtual assistant (e.g., OpenAI's ChatGPT) may serve as the NLP model 130. The user's communication 12 may be uploaded or otherwise provided by the mobile device 110 as input to the NLP model 130, and the NLP model 130 may create NLP model output including a sequence of words, typically in the form of a human-readable response to the communication 12 that attempts to answer the user's question or otherwise address the user's need. As can be appreciated, such NLP model output may to varying degrees (depending on the particular NLP model 130) be thorough, detailed, lengthy, and/or simulate natural language, but may not necessarily be reliably accurate. 
This may be understood to be due to the nature of natural language processing, which may be thought of as a tool for mimicking responses to similar questions without truly “knowing” the answer to the question posed. In response to the user's communication 12 of “The car won't start,” for example, the NLP model output may be something like, “Here is a list of possible reasons that a vehicle won't start: the key fob has low battery or is not in range; the vehicle is low on fuel; the vehicle battery is depleted or faulty and may need to be replaced; there is a problem with the alternator; there is a problem with the timing belt; . . . ” etc. As described in more detail below, the system 100 may make use of the NLP model output not for its conventional purpose of providing an answer or solution for the user but, rather, as a means of elaborating upon and finetuning the user's original communication 12 for use in deriving one or more suitable scan tool functions to perform.")
performing text mining on the pre-processed input data;
(Col 5 lines 11-59 "Given that the user may be inexperienced with automotive technology and scan tool functionality, the first communication 12 uttered by the user may not be well formulated. It may, for example, be vague, incomplete, or ambiguous and/or may use unconventional (or incorrect) terminology. Example communications 12 may include “The car won't start” or “What's going on with my brakes?” or may provide even less context such as “What's that clicking sound?” or “What does this light mean?” Even given the capability of parsing these questions into words (e.g., by an automatic speech recognition or speech to text algorithm), these communications 12 lack necessary information that would, in principle, be needed to meaningfully instruct a DAT 120. To this end, after the first communication 12 is received from the user, the diagnostic methodology of FIG. 2 may proceed with processing the communication 12 using a natural language processing (NLP) model 130 to produce NLP model output (step 220). In a readily deployable embodiment of the disclosed subject matter, it is envisioned that a pretrained machine learning model such as a general purpose chatbot or virtual assistant (e.g., OpenAI's ChatGPT) may serve as the NLP model 130. The user's communication 12 may be uploaded or otherwise provided by the mobile device 110 as input to the NLP model 130, and the NLP model 130 may create NLP model output including a sequence of words, typically in the form of a human-readable response to the communication 12 that attempts to answer the user's question or otherwise address the user's need. As can be appreciated, such NLP model output may to varying degrees (depending on the particular NLP model 130) be thorough, detailed, lengthy, and/or simulate natural language, but may not necessarily be reliably accurate. 
This may be understood to be due to the nature of natural language processing, which may be thought of as a tool for mimicking responses to similar questions without truly “knowing” the answer to the question posed. In response to the user's communication 12 of “The car won't start,” for example, the NLP model output may be something like, “Here is a list of possible reasons that a vehicle won't start: the key fob has low battery or is not in range; the vehicle is low on fuel; the vehicle battery is depleted or faulty and may need to be replaced; there is a problem with the alternator; there is a problem with the timing belt; . . . ” etc. As described in more detail below, the system 100 may make use of the NLP model output not for its conventional purpose of providing an answer or solution for the user but, rather, as a means of elaborating upon and finetuning the user's original communication 12 for use in deriving one or more suitable scan tool functions to perform.")
determining a quality of the vehicle based on the labeling of the one or more onomatopoeias or the one or more morphemes extracted from the pre-processed input data according to the text mining.
(col 8 lines 59-67 and col 9 lines 0-67 and col 10 lines 0-3 "Once the DAT 120 has been instructed in step 240 of FIG. 2 and one or more scan tool functions have been initiated by the DAT 120 (see FIG. 1), the results of the function(s) may be used to diagnose the vehicle 10. In this regard, it may be appreciated that the DAT 120 or the mobile device 110 may, in general, upload diagnostic data collected from the vehicle 10 to the one or more servers 140, which may derive a diagnostic condition of the vehicle 10 from the uploaded diagnostic data by comparing the uploaded diagnostic data with data (e.g., historical data) stored in one or more diagnostic databases 150. The diagnostic condition of the vehicle 10 may include, for example, information about the root cause of a problem that the vehicle 10 is experiencing and/or an indication of one or more repair solutions or replacement parts for addressing the problem. Exemplary diagnostic methods, including the use of such diagnostic data to arrive at a most likely root cause and repair solution as well as vehicle-specific replacement parts, are described in the following U.S. patent documents, each of which is owned by Innova Electronics Corporation of Irvine, California: U.S. Pat. No. 6,807,469, entitled AUTO DIAGNOSTIC METHOD AND DEVICE, U.S. Pat. No. 6,925,368, entitled AUTO DIAGNOSTIC METHOD AND DEVICE, U.S. Pat. No. 7,620,484, entitled AUTOMOTIVE MOBILE DIAGNOSTICS, U.S. Pat. No. 8,068,951, entitled VEHICLE DIAGNOSTIC SYSTEM, U.S. Pat. No. 8,019,503, entitled AUTOMOTIVE DIAGNOSTIC AND REMEDIAL PROCESS, U.S. Pat. No. 8,370,018, entitled AUTOMOTIVE DIAGNOSTIC PROCESS, U.S. Pat. No. 8,909,416, entitled HANDHELD SCAN TOOL WITH FIXED SOLUTION CAPABILITY, U.S. Pat. No. 9,014,908, entitled MULTI-STAGE DIAGNOSTIC SYSTEM AND METHOD, U.S. Pat. No. 9,142,066, entitled MULTI-STAGE DIAGNOSTIC SYSTEM AND METHOD, U.S. Pat. No. 9,026,400, entitled DIAGNOSTIC PROCESS FOR HOME ELECTRONIC DEVICES, U.S. Pat. No. 
9,177,428, entitled PREDICTIVE DIAGNOSTIC METHOD, U.S. Pat. No. 9,646,432, entitled HAND HELD DATA RETRIEVAL DEVICE WITH FIXED SOLUTION CAPABILITY, U.S. Pat. No. 9,824,507, entitled MOBILE DEVICE BASED VEHICLE DIAGNOSTIC SYSTEM, U.S. Pat. No. 10,643,403, entitled PREDICTIVE DIAGNOSTIC METHOD AND SYSTEM, U.S. Pat. No. 11,068,560, entitled METHOD OF PROCESSING VEHICLE DIAGNOSTIC DATA, U.S. Pat. No. 11,270,529, entitled SYSTEM AND METHOD FOR PROACTIVE VEHICLE DIAGNOSIS AND OPERATIONAL ALERT, and U.S. Pat. No. 11,158,141, entitled SYSTEM AND METHOD FOR PROACTIVE VEHICLE DIAGNOSIS AND OPERATIONAL ALERT, the entire contents of each of which is expressly incorporated by reference herein. The operational flow of FIG. 2 may thus continue with receiving diagnostic data from the DAT 120 (step 250), the diagnostic data having been retrieved from the vehicle 10 by the DAT 120 in accordance with the selected function(s) of step 240, and determining a diagnostic condition of the vehicle 10 (step 260). Advantageously, the system 100 (e.g., the app) may determine the diagnostic condition based on the diagnostic data and, in some cases, further based on the one or more keywords that were extracted from the NLP model output in step 230. In this way, the diagnostic analysis performed by the system 100 may take into account both the expanded brainstorm of symptoms and potential diagnostic conditions derived by the NLP model 130 from the user's original communication 12 and the actual diagnostic data of the vehicle 10 collected by the DAT 120 for a more accurate and relevant diagnosis. Environmental and other context data as described above, such as contemporaneously captured sensor data of the mobile device 110, may also be taken into account to further improve the results. 
For example, diagnostic data might include airbag sensor data indicating that the airbag was inflated, which may be corroborated by accelerometer data of the mobile device 110 indicating that the vehicle 10 experienced a sudden change in velocity suggestive of a collision. By corroborating the diagnostic data with environmental data in this way, a situation may be avoided where the family members of a passenger are notified prematurely of a crash when really there was simply a faulty airbag sensor."
Col 10 lines 5-37 "In lieu of, or in addition to the above-described techniques for deriving a diagnostic solution or diagnostic condition based on use of a vehicle specific historical database, artificial intelligence (AI) may be used to recognize associations between certain vehicle data (e.g., live data, patterns of live data and static data), NLP model output, and/or other vehicle operational and environmental conditions (e.g., sensed vibrations, sounds, vapors, temperatures, etc.) with the diagnostic solution or condition, thus enabling the determination of a diagnostic solution or condition without the need to reference the historical database. For example, a machine learning model may be trained using historical data such as the historical data stored in the diagnostic database(s) 150, such that the machine learning model becomes increasingly capable of associating patterns of input data with diagnostic conditions. Subsequently, for a given set of new input data, the system 100 may derive the diagnostic condition(s) of the vehicle 10 using the machine learning model without there being any need to further consult the historical data. The AI may further be operative to identify external resources suitable to further inform the user with respect to the diagnostic solution and establish communication with such resources via an appropriate and available communication pathway, such as a cellphone enabled pathway or a V2X pathway to V2X service providers and associated resources. That functionality may proceed autonomously in response to receipt of the vehicle data, in response to an evaluation of the urgency of the vehicle diagnostic solution, or on-demand, in response to a user input. Another exemplary application of the AI enabled diagnostic process would be to support and facilitate the evolution of advanced driver-assistance systems (ADAS), in relation to installation, testing and/or monitoring of those systems during driving conditions.")
Pham does not explicitly teach labeling one or more onomatopoeias or one or more morphemes extracted from the pre-processed input data according to the text mining; and
wherein the labeling one or more onomatopoeias comprises applying an edit distance algorithm to identify a representative keyword from pre-stored reference data having a shortest edit distance to a sequence of characters of the one or more onomatopoeias.
However, Powell teaches labeling one or more onomatopoeias or one or more morphemes extracted from the pre-processed input data according to the text mining; and
(Paragraph 48 "Word list or vocabulary 510 is received by tagging module 512, which processes or tags words in accordance with the present invention to construct lexicon 308, 540. Generally, tags indicate certain syntactic and/or semantic information about words that is useful when accessed by applications or systems. Tagging module 512 comprises sub-modules that can include any or all of the following: spelling and dynamic segmentation module 514, part of speech module 516, lemma delta module 518, description module 520, and static segmentation mask module 522. Each tagging sub-module adds bits of information or tags for each entry in lexicon 308, 540."
The labelling would be the tagging for the morphemes)
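To illustrate the mapping of Powell's tagging module to the claimed "labeling," the following minimal sketch shows lexicon-style tagging, in which each segmented word receives tags such as a coarse category and a lemma. The vocabulary entries and tag values are hypothetical examples, not Powell's actual lexicon.

```python
# Minimal sketch of lexicon-style tagging in the manner described for Powell's
# tagging module (paragraph 48): each word in the vocabulary receives tags
# (here, a coarse category and a lemma). Entries are hypothetical examples.

LEXICON = {
    "clicking": {"category": "onomatopoeia", "lemma": "click"},
    "brakes": {"category": "noun", "lemma": "brake"},
}

def tag_words(words):
    """Attach lexicon tags to each word; unknown words get an empty tag dict."""
    return [(w, LEXICON.get(w.lower(), {})) for w in words]

# Segmented words from an input text are tagged (labeled) against the lexicon.
tagged = tag_words(["Clicking", "brakes", "loudly"])
print(tagged)
```

Under this reading, tagging a segmented word with its lemma and category is the "labeling" of a morphological word.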
wherein the labeling one or more onomatopoeias comprises applying an edit distance algorithm to identify a representative keyword from pre-stored reference data having a shortest edit distance to a sequence of characters of the one or more onomatopoeias.
(The limitation above is recited in the alternative. Because the labeling limitation recites “one or more onomatopoeias or one or more morphemes,” the morphemes alternative was selected based on the “or.”)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pham to incorporate the teachings of Powell to provide “labeling one or more onomatopoeias or one or more morphemes extracted from the pre-processed input data according to the text mining.” Doing so would allow information to be quickly retrieved, as recognized by Powell (paragraph 53).
Claim 11
Claim 11 contains limitations similar to those found in claim 1. See claim 1 for the remaining limitations.
Regarding claim 11, Pham further teaches
11. An apparatus for determining a quality of a vehicle using text mining, the apparatus comprising:
a memory; and
(col 16 lines 27-49 "The functionality described above in relation to the mobile device 110, DAT 120, and server(s) 150 shown in FIG. 1 and the operational flows and flow charts described in relation to FIGS. 2-9 and throughout the disclosure may be wholly or partly embodied in one or more computers including a processor (e.g. a CPU), a system memory (e.g. RAM), and a hard drive or other secondary storage device. The processor may execute one or more computer programs, which may be tangibly embodied along with an operating system in a computer-readable medium, e.g., the secondary storage device. The operating system and computer programs may be loaded from the secondary storage device into the system memory to be executed by the processor. The computer may further include a network interface for network communication between the computer and external devices (e.g., over the Internet), such as between the mobile device 110 and the DAT 120 or between the mobile device 110 or DAT 120 and the server(s) 140 or third-party computer systems associated with the NLP model 130. To the extent that any of the described functionality may be performed by the server(s) 140, the server(s) 140 may comprise multiple physical servers and other computers that communicate with each other to perform the described functionality.")
a processor that, when executing computer executable instructions stored in the memory, is configured to:
(col 16 lines 27-49 "The functionality described above in relation to the mobile device 110, DAT 120, and server(s) 150 shown in FIG. 1 and the operational flows and flow charts described in relation to FIGS. 2-9 and throughout the disclosure may be wholly or partly embodied in one or more computers including a processor (e.g. a CPU), a system memory (e.g. RAM), and a hard drive or other secondary storage device. The processor may execute one or more computer programs, which may be tangibly embodied along with an operating system in a computer-readable medium, e.g., the secondary storage device. The operating system and computer programs may be loaded from the secondary storage device into the system memory to be executed by the processor. The computer may further include a network interface for network communication between the computer and external devices (e.g., over the Internet), such as between the mobile device 110 and the DAT 120 or between the mobile device 110 or DAT 120 and the server(s) 140 or third-party computer systems associated with the NLP model 130. To the extent that any of the described functionality may be performed by the server(s) 140, the server(s) 140 may comprise multiple physical servers and other computers that communicate with each other to perform the described functionality.")
a display configured to display information related to the determined quality of the vehicle.
(Col 12 lines 31-40 "In this way, the system 100 may determine the diagnostic condition of the vehicle 10 in accordance with step 260 of FIG. 2 and present the diagnostic condition to the user, verbally and/or on screen on the mobile device 110. Depending on the available functions of the DAT 120, the user may continue to drill down and perform additional diagnostic functions as may be recommended by the system 100 and selected or confirmed by the user. As shown, for example, the flow chart of FIG. 4B continues with the presentation to the user of options for further processing (which may be retrieved from the server(s)/database(s) 140, 150 as described above): “Here are some options to solve in case of replacing the oxygen sensor: Oxygen Sensor Explanation, Active Test: Oxygen Sensor1, Oxygen Sensor Part Number.” The user may, for example, choose to conduct a special function test such as “Active Test: Oxygen Sensor1,” which would result in the system 100 retrieving more specific diagnostic data from the vehicle 10 using the DAT 120, which would then be used to retrieve a more specific diagnostic condition of the vehicle 10 for presentation to the user. The more specific diagnostic data may include, for example, live data, which may be found to support or refute a presumed diagnostic condition (such as a faulty oxygen sensor). In this way, an iterative application of the disclosed subject matter may enable a conversational guided diagnostic procedure, with the user able to drill down as desired to delve further into potential root causes through a back-and-forth dialogue with the system 100. In the example of FIG. 4B, the user instead chooses to exit the conversation by saying “Got it, no further help needed.”")
Claims 2 and 12
Regarding Claims 2 and 12, Pham in view of Powell teaches the claimed limitations; in particular, Pham teaches:
2. The method of claim 1, further comprising:
after the performing of the text mining, determining the quality of the vehicle based on a default error identified in the pre-processed input data according to the text mining.
(Col 11 lines 7-50 "All or portions of the above-described operational flow of FIG. 2 may be performed iteratively to enable a back-and-forth conversation between the user and the system 100. An extended example showing one such iterative application of the disclosed subject matter is shown in the flow chart of FIGS. 5A and 5B. The example may begin with the user detecting that there is a light illuminated on the dashboard and asking what is happening with the dashboard light. From the user's perspective, the system 100 (e.g., the app) may reply verbally to the user, “Your Check Engine light is ON, which means a potential problem with the engine or emissions system,” followed by “Here are some options to detect where the problem is: Read Generic Diagnostic Trouble Code (DTC), Read Engine Control Module (ECM) Diagnostic Trouble Code (DTC), Scan All Systems,” at which point the system 100 may wait for further input from the user. Behind the scenes, the processing steps may involve one or more iterations of the operational flow described in relation to FIG. 2 or portions thereof. To process the initial question about the dashboard light, the mobile device 110 with the installed app may initially use OBD data to determine which dashboard light is currently lit by requesting a warning light status using the DAT 120. For example, the initial communication 12 “What happen with the dashboard light?” may be processed using the NLP model 130 to derive keywords, and the keywords may be mapped to the scan tool function of requesting the warning light status. Alternatively, in some implementations the app may first check whether the initial question itself has suitable keywords (e.g., “dashboard light”) for selecting a suitable scan tool function, in which case the NLP model 130 may not be needed for this initial step. 
Upon receiving the OBD data from the vehicle 10 indicating that the check engine light is ON, the app may perform further processing to provide additional information to the user and come up with a list of relevant scan tool functions as shown in FIG. 4A. Advantageously, the system 100 may derive this explanatory information and proposed scan tool functions by supplementing the user's initial communication 12 with the actual vehicle data indicating that the check engine light is on, processing the resulting query using the NLP model 130, deriving keyword(s) from the NLP model output, and communicating with one or more server(s) 140 to retrieve the information and proposed functions from one or more diagnostic databases 150.")
Claims 3 and 13
Regarding Claims 3 and 13, Pham in view of Powell teaches the claimed limitations; in particular, Pham teaches:
3. The method of claim 2, wherein the performing of the text mining comprises:
extracting the one or more onomatopoeias from the pre-processed input data.
(Col 5 lines 11-59 "Given that the user may be inexperienced with automotive technology and scan tool functionality, the first communication 12 uttered by the user may not be well formulated. It may, for example, be vague, incomplete, or ambiguous and/or may use unconventional (or incorrect) terminology. Example communications 12 may include “The car won't start” or “What's going on with my brakes?” or may provide even less context such as “What's that clicking sound?” or “What does this light mean?” Even given the capability of parsing these questions into words (e.g., by an automatic speech recognition or speech to text algorithm), these communications 12 lack necessary information that would, in principle, be needed to meaningfully instruct a DAT 120. To this end, after the first communication 12 is received from the user, the diagnostic methodology of FIG. 2 may proceed with processing the communication 12 using a natural language processing (NLP) model 130 to produce NLP model output (step 220). In a readily deployable embodiment of the disclosed subject matter, it is envisioned that a pretrained machine learning model such as a general purpose chatbot or virtual assistant (e.g., OpenAI's ChatGPT) may serve as the NLP model 130. The user's communication 12 may be uploaded or otherwise provided by the mobile device 110 as input to the NLP model 130, and the NLP model 130 may create NLP model output including a sequence of words, typically in the form of a human-readable response to the communication 12 that attempts to answer the user's question or otherwise address the user's need. As can be appreciated, such NLP model output may to varying degrees (depending on the particular NLP model 130) be thorough, detailed, lengthy, and/or simulate natural language, but may not necessarily be reliably accurate. 
This may be understood to be due to the nature of natural language processing, which may be thought of as a tool for mimicking responses to similar questions without truly “knowing” the answer to the question posed. In response to the user's communication 12 of “The car won't start,” for example, the NLP model output may be something like, “Here is a list of possible reasons that a vehicle won't start: the key fob has low battery or is not in range; the vehicle is low on fuel; the vehicle battery is depleted or faulty and may need to be replaced; there is a problem with the alternator; there is a problem with the timing belt; . . . ” etc. As described in more detail below, the system 100 may make use of the NLP model output not for its conventional purpose of providing an answer or solution for the user but, rather, as a means of elaborating upon and finetuning the user's original communication 12 for use in deriving one or more suitable scan tool functions to perform.")
Claims 4 and 14
Regarding Claims 4 and 14, Pham in view of Powell teaches the claimed limitations; in particular, Pham teaches:
4. The method of claim 3, wherein the performing of the text mining comprises:
extracting the one or more morphemes from the pre-processed input data.
(Col 5 lines 60-67 and col 6 0-41 "In some cases, the step of processing the user's communication 12 using the NLP model 130 may include supplementing the communication 12 with a context related to the vehicle 10. Given that the NLP model 130 may be a general-purpose model that is not designed specifically for dialogue about vehicles, it may be especially advantageous for the system 100 to provide a context for questions that would otherwise not be recognizable as having to do with a vehicle, such as “What's that clicking sound?” or “What does this light mean?” In a relatively simple embodiment, for example, the mobile device 110 (e.g., under the control of the app) may simply append a phrase such as “in a vehicle” or “for vehicle diagnostics” to the user's communication 12. Thus, “What's that clicking sound?” or “What does this light mean?” may become “What's that clicking sound in a vehicle?” or “What does this light mean in a vehicle?” A context like this, which may have universal applicability for any envisioned communication 12 of the user, may be redundant in some cases (e.g., “The car won't start” becomes “The car won't start in a vehicle”) but nondetrimental, while providing important information to the NLP model 130 in many cases where the context would otherwise not be clear. More specific contexts are also contemplated, such as where the context includes identifying information of the particular vehicle 10 in question. Identifying information of the vehicle 10 may include, for example, year/make/model/engine/trim information, which may be stored in a user profile associated with the app (e.g., locally or on a server), derived from a VIN retrieved from the vehicle 10 by the DAT 120, or, in some cases, deduced by the system 100 from other diagnostic data of the vehicle 10. 
By providing identifying information of the particular vehicle 10 to the NLP model 130 together with the user's communication 12, it may be appreciated that the resulting NLP model output may be more relevant. Other types of context that may be used to supplement the user's communication 12 are also contemplated, such as a previous communication 12 of the user related to the vehicle 10 or a state (e.g., ON/OFF, vehicle speed, gear) of the vehicle 10, which may be derived from diagnostic data retrieved from the vehicle 10 by the DAT 120, for example. In some cases, a context may include environmental data derived from one or more sensors (e.g., a vapor sensor) that may be present in or near the vehicle 10 including sensors or other components of the mobile device 110. For example, an accelerometer built into the mobile device 110 may indicate that acceleration is not smooth or that the vehicle, potentially pointing to a fuel line function or that the vehicle 10 is experiencing a bumpy ride, suggesting a tire pressure issue."
col 5 lines 11-59 "Given that the user may be inexperienced with automotive technology and scan tool functionality, the first communication 12 uttered by the user may not be well formulated. It may, for example, be vague, incomplete, or ambiguous and/or may use unconventional (or incorrect) terminology. Example communications 12 may include “The car won't start” or “What's going on with my brakes?” or may provide even less context such as “What's that clicking sound?” or “What does this light mean?” Even given the capability of parsing these questions into words (e.g., by an automatic speech recognition or speech to text algorithm), these communications 12 lack necessary information that would, in principle, be needed to meaningfully instruct a DAT 120. To this end, after the first communication 12 is received from the user, the diagnostic methodology of FIG. 2 may proceed with processing the communication 12 using a natural language processing (NLP) model 130 to produce NLP model output (step 220). In a readily deployable embodiment of the disclosed subject matter, it is envisioned that a pretrained machine learning model such as a general purpose chatbot or virtual assistant (e.g., OpenAI's ChatGPT) may serve as the NLP model 130. The user's communication 12 may be uploaded or otherwise provided by the mobile device 110 as input to the NLP model 130, and the NLP model 130 may create NLP model output including a sequence of words, typically in the form of a human-readable response to the communication 12 that attempts to answer the user's question or otherwise address the user's need. As can be appreciated, such NLP model output may to varying degrees (depending on the particular NLP model 130) be thorough, detailed, lengthy, and/or simulate natural language, but may not necessarily be reliably accurate. 
This may be understood to be due to the nature of natural language processing, which may be thought of as a tool for mimicking responses to similar questions without truly “knowing” the answer to the question posed. In response to the user's communication 12 of “The car won't start,” for example, the NLP model output may be something like, “Here is a list of possible reasons that a vehicle won't start: the key fob has low battery or is not in range; the vehicle is low on fuel; the vehicle battery is depleted or faulty and may need to be replaced; there is a problem with the alternator; there is a problem with the timing belt; . . . ” etc. As described in more detail below, the system 100 may make use of the NLP model output not for its conventional purpose of providing an answer or solution for the user but, rather, as a means of elaborating upon and finetuning the user's original communication 12 for use in deriving one or more suitable scan tool functions to perform.")
Claims 8 and 18
Regarding Claims 8 and 18, Pham in view of Powell teaches the claimed limitations; in particular, Pham teaches:
8. The method of claim 2, wherein the determining of the quality of the vehicle based on the default error comprises:
determining the quality of the vehicle based on a warning light of the vehicle.
(Col 11 lines 7-50 "All or portions of the above-described operational flow of FIG. 2 may be performed iteratively to enable a back-and-forth conversation between the user and the system 100. An extended example showing one such iterative application of the disclosed subject matter is shown in the flow chart of FIGS. 5A and 5B. The example may begin with the user detecting that there is a light illuminated on the dashboard and asking what is happening with the dashboard light. From the user's perspective, the system 100 (e.g., the app) may reply verbally to the user, “Your Check Engine light is ON, which means a potential problem with the engine or emissions system,” followed by “Here are some options to detect where the problem is: Read Generic Diagnostic Trouble Code (DTC), Read Engine Control Module (ECM) Diagnostic Trouble Code (DTC), Scan All Systems,” at which point the system 100 may wait for further input from the user. Behind the scenes, the processing steps may involve one or more iterations of the operational flow described in relation to FIG. 2 or portions thereof. To process the initial question about the dashboard light, the mobile device 110 with the installed app may initially use OBD data to determine which dashboard light is currently lit by requesting a warning light status using the DAT 120. For example, the initial communication 12 “What happen with the dashboard light?” may be processed using the NLP model 130 to derive keywords, and the keywords may be mapped to the scan tool function of requesting the warning light status. Alternatively, in some implementations the app may first check whether the initial question itself has suitable keywords (e.g., “dashboard light”) for selecting a suitable scan tool function, in which case the NLP model 130 may not be needed for this initial step. 
Upon receiving the OBD data from the vehicle 10 indicating that the check engine light is ON, the app may perform further processing to provide additional information to the user and come up with a list of relevant scan tool functions as shown in FIG. 4A. Advantageously, the system 100 may derive this explanatory information and proposed scan tool functions by supplementing the user's initial communication 12 with the actual vehicle data indicating that the check engine light is on, processing the resulting query using the NLP model 130, deriving keyword(s) from the NLP model output, and communicating with one or more server(s) 140 to retrieve the information and proposed functions from one or more diagnostic databases 150.")
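For illustration of the mechanism described in the passage quoted above (determining which warning light is lit from OBD data and presenting an explanation to the user), a minimal sketch follows. The status names and messages are hypothetical assumptions, not taken from the Pham reference:

```python
# Illustrative sketch only: mapping an OBD warning-light status to a
# user-facing diagnostic message, as the quoted passage describes.
# The status keys and message strings below are hypothetical examples.
WARNING_LIGHT_MESSAGES = {
    "CHECK_ENGINE": "Your Check Engine light is ON, which means a potential "
                    "problem with the engine or emissions system",
    "TPMS": "Your tire pressure warning light is ON",
}

def describe_warning_light(status: str) -> str:
    """Return a user-facing message for a lit dashboard warning light."""
    return WARNING_LIGHT_MESSAGES.get(status, "Unrecognized warning light")
```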
Claims 5-7, 9, 10, 15-17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent 11915534 B1 (Pham; Phuong) in view of US Patent Application Publication 20050091031 A1 (Powell, Kevin R.) in further view of US Patent 10074381 B1 (Cowburn; Piers).
Claims 5 and 15
Regarding Claims 5 and 15, Pham in view of Powell does not explicitly teach all of: “The method of claim 4, wherein the labeling of the one or more onomatopoeias or the one or more morphemes comprises: identifying one or more similar keywords associated with the one or more onomatopoeias extracted from pre-stored reference data; and labeling the one or more onomatopoeias with a representative keyword corresponding to the one or more similar keywords associated with the one or more onomatopoeias extracted from the pre-stored reference data.”
However, Cowburn teaches
5. The method of claim 4, wherein the labeling of the one or more onomatopoeias or the one or more morphemes comprises:
identifying one or more similar keywords associated with the one or more onomatopoeias extracted from pre-stored reference data; and
(Col 14 lines 41- 49 "Operation 708 may be performed by the transcription module 608. At operation 708, the transcription module 608 transcribes the speech to a text string. The transcription module 608 may reside within a client device 102, performing the transcription of the speech to text at the client device 102 itself, while in other example embodiments, the transcription module 608 may reside within a server system, remote from the client device 102, and delivering the transcribed speech to the client device 102."
col 16 lines 60-67 "Operation 904 may be performed by the detection module 604. At operation 604, the detection module 604 compares the non-verbal sound to an onomatopoeia library that includes a list of onomatopoeic words. For example, the detection module 604 may record a wave form representative of the non-verbal sound and compare the wave form to a list of onomatopoeic words with corresponding wave forms in the onomatopoeia library.")
labeling the one or more onomatopoeias with a representative keyword corresponding to the one or more similar keywords associated with the one or more onomatopoeias extracted from the pre-stored reference data.
(Col 17 lines 0-8 "Operation 906 may be performed by the detection module 604. At operation 906, the detection module 604 identifies an appropriate onomatopoeia from the onomatopoeia library based on the non-verbal sounds (e.g., the wave form representative of the non-verbal sound). In some example embodiments, the onomatopoeia library may include a list of graphical elements representative of their corresponding onomatopoetic word (e.g., an explosion for “boom”).")
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pham in view of Powell to incorporate the teachings of Cowburn to provide “The method of claim 4, wherein the labeling of the one or more onomatopoeias or the one or more morphemes comprises: identifying one or more similar keywords associated with the one or more onomatopoeias extracted from pre-stored reference data; and labeling the one or more onomatopoeias with a representative keyword corresponding to the one or more similar keywords associated with the one or more onomatopoeias extracted from the pre-stored reference data.” Doing so would distinguish general sound from speech sound, as recognized by Cowburn (Col 2 lines 31-41).
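For illustration of the labeling step at issue — selecting a representative keyword from pre-stored reference data, e.g. by shortest edit distance as recited in the claims — a minimal sketch follows. The reference keywords are hypothetical examples and form no part of the cited art:

```python
# Illustrative sketch only: labeling an extracted onomatopoeia with the
# representative keyword from pre-stored reference data having the
# shortest edit distance to it. Reference keywords are hypothetical.
def edit_distance(a: str, b: str) -> int:
    """Levenshtein edit distance via single-row dynamic programming."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[len(b)]

def label_onomatopoeia(word: str, reference: list[str]) -> str:
    """Pick the reference keyword closest to `word` by edit distance."""
    return min(reference, key=lambda kw: edit_distance(word, kw))
```

For example, an extracted onomatopoeia “clonk” would be labeled with the reference keyword “clunk” (edit distance 1) rather than “squeal” or “hiss”.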
Claims 6 and 16
Regarding Claims 6 and 16, Cowburn furthermore teaches
6. The method of claim 5, wherein the labeling of the one or more onomatopoeias or the one or more morphemes comprises:
generating a set of the one or more morphemes extracted from the pre-processed input data by the text mining;
(Col 3 lines 8- 13 "Operation 708 may be performed by the transcription module 608. At operation 708, the transcription module 608 transcribes the speech to a text string. The transcription module 608 may reside within a client device 102, performing the transcription of the speech to text at the client device 102 itself, while in other example embodiments, the transcription module 608 may reside within a server system, remote from the client device 102, and delivering the transcribed speech to the client device 102."
Col 3 lines 43-52 "For example, in some example embodiments, the augmented reality system may parse the text string of the transcribed speech into individual words, determine a definition of the set of words, and compare the definitions of the words to an emotional effect library. Based on the comparison, the augmented reality system may determine an intended emotional effect of the speech. For example, the text string may include a set of words that are typically associated with happiness, either literally, or based on context.")
identifying one or more similar keywords, related to the one or more morphemes included in the set of the extracted one or more morphemes, in the pre-stored reference data; and
(Col 16 lines 14-25 "In further embodiments, the detection module 604 may determine the emotional effect of the speech based on definitions of keywords in the context of the speech. For example, the detection module 604 may access the transcribed text string of the speech and determine definitions for each word of the text string. The detection module 604 may thereby compare the definitions of the speech to an emotional effect library, wherein the emotional effect library includes a set of emotions and corresponding words and definitions. The detection module 604 may thereby select an appropriate emotional effect based on the words and/or definitions.")
labeling the one or more morphemes with a representative keyword corresponding to the one or more similar keywords related to the one or more morphemes included in the set of the extracted one or more morphemes.
(Col 16 lines 14-25 "In further embodiments, the detection module 604 may determine the emotional effect of the speech based on definitions of keywords in the context of the speech. For example, the detection module 604 may access the transcribed text string of the speech and determine definitions for each word of the text string. The detection module 604 may thereby compare the definitions of the speech to an emotional effect library, wherein the emotional effect library includes a set of emotions and corresponding words and definitions. The detection module 604 may thereby select an appropriate emotional effect based on the words and/or definitions.")
See claims 5 and 15 for rationale.
Claims 7 and 17
Regarding Claims 7 and 17, in the combination of Pham in view of Powell in further view of Cowburn, Powell furthermore teaches
7. The method of claim 6, further comprising:
after the labeling of the one or more onomatopoeias or the one or more morphemes, updating the pre-stored reference data with the labeled one or more onomatopoeias or the labeled one or more morphemes.
(Paragraph 45 "In some embodiments, lexicon construction and update module 500 comprises pre-processing module 504, which generates vocabulary or word list 510 of words to be entered into lexicon 308, 540 for a particular language. Word list 510 can also be a complete list of all words to be initially entered in lexicon 308, 540. Alternately, word list 510 can comprises new words to be added to lexicon 308, 540 in order to augment or update lexicon 308, 540.")
See claim 1 for rationale.
Claims 9 and 19
Regarding Claims 9 and 19, Pham in view of Powell in further view of Cowburn teaches all the limitations of claims 7 and 17. Furthermore, Pham teaches:
9. The method of claim 7, wherein the determining of the quality of the vehicle based on the labeling of the one or more onomatopoeias or the one or more morphemes comprises:
determining the quality of the vehicle based on the representative keyword labeled on the one or more onomatopoeias associated with noise generated in or by the vehicle.
(Col 5 lines 60-67 and col 6 0-41 "In some cases, the step of processing the user's communication 12 using the NLP model 130 may include supplementing the communication 12 with a context related to the vehicle 10. Given that the NLP model 130 may be a general-purpose model that is not designed specifically for dialogue about vehicles, it may be especially advantageous for the system 100 to provide a context for questions that would otherwise not be recognizable as having to do with a vehicle, such as “What's that clicking sound?” or “What does this light mean?” In a relatively simple embodiment, for example, the mobile device 110 (e.g., under the control of the app) may simply append a phrase such as “in a vehicle” or “for vehicle diagnostics” to the user's communication 12. Thus, “What's that clicking sound?” or “What does this light mean?” may become “What's that clicking sound in a vehicle?” or “What does this light mean in a vehicle?” A context like this, which may have universal applicability for any envisioned communication 12 of the user, may be redundant in some cases (e.g., “The car won't start” becomes “The car won't start in a vehicle”) but nondetrimental, while providing important information to the NLP model 130 in many cases where the context would otherwise not be clear. More specific contexts are also contemplated, such as where the context includes identifying information of the particular vehicle 10 in question. Identifying information of the vehicle 10 may include, for example, year/make/model/engine/trim information, which may be stored in a user profile associated with the app (e.g., locally or on a server), derived from a VIN retrieved from the vehicle 10 by the DAT 120, or, in some cases, deduced by the system 100 from other diagnostic data of the vehicle 10. 
By providing identifying information of the particular vehicle 10 to the NLP model 130 together with the user's communication 12, it may be appreciated that the resulting NLP model output may be more relevant. Other types of contexts that may be used to supplement the user's communication 12 are also contemplated, such as a previous communication 12 of the user related to the vehicle 10 or a state (e.g., ON/OFF, vehicle speed, gear) of the vehicle 10, which may be derived from diagnostic data retrieved from the vehicle 10 by the DAT 120, for example. In some cases, a context may include environmental data derived from one or more sensors (e.g., a vapor sensor) that may be present in or near the vehicle 10 including sensors or other components of the mobile device 110. For example, an accelerometer built into the mobile device 110 may indicate that acceleration is not smooth or that the vehicle, potentially pointing to a fuel line function or that the vehicle 10 is experiencing a bumpy ride, suggesting a tire pressure issue.")
Claims 10 and 20
Regarding Claims 10 and 20, Pham in view of Powell in further view of Cowburn teaches all the limitations of claims 7 and 17. Furthermore, Pham teaches:
10. The method of claim 7, wherein the determining of the quality of the vehicle based on the labeling of the one or more onomatopoeias or the one or more morphemes comprises:
determining the quality of the vehicle related to an operation mode, performance, and other errors included oil leakage of the vehicle based on the representative keyword labeled on the one or more morphemes.
(col 7 lines 17-67 "With the keyword(s) having been extracted from the NLP model output, the operational flow of FIG. 2 may continue with instructing the DAT 120 based on the keyword(s) (step 240). The mobile device 110 (e.g., under the control of the app) may, for example, communicate with the DAT 120 via a wired or wireless connection to instruct the DAT 120 to perform a selected function based on the keyword(s). The available functions may depend on the manufacturer, with functions being added or optimized from time to time, and may include, by way of example, any or all of the following scan tool functions or variants thereof: read and clear OBD fault codes, read freeze frame, read and clear original equipment manufacturer (OEM) fault codes, read OBD live data, read OEM live data, monitor status, drive cycle procedure, service check (e.g., warning light status, brake pad check, battery status, oil life status, transmission temperature, etc.), tire pressure monitoring system (TPMS) pressure and status, active test, special function, workshop tools, etc. Selection of scan tool functions based on the keyword(s) may be done by mapping the keyword(s) to the functions. For example, a set of possible functions indexed by keywords (e.g., in the form of a lookup table or decision tree) may be stored locally on the mobile device 110 and managed by the app or may be stored remotely and accessed by the app through communication with one or more servers 140 (e.g., over the Internet via a cellular or WiFi network, for example). By using the keyword(s) as an index, the app may select one or more functions. For example, the keywords “battery,” “low battery,” “vehicle won't start,” “battery is depleted,” etc. may indicate that the DAT 120 should perform a function to check battery status, while the keywords “check engine,” “check engine light,” etc. may indicate that the DAT 120 should perform function(s) to read one or more types of diagnostic data from the vehicle 10. 
Additional example keywords might include “ABS code,” “read tire pressure,” “test battery,” “check brake pad,” “SRS light,” “ignition,” which may be matched or otherwise mapped to related scan tool functions. Instead of or in addition to using the keywords as an index, the keywords may be input to a machine learning model whose output indicates the one or more function(s), with the machine learning model having been trained on sets of keywords (and possibly associated diagnostic data) to generate a relevant set of function(s). In this way, artificial intelligence (AI) may be used to associate various keywords or combinations of keywords (which may be representative of vehicle-related concepts) with relevant scan tool functions. Environmental and other context data as described above, such as contemporaneously captured sensor data of the mobile device 110, may also be input to the machine learning model to improve the relevance of the determined function(s).")
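For illustration of the keyword-indexed lookup described in the passage quoted above (mapping derived keywords to scan tool functions, e.g. via a lookup table), a minimal sketch follows. The table entries are hypothetical examples drawn from keywords mentioned in the quote:

```python
# Illustrative sketch only: a keyword-indexed lookup table selecting
# scan tool functions, as the quoted Pham passage describes.
# Entries are hypothetical examples.
KEYWORD_TO_FUNCTION = {
    "battery": "check battery status",
    "check engine": "read diagnostic trouble codes",
    "tire pressure": "read TPMS pressure and status",
    "brake pad": "brake pad check",
}

def select_functions(keywords: list[str]) -> list[str]:
    """Map extracted keywords to scan tool functions, skipping unknowns."""
    selected = []
    for kw in keywords:
        fn = KEYWORD_TO_FUNCTION.get(kw.lower())
        if fn and fn not in selected:
            selected.append(fn)
    return selected
```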
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALI M HASSAN whose telephone number is (571)272-5331. The examiner can normally be reached Monday - Friday 8:00am - 4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras Shah can be reached at (571)270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALI M HASSAN/Examiner, Art Unit 2653
/Paras D Shah/Supervisory Patent Examiner, Art Unit 2653 03/16/2026