Prosecution Insights
Last updated: April 19, 2026
Application No. 18/826,649

APPARATUS AND METHOD WITH DEFECT-CAUSE RECOMMENDING

Non-Final OA — §101, §103, §112
Filed: Sep 06, 2024
Examiner: HOANG, SON T
Art Unit: 2169
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% — above average (754 granted / 905 resolved; +28.3% vs TC avg)
Interview Lift: +35.0% — strong (resolved cases with vs. without interview)
Typical Timeline: 3y 1m avg prosecution; 21 currently pending
Career History: 926 total applications across all art units

Statute-Specific Performance

§101: 19.7% (-20.3% vs TC avg)
§103: 48.2% (+8.2% vs TC avg)
§102: 11.7% (-28.3% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)

Tech Center averages are estimates • Based on career data from 905 resolved cases
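The headline figures above reduce to simple arithmetic. A minimal sketch reproducing them (the function names are illustrative, and the TC average is inferred from the stated +28.3% delta rather than given directly in the report):

```python
# Hypothetical helpers reproducing the headline examiner statistics shown above.
# The inputs (754 granted of 905 resolved) come from the report; the assumed
# TC 2100 average (~55%) is back-computed from the stated +28.3% delta.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def vs_average(rate: float, tc_average: float) -> float:
    """Signed difference against the Tech Center average estimate."""
    return rate - tc_average

rate = allow_rate(754, 905)        # ~83.3%, shown as 83% in the report
delta = vs_average(rate, 55.0)     # ~+28.3% vs the assumed TC average
print(f"{rate:.1f}% career allow rate, {delta:+.1f}% vs TC avg")
```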

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status

The instant application, No. 18/826,649, has claims 1-20 pending.

Priority / Filing Date

Applicant's claim for priority of foreign application No. KR 10-2024-0048829 (filed on April 11, 2024) is acknowledged. However, it appears that there is no certified copy of the foreign application to perfect the priority claim. Thus, the claim for priority is not effective. For the purpose of examination, the effective filing date is September 6, 2024.

Abstract

The abstract of the disclosure is objected to due to the use of implied language. Note that in the abstract, the language should be clear and concise and should not repeat information given in the title. It should avoid phrases which can be implied, such as "The disclosure concerns," "The disclosure defined by this invention," "The disclosure describes," etc. See MPEP § 608.01(b). In the abstract, Applicant recites "An apparatus and method for recommending a defect-causing process are disclosed" on line 1. This recitation uses implied language and/or repeats the title. Revision and/or correction are required. One example is as follows: "An apparatus for recommending a defect-causing process includes a communication interface…"

Drawings

The drawings filed on September 6, 2024 are acceptable for examination purposes.

Information Disclosure Statement

As required by MPEP § 609(C), the Applicant's submission of the Information Disclosure Statement filed on September 6, 2024 is acknowledged by the Examiner, and the cited references have been considered in the examination of the claims now pending. As required by MPEP § 609(C)(2), a copy of the PTOL-1449 initialed and dated by the Examiner is attached to the instant Office action.
Claim Objections

Claim 5 is objected to for reciting "…match the suspect facility and the suspected chamber…" It is believed "…match the suspected facility and the suspected chamber…" is more appropriate. Correction and/or revision are required.

Claim 8 is objected to for reciting "…calculate a probabilities of candidates of…" It is believed "…calculate a probability of candidates of…" is more appropriate. Correction and/or revision are required.

Claim 16 is objected to for reciting "determine a query vector…" as the third limitation, whereas it is believed "determining a query vector…" is more appropriate. Further, there is no antecedent basis for "the information" in the limitation "…by encoding the information related to the defect…" Correction and/or revision are required.

Claim 19 is similarly objected to based on the reason(s) presented above in claim 5.

Claim 20 is objected to for having no antecedent basis for "the similar case" in the recitation "…searching for the similar case, the similar case comprising…" Correction is required.

35 USC § 112(f) - Claim Interpretations

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

Claims 2-3, 5-7, 9, and 11-12 have been interpreted under 35 U.S.C. 112(f) because each claim uses a generic placeholder coupled with functional language without reciting sufficient structure to achieve the function. Furthermore, the generic placeholder is not preceded by a structural modifier.

In claim 2: "an encoder module configured to…," "a matching module configured to…," "a paraphrasing module configured to…" (it is noted that a large language model is not interpreted under 35 U.S.C.
112(f) since it conveys a specific kind of software structure (e.g., a trained neural network model) known to a person skilled in the art).

In claim 3: "a preprocessor configured to…" (it is noted that a first encoder and a second encoder are not interpreted under 35 U.S.C. 112(f) since they convey known software/hardware structure to a person skilled in the art (e.g., a text or image encoder) and the specification further identifies their roles within a transformer-based encoder network).

In claim 5: "the matching module configured to…"

In claim 6: "an adapter configured to…," "a retriever configured to…," "a masking module configured to…"

In claim 7: "the adapter configured to…"

In claim 9: "the retriever configured to…"

In claim 11: "a data frame module configured to…"

In claim 12: "the paraphrasing module configured to…"

If applicant wishes to provide further explanation or dispute the examiner's interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action. If applicant does not intend to have the claim limitation(s) treated under 35 U.S.C. 112(f), applicant may amend the claim(s) so that they will clearly not invoke 35 U.S.C. 112(f), or present a sufficient showing that the claims recite sufficient structure, material, or acts for performing the claimed function to preclude application of 35 U.S.C. 112(f). For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance With 35 U.S.C. 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 2-3, 5-7, 9, and 11-12 are rejected under 35 U.S.C. 112(b) since their limitations invoke 35 U.S.C. 112(f) but fail to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function in the written description. Therefore, the claims are indefinite and are rejected under 35 U.S.C. 112(b).

In claim 2:

For the encoder module: while the specification identifies that it includes a preprocessor, a first encoder, and a second encoder, the description does not disclose a sufficiently definite algorithm that performs, in a step-by-step manner for a special-purpose computer, the full claimed function of extracting a first feature corresponding to a first modality and/or a second feature corresponding to a second modality by encoding the information related to the defect phenomenon based on the identification information; instead, the disclosure is a high-level, results-oriented description.

For the matching module: the specification generically describes an adapter, a retriever, and a masking module, but fails to disclose a specific, complete algorithm that, given the encoded features, searches for the similar case and outputs a suspected process, facility, and chamber that match a query vector, beyond naming generic attention, similarity, and masking concepts. Such disclosure is insufficient as a corresponding structure (algorithm) under 35 U.S.C. 112(f).

For the paraphrasing module: the specification merely states that it generates a prompt based on the user's inquiry and similar cases, without providing a detailed algorithmic procedure for how the input fields are transformed into the particular prompt.
Generic examples of prompts in a high-level description of paraphrasing are not sufficient corresponding structure (algorithm) for a computer-implemented means-plus-function element. Because the specification fails to disclose adequate corresponding structure (algorithms) for these 35 U.S.C. 112(f) limitations, the scope of claim 2 cannot be determined with reasonable certainty. This renders the claim indefinite under 35 U.S.C. 112(b).

In claim 3, the encoder module is further recited to comprise a preprocessor, but this does not cure the lack of a clearly disclosed algorithm in the specification for the encoder module's overall claimed function of extracting the first and second features by encoding defect-related information based on identification information. Even though the preprocessor is named as a component, the specification does not provide a stepwise algorithm sufficient to define the scope of the encoder module with reasonable certainty. Thus, claim 3 remains indefinite under 35 U.S.C. 112(b).

In claim 5, the matching module is recited to be configured to match the suspect facility and the suspected chamber, corresponding to the suspected process, with the query vector based on production information. When interpreted under 35 U.S.C. 112(f), this claimed function lacks sufficient corresponding structure (algorithm) in the specification. The description mentions that the matching module may use production information to exclude some cases and then associate facilities and chambers, but does not set forth a concrete algorithm, such as specific selection rules, thresholding, or mapping operations, that a special-purpose computer must perform. Thus, claim 5 remains indefinite under 35 U.S.C. 112(b).

In claim 6, the matching module is recited to comprise an adapter configured to…, a retriever configured to…, and a masking module configured to implement the claim limitations.
For the adapter: the specification generically describes a feed-forward network and metric learning, but does not provide a complete algorithm sufficient to perform the full claimed function of converting the first and second features into the query vector, including the specific steps and data flows required in a special-purpose machine.

For the retriever: although the description references scaled dot-product attention and nonparametric classification, it does so at a high level and without providing the full set of algorithmic operations (inputs, transformations, outputs) that unambiguously correspond to the claimed function of searching sample cases to find the similar case and deriving the suspected process.

For the masking module: the specification indicates that it masks some sample cases based on production information, but fails to explain the algorithmic criteria or rules for such masking, e.g., how production information is evaluated and how cases are selected or removed, which is essential to provide corresponding structure for the means-plus-function masking module.

Because the specification does not disclose sufficient corresponding structure (algorithms) for these 35 U.S.C. 112(f) elements, the scope of claim 6 cannot be determined with reasonable certainty and the claim is indefinite under 35 U.S.C. 112(b).

In claim 7, the adapter configured to… is recited with the same insufficiencies as presented in claim 6.

In claim 9, the retriever configured to… is recited with the same insufficiencies as presented in claim 6.

In claim 11, a data frame module configured to… is recited to collect the suspected process, the suspected facility, and the suspected chamber and convert the collected data into information in a standardized form.
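For context, the §112(f) rejections fault the specification for stopping at named concepts (attention, similarity, masking) rather than a step-by-step procedure. A concrete retrieval-and-masking algorithm of the kind the rejections describe as missing might look like the following sketch; every name, shape, and masking rule here is invented for illustration and is not taken from the application:

```python
# Illustrative only: scaled dot-product scoring of a query vector against
# sample-case vectors, masking of cases excluded by production information,
# and a softmax converting similarities into probabilities.
import numpy as np

def retrieve(query: np.ndarray, cases: np.ndarray, mask: np.ndarray) -> int:
    """Return the index of the most similar unmasked sample case."""
    d = query.shape[-1]
    scores = cases @ query / np.sqrt(d)        # scaled dot-product similarity
    scores = np.where(mask, scores, -np.inf)   # drop cases excluded by production info
    probs = np.exp(scores - scores.max())      # numerically stable softmax
    probs /= probs.sum()
    return int(np.argmax(probs))

cases = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])  # toy case vectors
query = np.array([0.9, 0.1])
mask = np.array([True, False, True])           # case 1 masked out
print(retrieve(query, cases, mask))            # selects case 0
```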
The specification refers to a data frame module and a standardized form but does not disclose any concrete algorithm describing how data are collected from the unmasked similar cases, how fields are mapped into the standardized form, or what rules govern the conversion. Without a specific algorithm for performing this standardization, the data frame module lacks adequate corresponding structure under 35 U.S.C. 112(f) and the scope of claim 11 cannot be determined with reasonable certainty. Therefore, claim 11 is indefinite under 35 U.S.C. 112(b).

In claim 12, the paraphrasing module configured to… is recited with the same insufficiencies as presented in claim 2.

Applicant may:

(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f);

(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:

(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C.
132(a)); or

(b) Stating on the record what corresponding structure, material, or acts, implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The claimed invention in claims 1-20 is directed to a judicial exception (i.e., an abstract idea) without significantly more.

Claims 1-20 pass step 1 of the 35 U.S.C. 101 analysis since each claim is either directed to an apparatus comprising one or more processors and memory (i.e., hardware components per [0143] of the instant specification and as known in the art), or to a method.

Claim 1 recites, in part, elements that are directed to an abstract idea ("Courts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person's mind." Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015)). Each claim recites the limitations of implementing a neural network model configured to search for a similar case related to the defect phenomenon by encoding information… The limitations, as drafted, describe a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components (e.g., mentally implementing a model with a mental search for desired data based on mentally analyzed/encoded information via mathematical concepts).
The core of this instant claim is the mental process of a human expert comparing a current defect to past defect history, which is automated by a computer. That is, other than reciting generic components (e.g., processor, memory, and computer-executable instructions), nothing in the claim precludes the limitations from being performed in the human mind per step 2A, prong 1, of the abstract idea analysis. Thus, the limitations are parts of a mental process.

Further, the claims recite the additional steps of receiv[ing] a user's inquiry… and generating a response to the user's inquiry…, which are extra-solution activities (per step 2A, prong 2, of the abstract idea analysis) that cannot be integrated into a practical application (e.g., the elements recite trivial steps that occurred or would occur after the mental process), since they merely recite processing inputs and generating results based on the processed inputs. Each of the additional limitations is no more than mere instructions to apply the exception using a generic computer component (e.g., processor, memory, and computer-executable instructions).

The extra-solution activities in step 2A, prong 2, are reevaluated in step 2B to determine whether each limitation is more than what is well-understood, routine, and conventional activity in the field. The background of the limitations does not provide any indication that the computer components (e.g., processor, memory, and computer-executable instructions) are anything other than off-the-shelf computer components. The Symantec, TLI, and OIP Techs court decisions cited in MPEP 2106.05(d)(II) indicate that mere receiving, generating, storing, determining, identifying, and transmitting of data over a network are well-understood, routine, and conventional functions when claimed in a merely generic manner (as they are here). Accordingly, a conclusion that the claims recite well-understood, routine, conventional (WURC) activity is supported under Berkheimer Option 2.
For these reasons, there is no inventive concept in this claim; thus, the claim is ineligible.

Claim 2 further recites the steps of extract[ing] a first/second feature…by encoding the information… and search[ing] for the similar case…, which can be implemented in a human mind and/or with the aid of pen and paper similar to the analysis above (e.g., writing down on paper the first/second feature by mentally encoding the information, and mentally searching for the similar case). The other additional elements of generat[ing] the prompt… and generat[ing] the response… are extra-solution and WURC activities similar to the above analysis (e.g., generating outputs based on processed inputs). It is noted that the encoder module, matching module, paraphrasing module, and large language model are all merely functional software components, with no specific hardware improvements recited that could amount to more than the abstract idea itself. Thus, the claim is ineligible.

Claim 3 further recites the steps of convert[ing] the information…into a form… and extract[ing] the first/second feature…, which can be implemented in a human mind and/or with the aid of pen and paper similar to the analysis above (e.g., mentally converting the information into a form, and writing down on paper the first/second feature). It is noted that the preprocessor, first encoder, and second encoder are all merely functional software components, with no specific hardware improvements recited that could amount to more than the abstract idea itself. Thus, the claim is ineligible.

Claim 4 merely provides a definition: the encoder module comprises a transformer-based encoder network. It is noted that such a transformer-based encoder network is merely a functional software component, with no specific hardware improvements recited that could amount to more than the abstract idea itself. Thus, the claim is ineligible.
Claim 5 further recites a step of match[ing] the suspect facility…with the query vector…, which can be implemented in a human mind and/or with the aid of pen and paper similar to the analysis above (e.g., mentally comparing and matching the query vector with names of the suspect facility and the suspected chamber). It is noted that the matching module is merely a functional software component, with no specific hardware improvements recited that could amount to more than the abstract idea itself. Thus, the claim is ineligible.

Claim 6 further recites the steps of …convert[ing] the first feature…into a query vector, …search[ing] sample cases to find the similar case…, and deriv[ing] the suspected process by masking…, which can be implemented in a human mind and/or with the aid of pen and paper similar to the analysis above (e.g., mentally converting the first and second features into a query vector, mentally searching for similar cases, and writing down on paper the identifier of the suspected process with masked information). It is noted that the adapter, retriever, and masking module are all merely functional software components, with no specific hardware improvements recited that could amount to more than the abstract idea itself. Thus, the claim is ineligible.

Claim 7 further recites the steps of …fus[ing] the first feature and the second feature… and convert[ing] the fusion…into the query vector…, which can be implemented in a human mind and/or with the aid of pen and paper similar to the analysis above (e.g., mentally combining the first and second features, and mentally converting the combined features into a query vector). It is noted that the feed-forward network is merely a functional software component, with no specific hardware improvements recited that could amount to more than the abstract idea itself. Thus, the claim is ineligible.
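Claims 6-7, as characterized above, fuse two modality features and convert the fusion into a query vector via a feed-forward network. A generic illustration of such a fusion step follows; the concatenate-then-project design, the dimensions, and the random weights are all assumptions made for the example, not the application's disclosure:

```python
# Illustrative fusion step: concatenate a text feature and an image feature,
# then project through a single ReLU feed-forward layer to a query vector.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))            # projection: fused (4-d) -> query (3-d)

def fuse_and_project(text_feat: np.ndarray, image_feat: np.ndarray) -> np.ndarray:
    fused = np.concatenate([text_feat, image_feat])   # simple fusion by concatenation
    return np.maximum(fused @ W, 0.0)                 # one ReLU feed-forward layer

q = fuse_and_project(np.array([0.2, 0.8]), np.array([0.5, 0.1]))
print(q.shape)   # (3,)
```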
Claim 8 further recites a step of …calculat[ing] a probabilities of candidates… based on the feed-forward network is trained through an inductive bias…, which can be implemented in a human mind and/or with the aid of pen and paper similar to the analysis above (e.g., mentally calculating a probability of candidates based on mathematical concepts trained in the human mind using inductive bias). It is noted that the feed-forward network is merely a functional software component, with no specific hardware improvements recited that could amount to more than the abstract idea itself. Thus, the claim is ineligible.

Claim 9 further recites the steps of …calculat[ing] a similarity… and search[ing] for the suspected process…which converts the similarity into probabilities…, which can be implemented in a human mind and/or with the aid of pen and paper similar to the analysis above (e.g., mentally calculating a similarity, and mentally searching for the suspected process, wherein the similarity can be mentally converted into probabilities). Thus, the claim is ineligible.

Claim 10 merely defines …the retriever is trained based on cross-entropy…, which can be implemented in a human mind and/or with the aid of pen and paper similar to the analysis above (e.g., mentally training based on mathematical concepts including cross-entropy). Thus, the claim is ineligible.

Claim 11 further recites a step of …collect[ing]…corresponding to unmasked similar case…, which is an extra-solution and WURC activity similar to the above analysis (e.g., data collection occurring before or after the mental process). Further, the step of …convert[ing] the collected…in a standardized form… can be implemented in a human mind and/or with the aid of pen and paper similar to the analysis above (e.g., mentally converting the collected data and writing down on paper the collected data in a standardized format). Thus, the claim is ineligible.
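Claims 9-10, as characterized above, convert similarities into probabilities and train the retriever with cross-entropy. The standard formulation of that objective can be sketched as follows; this is the generic softmax cross-entropy, not the application's disclosed method, and the similarity values are invented:

```python
# Generic softmax cross-entropy over candidate-case similarities: the
# similarities become a probability distribution, and the loss is the
# negative log-probability assigned to the known correct case.
import math

def cross_entropy(similarities: list[float], target: int) -> float:
    """Negative log-probability of the correct candidate under softmax."""
    m = max(similarities)                          # subtract max for stability
    exps = [math.exp(s - m) for s in similarities]
    total = sum(exps)
    return -math.log(exps[target] / total)

loss = cross_entropy([2.0, 0.5, 0.1], target=0)
print(f"loss = {loss:.4f}")
```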
Claim 12 further recites a step of …generat[ing] the prompt…based on the user's inquiry…, which is an extra-solution and WURC activity similar to the above analysis (e.g., generating output based on processed input). It is noted that the large language model is merely a functional software component, with no specific hardware improvements recited that could amount to more than the abstract idea itself. Thus, the claim is ineligible.

Claim 13 merely provides definitions: the first modality comprises text… and the second modality comprises image… Thus, the claim is ineligible.

Claim 14 merely provides definitions: the information related to the defect phenomenon comprises at least one piece of text… and image information comprising a defect image… Thus, the claim is ineligible.

Claim 15 merely provides a definition for the response corresponding to the user's inquiry… Thus, the claim is ineligible.

Claims 16-20 are also ineligible for similar reasons to those presented above for claims 1-15.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 12-13, 15-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Shukla et al. (Pub. No. US 2025/0245813, filed on January 25, 2024; hereinafter Shukla) in view of Cheng et al. (Pub. No. US 2018/0299877, published on October 18, 2018; hereinafter Cheng).
Regarding claim 1, Shukla clearly shows and discloses an apparatus for recommending a defect-causing process, the apparatus comprising: one or more processors; and memory storing instructions configured to cause the one or more processors (Figure 1) to: receive a user's inquiry comprising identification information related to a defect phenomenon occurring in a target process (Step 700 includes obtaining image data of one or more device components and user input pertaining to at least a portion of the one or more device components. Obtaining user input pertaining to at least a portion of the one or more device components includes obtaining at least one user-provided description of at least one issue associated with the one or more device components, [0071]. Also, it is to be appreciated that the term “user” in this context and elsewhere herein is intended to be broadly construed so as to encompass, for example, human, hardware, software or firmware entities, as well as various combinations of such entities, [0018]); and implement a neural network model configured to search for a similar case related to the defect phenomenon (processing at least a portion of the image data and at least a portion of the user input using one or more deep learning-based image classification techniques can include identifying one or more items of historical image data comprising at least a predetermined level of similarity to one or more portions of the obtained image data, [0072]) by encoding information related to the defect phenomenon based on the identification information (searching historical data and retrieving corresponding and/or relevant historical repair records and/or defect resolution steps, including both text data (e.g., text logs) and image data if available, [0046]. 
Such text data and image data related to defect resolution can then be input to and/or processed by at least one multimodal LLM, which analyzes such input data to confirm whether the device in question and/or components thereof are defective, as well as to provide details about the defect(s) and repair instructions when necessary, [0047]) and generating a response to the user's inquiry (Based on the information provided, here is an analysis of the issue with Image (I) and the possible resolution…, [0067]) by using a prompt generated based on the user's inquiry and the similar case (The prompt can also include the following: “Similar Image (IS) is the most similar historical image to Image (I), and the user complaint for Similar Image (IS) included a statement that ‘My laptop screen is constantly flickering with horizontal lines appearing across the display, and it has become difficult to use the laptop due to this persistent visual disturbance.’ The resolution provided in the past for this Similar Image (IS) includes a report that states ‘Inspected and reseated connections first with no effect; Checked for overheating issues also; Reinstalled graphics drivers, which helped in eliminating the flickering problem.’”), [0060]-[0065]). Cheng then alternatively or additionally discloses implementing a neural network model configured to search for a similar case related to the defect phenomenon by encoding information related to the defect phenomenon based on the identification information (In block 310, construct, using an invariant model, a fault fingerprint based on a fault event. In block 320, derive, using dynamic time warping and at least one convolution, a similarity matrix between the fault fingerprint and one or more historical representative fingerprints. 
In block 330, determine a corrective action correlated to the fault fingerprint, from among a plurality of candidate corrective actions associated with the one or more historical representative fingerprints, based on a unity similarity obtained by processing the similarity matrix, [0030]).

It would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teachings of Cheng with the teachings of Shukla for the purpose of constructing a fault signature based on a detected fault event and determining a matching corrective action correlated to the fault signature, to mitigate undesirable outcomes associated with the detected fault event.

Regarding claim 2, Shukla further discloses the neural network model comprises: an encoder module configured to extract a first feature corresponding to a first modality and/or a second feature corresponding to a second modality by encoding the information related to the defect phenomenon based on the identification information (Such text data and image data related to defect resolution can then be input to and/or processed, [0047]. Images converted into corresponding vectors of historical defective displays, [0053]. Determining and/or identifying user interaction logs and/or defect resolution logs pertaining to the Similar Image (IS), and denoting user interaction logs and/or defect resolution logs as Similar Logs (LS), [0054]); a matching module configured to search for the similar case, the similar case comprising a suspected process, a suspected facility and/or a suspected chamber that match a query vector, through the query vector, based on the first and/or second feature (Similar Logs (LS) might include the following: "Inspected and reseated connections first with no effect, then checked for overheating issues before reinstalling graphics drivers, which helped in eliminating the flickering problem.", [0055].
Historical repair logs and/or resolution steps associated with a dead pixel and/or horizontal line issue similar to that depicted in FIG. 6 can include a historical resolution summarization as follows: “The description suggests that the resistance temperature detector (RTD) in the liquid crystal display (LCD) has dead pixels which may be caused by software issues or manufacturing defects. To fix the issue, I ran an LCD built-in self-test (BIST) test and checked for any software updates or patches that may help resolve the problem, and it appears that a display drivers update fixed the issue.”, [0059]); a paraphrasing module configured to generate the prompt based on the user's inquiry and the similar case (The prompt can also include the following: “Similar Image (IS) is the most similar historical image to Image (I), and the user complaint for Similar Image (IS) included a statement that ‘My laptop screen is constantly flickering with horizontal lines appearing across the display, and it has become difficult to use the laptop due to this persistent visual disturbance.’, [0060]-[0062]); and a large language model configured to generate the response corresponding to the user's inquiry by using the prompt (a given multimodal LLM can process the above prompt and generate an output that includes the following: “Based on the information provided, here is an analysis of the issue with Image (I) and the possible resolution: The user describes the problem as ‘unusual colors and lines' that make it difficult to use the monitor. This issue is similar to the problem described in the historical image Similar Image (IS) as ‘unusual horizontal lines that will not go away.’ For Similar Image (IS), the issue was diagnosed as the RTD-LCD having dead pixels and was initially described as ‘unusual horizontal lines that will not go away.’, [0067]). 
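The Shukla passages cited in this rejection describe converting defect images into vectors and keeping historical matches whose cosine similarity exceeds a threshold ([0044]). A minimal illustration of that matching step, with the embedding stage (e.g., VGG16) omitted and all vector values invented:

```python
# Cosine-similarity matching as Shukla's paragraph [0044] describes it:
# compare a query vector against historical vectors and keep those above
# a predetermined threshold. Vectors here stand in for image embeddings.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def matches_above(query, history, threshold=0.9):
    """Indices of historical vectors whose similarity exceeds the threshold."""
    return [i for i, h in enumerate(history) if cosine(query, h) > threshold]

history = [[1.0, 0.0], [0.0, 1.0], [0.95, 0.3]]   # toy historical embeddings
query = [1.0, 0.1]
print(matches_above(query, history))   # → [0, 2]
```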
Regarding claim 3, Shukla further discloses the encoder module comprises at least one of: a preprocessor configured to convert the information related to the defect phenomenon into a form for the encoding, based on the identification information; a first encoder configured to extract the first feature from the converted information related to the defect phenomenon; or a second encoder configured to extract the second feature from the converted information related to the defect phenomenon (searching historical data and retrieving corresponding and/or relevant historical repair records and/or defect resolution steps, including both text data (e.g., text logs) and image data if available, [0046]. Such text data and image data related to defect resolution can then be input to and/or processed by at least one multimodal LLM, which analyzes such input data to confirm whether the device in question and/or components thereof are defective, as well as to provide details about the defect(s) and repair instructions when necessary, [0047]). Regarding claim 4, Shukla further discloses the encoder module comprises a transformer-based encoder network (utilizing a pretrained model such as a pretrained convolutional neural network (e.g., VGG16) to transform at least one image of a defective device component (e.g., a defective display part) into at least one vector, [0044]. See [0040] for bidirectional encoder representations from transformers (BERT)). Regarding claim 5, Shukla further discloses the matching module is configured to match the suspected facility and the suspected chamber, corresponding to the suspected process (Similar Logs (LS), might include the following: “Inspected and reseated connections first with no effect, then checked for overheating issues before reinstalling graphics drivers, which helped in eliminating the flickering problem.”, [0055].
Historical repair logs and/or resolution steps associated with a dead pixel and/or horizontal line issue similar to that depicted in FIG. 6 can include a historical resolution summarization as follows: “The description suggests that the resistance temperature detector (RTD) in the liquid crystal display (LCD) has dead pixels which may be caused by software issues or manufacturing defects. To fix the issue, I ran an LCD built-in self-test (BIST) test and checked for any software updates or patches that may help resolve the problem, and it appears that a display drivers update fixed the issue.”, [0059]), with the query vector, based on production information (utilizing a pretrained model such as a pretrained convolutional neural network (e.g., VGG16) to transform at least one image of a defective device component (e.g., a defective display part) into at least one vector. This vector is then compared to historical defective device components, in vector format, using one or more similarity measures (e.g., cosine similarity), and images with a similarity measure above a predetermined threshold value are utilized, along with their corresponding repair and/or resolution log information and user interaction information, [0044]). Regarding claim 12, Shukla further discloses the paraphrasing module is configured to generate the prompt for the large language model based on the user's inquiry, the suspected process, the suspected facility, and/or the suspected chamber (below is an example prompt that can be generated and provided as input to a multimodal LLM: “You are a repair technician who fixes LCD issues in a repair depot and one of your objectives is to find a solution such that LCDs are not scrapped if it is possible to repair them. Consider Image (I), which is of a defective screen which was brought in for repair today by a user complaining of a significant and persistent visual distortion issue. 
The user describes the problem as unusual colors and lines that make it difficult to use the monitor.”, [0061]-[0065]). Regarding claim 13, Shukla further discloses the first modality comprises text information, and the second modality comprises image information (Such text data and image data related to defect resolution can then be input to and/or processed, [0047]). Regarding claim 15, Shukla further discloses the response corresponding to the user's inquiry comprises: a response reflecting the user's inquiry, the suspected process, the suspected facility, the suspected chamber, and/or the at least one similar case (a given multimodal LLM can process the above prompt and generate an output that includes the following: “Based on the information provided, here is an analysis of the issue with Image (I) and the possible resolution: The user describes the problem as ‘unusual colors and lines' that make it difficult to use the monitor. This issue is similar to the problem described in the historical image Similar Image (IS) as ‘unusual horizontal lines that will not go away.’ For Similar Image (IS), the issue was diagnosed as the RTD-LCD having dead pixels and was initially described as ‘unusual horizontal lines that will not go away.’, [0067]). Regarding claim 16, Shukla clearly shows and discloses a method of recommending a defect-causing process performed by one or more processors (Abstract), the method comprising: receiving a user's inquiry comprising identification information related to a defect phenomenon occurring in a target process (Step 700 includes obtaining image data of one or more device components and user input pertaining to at least a portion of the one or more device components. Obtaining user input pertaining to at least a portion of the one or more device components includes obtaining at least one user-provided description of at least one issue associated with the one or more device components, [0071].
Also, it is to be appreciated that the term “user” in this context and elsewhere herein is intended to be broadly construed so as to encompass, for example, human, hardware, software or firmware entities, as well as various combinations of such entities, [0018]); extracting a first feature corresponding to a first modality and/or a second feature corresponding to a second modality by encoding the information related to the defect phenomenon based on the identification information (Such text data and image data related to defect resolution can then be input to and/or processed, [0047]. Images converted into corresponding vectors of historical defective displays, [0053]. Determining and/or identifying user interaction logs and/or defect resolution logs pertaining to the Similar Image (IS), and denoting user interaction logs and/or defect resolution logs as Similar Logs (LS), [0054]); determining a query vector from the first and/or second feature (images converted into corresponding vectors of historical defective displays, [0053]); searching for a similar case comprising a suspected process, a suspected facility, and/or a suspected chamber, which matches the query vector (Similar Logs (LS) corresponding to Similar Image (IS) 500 might include a user complaint such as the following: “My laptop screen is constantly flickering with horizontal lines appearing across the display, and it has become nearly impossible to use the laptop due to this persistent visual disturbance.”, [0055]. Historical repair logs and/or resolution steps associated with a dead pixel and/or horizontal line issue similar to that depicted in FIG. 6 can include a historical resolution summarization as follows: “The description suggests that the resistance temperature detector (RTD) in the liquid crystal display (LCD) has dead pixels which may be caused by software issues or manufacturing defects.
To fix the issue, I ran an LCD built-in self-test (BIST) test and checked for any software updates or patches that may help resolve the problem, and it appears that a display drivers update fixed the issue.”, [0059]); generating a response to the user's inquiry (Based on the information provided, here is an analysis of the issue with Image (I) and the possible resolution…, [0067]) by using a prompt generated based on the user's inquiry and the similar case (The prompt can also include the following: “Similar Image (IS) is the most similar historical image to Image (I), and the user complaint for Similar Image (IS) included a statement that ‘My laptop screen is constantly flickering with horizontal lines appearing across the display, and it has become difficult to use the laptop due to this persistent visual disturbance.’ The resolution provided in the past for this Similar Image (IS) includes a report that states ‘Inspected and reseated connections first with no effect; Checked for overheating issues also; Reinstalled graphics drivers, which helped in eliminating the flickering problem.’”), [0060]-[0065]). Cheng then alternatively or additionally discloses: determining a query vector from the first and/or second feature (In block 310, construct, using an invariant model, a fault fingerprint based on a fault event. In block 320, derive, using dynamic time warping and at least one convolution, a similarity matrix between the fault fingerprint and one or more historical representative fingerprints, [0030]. Block 730 may feed in to block 740, with block 740 constructing the fault signature matrix. Block 740 may construct a temporal and spatial signature matrix by encoding which pair of components (x-axis) and at which time point is broken (y-axis).
Block 740 may transform the feature matrix to a feature vector by using either summing up the values over time dimension or a logic union over time dimension, [0048]); searching for a similar case which matches the query vector (In block 330, determine a corrective action correlated to the fault fingerprint, from among a plurality of candidate corrective actions associated with the one or more historical representative fingerprints, based on a unity similarity obtained by processing the similarity matrix, [0030]. Block 760 may feed into block 780, which can suggest the historical action, [0052]). It would have been obvious to a person of ordinary skill in the art at the time of the effective filing date to incorporate the teachings of Cheng with the teachings of Shukla for the purpose of constructing a fault signature based on a detected fault event and determining a matching corrective action correlated to the fault signature to mitigate undesirable outcomes associated with the detected fault event. Regarding claim 17, Shukla further discloses the extracting the first and/or second feature comprises: preprocessing to convert the information related to the defect phenomenon into a form for the encoding, based on the identification information; extracting the first feature from the converted information related to the defect phenomenon; and extracting the second feature from the converted information related to the defect phenomenon (searching historical data and retrieving corresponding and/or relevant historical repair records and/or defect resolution steps, including both text data (e.g., text logs) and image data if available, [0046].
Such text data and image data related to defect resolution can then be input to and/or processed by at least one multimodal LLM, which analyzes such input data to confirm whether the device in question and/or components thereof are defective, as well as to provide details about the defect(s) and repair instructions when necessary, [0047]). Regarding claim 18, Shukla further discloses the searching for the similar case comprises: converting the first and/or second feature into the query vector (images converted into corresponding vectors of historical defective displays, [0053]); and searching for the similar case, the similar case comprising the suspected process that matches the query vector (Similar Logs (LS) corresponding to Similar Image (IS) 500 might include a user complaint such as the following: “My laptop screen is constantly flickering with horizontal lines appearing across the display, and it has become nearly impossible to use the laptop due to this persistent visual disturbance.”, [0055]). Cheng then alternatively or additionally discloses: converting the first and/or second feature into the query vector (In block 310, construct, using an invariant model, a fault fingerprint based on a fault event. In block 320, derive, using dynamic time warping and at least one convolution, a similarity matrix between the fault fingerprint and one or more historical representative fingerprints, [0030]. Block 730 may feed in to block 740, with block 740 constructing the fault signature matrix. Block 740 may construct a temporal and spatial signature matrix by encoding which pair of components (x-axis) and at which time point is broken (y-axis). 
Block 740 may transform the feature matrix to a feature vector by using either summing up the values over time dimension or a logic union over time dimension, [0048]); and searching for the similar case, the similar case matching the query vector (In block 330, determine a corrective action correlated to the fault fingerprint, from among a plurality of candidate corrective actions associated with the one or more historical representative fingerprints, based on a unity similarity obtained by processing the similarity matrix, [0030]. Block 760 may feed into block 780, which can suggest the historical action, [0052]). Regarding claim 20, Shukla clearly shows and discloses a method of recommending a defect-causing process, the method performed by one or more processors (Abstract) and comprising: receiving a user's inquiry comprising identification information related to a defect phenomenon occurring in a target process (Step 700 includes obtaining image data of one or more device components and user input pertaining to at least a portion of the one or more device components. Obtaining user input pertaining to at least a portion of the one or more device components includes obtaining at least one user-provided description of at least one issue associated with the one or more device components, [0071]. Also, it is to be appreciated that the term “user” in this context and elsewhere herein is intended to be broadly construed so as to encompass, for example, human, hardware, software or firmware entities, as well as various combinations of such entities, [0018]); converting information related to the defect phenomenon comprised in a defect log into a form for encoding, the information related to the defect phenomenon obtained based on the identification information (searching historical data and retrieving corresponding and/or relevant historical repair records and/or defect resolution steps, including both text data (e.g., text logs) and image data if available, [0046].
Such text data and image data related to defect resolution can then be input to and/or processed by at least one multimodal LLM, which analyzes such input data to confirm whether the device in question and/or components thereof are defective, as well as to provide details about the defect(s) and repair instructions when necessary, [0047]); extracting a first feature corresponding to a first modality comprised in the converted information related to the defect phenomenon and/or a second feature corresponding to a second modality comprised in the converted information related to the defect phenomenon (Such text data and image data related to defect resolution can then be input to and/or processed, [0047]. Images converted into corresponding vectors of historical defective displays, [0053]. Determining and/or identifying user interaction logs and/or defect resolution logs pertaining to the Similar Image (IS), and denoting user interaction logs and/or defect resolution logs as Similar Logs (LS), [0054]); converting the first and/or second feature into a query vector (images converted into corresponding vectors of historical defective displays, [0053]); searching for the similar case, the similar case comprising a suspected process that matches the query vector (Similar Logs (LS) corresponding to Similar Image (IS) 500 might include a user complaint such as the following: “My laptop screen is constantly flickering with horizontal lines appearing across the display, and it has become nearly impossible to use the laptop due to this persistent visual disturbance.”, [0055]); matching a suspected facility and a suspected chamber, corresponding to the defect phenomenon (Similar Logs (LS), might include the following: “Inspected and reseated connections first with no effect, then checked for overheating issues before reinstalling graphics drivers, which helped in eliminating the flickering problem.”, [0055]. 
Historical repair logs and/or resolution steps associated with a dead pixel and/or horizontal line issue similar to that depicted in FIG. 6 can include a historical resolution summarization as follows: “The description suggests that the resistance temperature detector (RTD) in the liquid crystal display (LCD) has dead pixels which may be caused by software issues or manufacturing defects. To fix the issue, I ran an LCD built-in self-test (BIST) test and checked for any software updates or patches that may help resolve the problem, and it appears that a display drivers update fixed the issue.”, [0059]), based on production information corresponding to the similar case (utilizing a pretrained model such as a pretrained convolutional neural network (e.g., VGG16) to transform at least one image of a defective device component (e.g., a defective display part) into at least one vector. This vector is then compared to historical defective device components, in vector format, using one or more similarity measures (e.g., cosine similarity), and images with a similarity measure above a predetermined threshold value are utilized, along with their corresponding repair and/or resolution log information and user interaction information, [0044]); generating a prompt for a large language model based on the user's inquiry, the suspected process, the suspected facility, and/or the suspected chamber (The prompt can also include the following: “Similar Image (IS) is the most similar historical image to Image (I), and the user complaint for Similar Image (IS) included a statement that ‘My laptop screen is constantly flickering with horizontal lines appearing across the display, and it has become difficult to use the laptop due to this persistent visual disturbance.’ The resolution provided in the past for this Similar Image (IS) includes a report that states ‘Inspected and reseated connections first with no effect; Checked for overheating issues also; Reinstalled graphics drivers, 
which helped in eliminating the flickering problem.’”), [0060]-[0065]); and generating a response corresponding to the user's inquiry by using the prompt (Based on the information provided, here is an analysis of the issue with Image (I) and the possible resolution…, [0067]). Cheng then alternatively or additionally discloses: converting the first and/or second feature into a query vector (In block 310, construct, using an invariant model, a fault fingerprint based on a fault event. In block 320, derive, using dynamic time warping and at least one convolution, a similarity matrix between the fault fingerprint and one or more historical representative fingerprints, [0030]. Block 730 may feed in to block 740, with block 740 constructing the fault signature matrix. Block 740 may construct a temporal and spatial signature matrix by encoding which pair of components (x-axis) and at which time point is broken (y-axis). Block 740 may transform the feature matrix to a feature vector by using either summing up the values over time dimension or a logic union over time dimension, [0048]); searching for the similar case, the similar case matching the query vector (In block 330, determine a corrective action correlated to the fault fingerprint, from among a plurality of candidate corrective actions associated with the one or more historical representative fingerprints, based on a unity similarity obtained by processing the similarity matrix, [0030]. Block 760 may feed into block 780, which can suggest the historical action, [0052]). It would have been obvious to a person of ordinary skill in the art at the time of the effective filing date to incorporate the teachings of Cheng with the teachings of Shukla for the purpose of enhancing document search and retrieval using a multimodal detection model that combines features from input texts and images to identify one or more similar documents with features matching the input data. Claims 6-8 are rejected under 35 U.S.C.
103 as being unpatentable over Shukla in view of Cheng in view of Java et al. (Pub. No. US 2025/0005048, filed on June 30, 2023; hereinafter Java). Regarding claim 6, Java then discloses the matching module comprises: an adapter configured to convert the first feature and the second feature into the query vector (Extracting, by a plurality of encoders, the first multi-modal features from the query snippet and the second multi-modal features from the target document. As discussed, the document search system may include a feature extractor that includes multiple encoders, each corresponding to a different modality being encoded. In some embodiments, the plurality of encoders includes one or more of a text encoder, an image encoder, and a layout encoder, [0080]. As illustrated in FIG. 8, the method 800 includes an act 804 of combining, by a multi-modal snippet detection model, first multi-modal features from the query snippet and second multi-modal features from the target document to create a feature volume, [0081]); a retriever configured to search sample cases to find the similar case, the similar case comprising the suspected process that matches the query vector (the output of the co-attention and cross-attention modules are 2D vector representations (e.g., encoded representations) which are then combined to form a 3D feature volume. The feature volume can then be used to identify candidate snippets of the target document that match the query snippet. As discussed further below, embodiments use a new model architecture that enables the fusion of multi-modal inputs, which results in more accurate snippet detection in documents, [0021]); and a masking module configured to derive the suspected process by masking, based on production information, some of the sample cases (Similar snippets can be identified using a similarity criterion based on the edit distance (e.g., Levenshtein distance), [0041]).
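The edit-distance similarity criterion attributed to Java at [0041] is the standard Levenshtein distance. A minimal sketch follows; the normalization into a 0-to-1 similarity score is added purely for illustration and is not taken from the reference.

```python
def levenshtein(s, t):
    # Classic dynamic-programming edit distance (insert/delete/substitute).
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (cs != ct)))  # substitution
        prev = curr
    return prev[-1]

def edit_similarity(s, t):
    # Hypothetical normalization: 1.0 for identical strings, 0.0 when every
    # character must change.
    return 1.0 - levenshtein(s, t) / max(len(s), len(t), 1)
```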
It would have been obvious to a person of ordinary skill in the art at the time of the effective filing date to incorporate the teachings of Java with the teachings of Shukla, as modified by Cheng, for the purpose of enhancing document search and retrieval corresponding to input data using a multimodal detection model that combines features from texts and images to identify one or more similar documents with features matching the input data. Regarding claim 7, Java further discloses the adapter is further configured to fuse the first feature and the second feature through a feed-forward network and convert the fusion of the first and second features into the query vector (As shown in FIG. 4, feature fusion manager 110 includes co-attention module 409 and cross-attention module 411. Co-attention module 409 and cross-attention module 411 may be implemented as transformer networks which combine like features (e.g., in the co-attention module) and unlike features (e.g., in the cross-attention module). The resulting combined features are then fused to create a feature volume which can be used to predict matching snippets in the target document, [0050]). Regarding claim 8, Shukla further discloses the feed-forward network is trained through an inductive bias that reflects the knowledge of an expert in the target process (at least one deep learning-based image classification model (e.g., one or more convolutional neural network models) trained to identify the type(s) of defect in an input image (e.g., cracks, lines, pixelation, etc.), [0045].
It is clear that a convolutional neural network, such as VGG16, is a form of inductive bias trained based on prior knowledge to identify similar datasets), and configured to calculate probabilities of candidates of the suspected process (utilizing a pretrained model such as a pretrained convolutional neural network (e.g., VGG16) to transform at least one image of a defective device component (e.g., a defective display part) into at least one vector. This vector is then compared to historical defective device components, in vector format, using one or more similarity measures (e.g., cosine similarity), and images with a similarity measure above a predetermined threshold value are utilized, along with their corresponding repair and/or resolution log information and user interaction information, [0044]). Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Shukla in view of Cheng in view of Java and further in view of Bertrand et al. (Pub. No. US 2025/0371843, filed on February 23, 2023; hereinafter Bertrand). Regarding claim 9, Bertrand then discloses the retriever is configured to calculate a similarity between the query vector and the similar case based on a scaled dot-product attention (In the scaled dot-product attention module, the input includes queries and keys of dimension dk, and values of dimension dv. The scaled dot-product attention module 704 computes dot products of the query with all keys, and applies a softmax function to obtain weights on the values, [0080]-[0081]), and search for the suspected process through non-parametric classification, which converts the similarity into probabilities of suspected processes similar to the query vector (perform matching-based approaches that do not use a parametric classifier. Instead, these models perform pair-wise matching between the query and the support examples of each class to obtain class probabilities.
Inference is performed in a k-nearest-neighbor classification manner where k is an integer greater than or equal to 1. These models may be referred to as non-parametric and as matching based, [0089]). It would have been obvious to a person of ordinary skill in the art at the time of the effective filing date to incorporate the teachings of Bertrand with the teachings of Shukla, as modified by Cheng and Java, for the purpose of enhancing matching-based action recognition utilizing a machine learning model trained on prior validated actions and datasets to increase accuracy of searching and retrieval of desired data. Regarding claim 10, Bertrand further discloses the retriever is trained based on cross-entropy corresponding to occurrence probabilities of suspected processes similar to the query vector (Optimization may be performed with cross-entropy loss for classification. The backbone and classifier may be jointly trained by the training module 304 during this stage, with C equal to the number of all classes in the meta-train set, [0093]). Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Shukla in view of Cheng in view of Java and further in view of Marum et al. (Pub. No. US 2011/0161938, published on June 30, 2011; hereinafter Marum). Regarding claim 11, Shukla then discloses a data frame module configured to collect the suspected process, the suspected facility, and the suspected chamber corresponding to an unmasked similar case (Based at least in part on the device component defects predicted using image classification techniques and at least one object detection model, such an embodiment includes searching historical data and retrieving corresponding and/or relevant historical repair records and/or defect resolution steps, including both text data (e.g., text logs) and image data if available, [0046]).
Marum then discloses converting the collected result data [suspected process, facility, and chamber] into information in a standardized form (The defect content 136 can be searched (engine 122) and analyzed (engine 124) to produce quality reports (engine 126). Because the defect content 136 is directly stored with the source code 134, the defect content is automatically aggregated across multiple defect or change tracking systems. That is, it does not matter which change tracking system originally generated defect content 136 or what format the original content 136 was in, since it is recorded in the source code 134 in a standardized format usable by engines 122-126, [0021]). It would have been obvious to a person of ordinary skill in the art at the time of the effective filing date to incorporate the teachings of Marum with the teachings of Shukla, as modified by Cheng and Java, for the purpose of assuring quality of a product environment by storing defect content that can be searched, such that a quality report can be produced based on matching defect content and their respective resolutions. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Shukla in view of Cheng and further in view of Smith et al. (Pub. No. US 2003/0061212, published on March 27, 2003; hereinafter Smith). Regarding claim 14, Shukla then discloses the information related to the defect phenomenon comprises: at least one piece of text information of an inspection step related to the target process, and production information; and image information comprising a defect image corresponding to the defect phenomenon or a pattern of a defect map corresponding to the defect phenomenon (searching historical data and retrieving corresponding and/or relevant historical repair records and/or defect resolution steps, including both text data (e.g., text logs) and image data if available, [0046]. Such text data and image data related to defect resolution, [0047]).
Smith then discloses at least one piece of text information of LOT information related to the target process, wafer information, a defect-type code (FIG. 24 shows an example of a defect data file that is produced, for example, by a defect inspection tool or a defect review tool in a fab. In particular, such a file typically includes information relating to x and y coordinates, x and y die coordinates, size, defect type classification code, and image information of each defect on a wafer. Data Conversion module 3020 translates this defect data file into a matrix comprising sizing, classification (for example, defect type), and defect density on a die level, [0182]). It would have been obvious to a person of ordinary skill in the art at the time the invention was effectively filed to incorporate the teachings of Smith with the teachings of Shukla, as modified by Cheng, for the purpose of enhancing defect diagnosis based on extracted portions of data in response to a user-specified analysis input to enable accurate resolution of the defect. Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Shukla in view of Cheng in view of Bertrand. Regarding claim 19, Shukla further discloses the searching for the similar case comprises: matching the suspected facility and the suspected chamber, corresponding to the suspected process (Similar Logs (LS), might include the following: “Inspected and reseated connections first with no effect, then checked for overheating issues before reinstalling graphics drivers, which helped in eliminating the flickering problem.”, [0055]. Historical repair logs and/or resolution steps associated with a dead pixel and/or horizontal line issue similar to that depicted in FIG. 6 can include a historical resolution summarization as follows: “The description suggests that the resistance temperature detector (RTD) in the liquid crystal display (LCD) has dead pixels which may be caused by software issues or manufacturing defects.
To fix the issue, I ran an LCD built-in self-test (BIST) test and checked for any software updates or patches that may help resolve the problem, and it appears that a display drivers update fixed the issue.”, [0059]), with the query vector, based on production information (utilizing a pretrained model such as a pretrained convolutional neural network (e.g., VGG16) to transform at least one image of a defective device component (e.g., a defective display part) into at least one vector. This vector is then compared to historical defective device components, in vector format, using one or more similarity measures (e.g., cosine similarity), and images with a similarity measure above a predetermined threshold value are utilized, along with their corresponding repair and/or resolution log information and user interaction information, [0044]). Bertrand then discloses: calculating a similarity between the query vector and the similar case based on a scaled dot-product attention (In the scaled dot-product attention module, the input includes queries and keys of dimension dk, and values of dimension dv. The scaled dot-product attention module 704 computes dot products of the query with all keys, and applies a softmax function to obtain weights on the values, [0080]-[0081]); searching for the suspected process through non-parametric classification, which converts the similarity into a probability of suspected processes similar to the query vector (perform matching-based approaches that do not use a parametric classifier. Instead, these models perform pair-wise matching between the query and the support examples of each class to obtain class probabilities. Inference is performed in a k-nearest-neighbor classification manner where k is an integer greater than or equal to 1. These models may be referred to as non-parametric and as matching based, [0089]).
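The scaled dot-product attention and non-parametric, matching-based classification that Bertrand is cited for ([0080]-[0081], [0089]) can be sketched as follows. The process labels and support vectors are hypothetical, and pooling attention weights per class is a simple stand-in for the pair-wise matching described at [0089].

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(query, keys, dk):
    # Dot products of the query with all keys, scaled by sqrt(dk) and
    # softmaxed into weights, per Bertrand [0080]-[0081].
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(dk)
              for key in keys]
    return softmax(scores)

def class_probabilities(weights, labels):
    # Non-parametric matching ([0089]): pool the similarity weights of the
    # support examples of each class into class probabilities.
    probs = {}
    for w, label in zip(weights, labels):
        probs[label] = probs.get(label, 0.0) + w
    return probs

# Hypothetical support set: two "etch" examples and one "deposition" example.
weights = attention_weights([1.0, 0.0], [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]], dk=2)
probs = class_probabilities(weights, ["etch", "etch", "deposition"])
```

Because the softmax already normalizes the weights, the pooled class scores sum to one and can be read directly as probabilities of each suspected process.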
It would have been obvious to a person of ordinary skill in the art at the time of the effective filing date to incorporate the teachings of Bertrand with the teachings of Shukla, as modified by Cheng, for the purpose of enhancing matching-based action recognition utilizing a machine learning model trained on prior validated actions and datasets, to increase the accuracy of searching for and retrieving the desired data.

Relevant Prior Art

The following references are considered relevant to the claims:

Krishnan et al. (Pub. No. US 2024/0248901) teaches that a query representation model may include a text representation model and one or more visual representation models for different types of visual inputs (e.g., images, icons, illustrations, templates, etc.). To enable processing of multimodal input queries such as templates, the query representation model may also include a parsing unit for parsing such multimodal inputs into the different types of content that make up the multimodal document. In an example, each type of input query is converted to a multi-dimensional vector space. The query representation model encodes the search query in a similar manner as that of the asset representation models, such that the query representations correspond to the embedding representations of the asset representation library.

Nguyen et al. (Pub. No. US 2010/0131450) teaches automatically classifying defects based on the steps of (A) receiving information for a current defect, (B) extracting field values from the current defect, (C) counting a number of occurrences of one or more keywords in the current defect, (D) determining one or more new keywords occurring in the current defect and storing the one or more new keywords in a database, and (E) creating one or more linkages in the database between a first record corresponding to the current defect and one or more second records corresponding to previous defects based upon one or more similarities between the first and the second records.
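Nguyen's steps (B)-(E) above can be sketched roughly as follows. Everything concrete here is an assumption made for illustration: the seed keyword vocabulary, the record layout, the length threshold for treating a token as a "new keyword," and the in-memory list standing in for Nguyen's database.

```python
import re
from collections import Counter

# Illustrative seed vocabulary; Nguyen does not specify one.
KNOWN_KEYWORDS = {"flicker", "pixel", "overheat", "driver"}

def process_defect(text, database):
    """Sketch of Nguyen's steps (B)-(E): extract keyword counts from
    the defect text, record unseen keywords, and create linkages to
    prior defect records that share keywords."""
    tokens = re.findall(r"[a-z]+", text.lower())
    # (C) Count occurrences of known keywords in the current defect.
    counts = Counter(t for t in tokens if t in KNOWN_KEYWORDS)
    # (D) Flag longer unseen tokens as candidate new keywords
    # (the len > 6 heuristic is an assumption, not Nguyen's rule).
    new_keywords = {t for t in tokens if t not in KNOWN_KEYWORDS and len(t) > 6}
    record = {"keywords": counts, "new_keywords": new_keywords, "links": []}
    # (E) Link this record to prior records sharing at least one keyword.
    for idx, prior in enumerate(database):
        if set(counts) & set(prior["keywords"]):
            record["links"].append(idx)
    database.append(record)
    return record
```

Processing "Dead pixel and flicker on LCD" after an earlier "Display flicker after driver update" record would link the two through the shared keyword "flicker", mirroring Nguyen's similarity-based linkage between a current defect and previous defects.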
Contact Information

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to Son T. Hoang, whose telephone number is (571) 270-1752. The Examiner can normally be reached Monday through Friday, 7:00 AM to 4:00 PM. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner's supervisor, Sherief Badawi, can be reached at (571) 272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/SON T HOANG/
Primary Examiner, Art Unit 2169
December 13, 2025

Prosecution Timeline

Sep 06, 2024
Application Filed
Dec 13, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591561
ACCESSING A PRIMARY CLUSTERY KEY INDEX STRUCTURE DURING QUERY EXECUTION
2y 5m to grant Granted Mar 31, 2026
Patent 12566762
Space Efficient Technique For Estimating Cardinality Using Probabilistic Data Structure
2y 5m to grant Granted Mar 03, 2026
Patent 12561337
SYSTEM AND METHOD FOR PATENT AND PRIOR ART ANALYSIS
2y 5m to grant Granted Feb 24, 2026
Patent 12554720
PREDICATE TRANSFER PRE-FILTERING ON MULTI-JOIN QUERIES
2y 5m to grant Granted Feb 17, 2026
Patent 12554766
ACCESS POINTS FOR MAPS
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
99%
With Interview (+35.0%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 905 resolved cases by this examiner. Grant probability derived from career allow rate.
