Prosecution Insights
Last updated: April 19, 2026
Application No. 18/747,764

LARGE LANGUAGE MODEL ASSISTANCE FOR CHARGED-PARTICLE MICROSCOPE OPERATION

Status: Non-Final OA (§103, §DP)
Filed: Jun 19, 2024
Examiner: LEE, EUNICE SOMIN
Art Unit: 2656
Tech Center: 2600 (Communications)
Assignee: Fei Company
OA Round: 1 (Non-Final)

Grant probability: 89% (favorable)
Expected OA rounds: 1-2
Estimated time to grant: 2y 10m
Grant probability with interview: 99%

Examiner Intelligence

Career allow rate: 89% (24 granted / 27 resolved; +26.9% vs Tech Center average), above average
Interview lift: +27.3% among resolved cases with interview
Typical timeline: 2y 10m average prosecution; 20 applications currently pending
Career history: 47 total applications, across all art units
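The headline examiner metrics above are simple ratios of the career counts. As a sanity check, here is a minimal sketch (the helper name is hypothetical, not part of any dashboard API) reproducing the 89% allow rate from the reported 24 granted / 27 resolved, and the Tech Center baseline implied by the "+26.9% vs TC avg" delta:

```python
# Sketch: derive the dashboard's headline figures from the raw career counts.
# Counts (24 granted, 27 resolved) and the +26.9% delta come from the panel above.

def allow_rate_pct(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved applications."""
    return 100.0 * granted / resolved

career = allow_rate_pct(24, 27)   # 24 granted / 27 resolved
delta_vs_tc = 26.9                # reported "+26.9% vs TC avg"
implied_tc_avg = career - delta_vs_tc

print(f"career allow rate: {career:.0f}%")   # prints "career allow rate: 89%"
print(f"implied TC average: {implied_tc_avg:.1f}%")
```

Note that 24/27 is 88.9% exactly; the dashboard rounds to 89%, and the implied Tech Center average (~62%) is a derived figure, not one the report states directly.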

Statute-Specific Performance

§101: 18.7% (-21.3% vs TC avg)
§103: 53.0% (+13.0% vs TC avg)
§102: 7.3% (-32.7% vs TC avg)
§112: 2.7% (-37.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 27 resolved cases.

Office Action

Rejection bases: §103, §DP
DETAILED ACTION

This communication is in response to the Application filed on June 19, 2024. Claims 1 - 20 are pending and have been examined. Claims 1, 9 and 17 are independent.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on August 13, 2024 and February 5, 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Drawings

The drawings filed on June 19, 2024 have been accepted and considered by the Examiner.

35 U.S.C. 112(f) Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “state component” and “model component” in Claim 1, “presenter component” in Claim 2, “access component” in Claim 4, and “context component” in Claim 6. Note the varied definition of this phrase in the supporting Specification which indicates that the “component” was intended as a generic placeholder. Based on the Specification, this refers to a large number of options: “[0151] … Note that, in various instances, the access component 322, the state component 324, the context component 326, the model component 328, and the presenter component 330 can collectively be considered as being one or more software components 321 of the system 316. In various aspects, it should be appreciated that the one or more software components 321 are described primarily herein as comprising five components (e.g., the access component 322, the state component 324, the context component 326, the model component 328, and the presenter component 330) for ease of explanation and illustration. However, the one or more software components 321 are not limited to being implemented as exactly such five components in every embodiment. Indeed, in some embodiments, the functionalities described herein of such five components can be combined in any suitable fashions, so as to be implemented in or by fewer than five components (e.g., in some cases, a single component can perform all of the functionalities that are described herein with respect to the access component 322, the state component 324, the context component 326, the model component 328, and the presenter component 330). 
In other embodiments, the functionalities described herein of such five components can instead be distributed, separated, split, or fragmented in any suitable fashions, so as to be implemented in or by more than five components (e.g., two or more components can facilitate the functionalities that are performable by the access component 322; two or more components can facilitate the functionalities that are performable by the state component 324; two or more components can facilitate the functionalities that are performable by the context component 326; two or more components can facilitate the functionalities that are performable by the model component 328; two or more components can facilitate the functionalities that are performable by the presenter component 330).” These limitations are generic in the context of the art and don’t refer to any specific structure and only serve as placeholders for the structure that performs the associated function(s) without providing any information about what that structure is. MPEP 2181 I A says: For a term to be considered a substitute for "means," and lack sufficient structure for performing the function, it must serve as a generic placeholder and thus not limit the scope of the claim to any specific manner or structure for performing the claimed function. It is important to remember that there are no absolutes in the determination of terms used as a substitute for "means" that serve as generic placeholders. The examiner must carefully consider the term in light of the specification and the commonly accepted meaning in the technological art. Every application will turn on its own facts. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. 
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Nonstatutory Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir.
1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1 - 2, 6, 9 - 10, 15, 17 - 18, and 20 of the instant Application are provisionally rejected under the judicially created doctrine of obviousness-type double patenting as being unpatentable over claims 1 - 20 of copending application 18/662,561 (hereinafter ‘561).
Regarding independent claims 1, 9, and 17, the conflicting claims are not identical to corresponding claims 2, 10 and 17 of the copending application because the claims of copending application ‘561 require additional limitations not required by claims 1, 9 and 17 of the instant Application. However, the conflicting claims are not patentably distinct from each other because: (1) claims 1, 9 and 17 of the instant Application and claims 2, 10 and 17 of the copending application recite common subject matter, and (2) the elements of claims 1, 9 and 17 of the instant Application are fully anticipated by claims 2, 10 and 17 of the copending application, and anticipation is “the ultimate or epitome of obviousness” (In re Kalm, 154 USPQ 10 (CCPA 1967); see also In re Daily, 178 USPQ 293 (CCPA 1973) and In re Pearson, 181 USPQ 641 (CCPA 1974)).

Dependent claims 2, 6, 10, 15, 18 and 20 are also similarly analyzed and rejected over claims 1 - 20 of the copending application ‘561. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Claims 1 - 3, 9 - 11, and 17 of the instant Application are provisionally rejected under the judicially created doctrine of obviousness-type double patenting as being unpatentable over claims 1 - 20 of copending application 18/747,753 (hereinafter ’753).

Regarding independent claims 1, 9, and 17, the conflicting claims are not identical to corresponding claims 3, 11 and 17 of the copending application because the claims of copending application ‘753 require additional limitations not required by claims 1, 9 and 17 of the instant Application.
However, the conflicting claims are not patentably distinct from each other because: (1) claims 1, 9 and 17 of the instant Application and claims 3, 11 and 17 of the copending application recite common subject matter, and (2) the elements of claims 1, 9 and 17 of the instant Application are fully anticipated by claims 3, 11 and 17 of the copending application, and anticipation is “the ultimate or epitome of obviousness” (In re Kalm, 154 USPQ 10 (CCPA 1967); see also In re Daily, 178 USPQ 293 (CCPA 1973) and In re Pearson, 181 USPQ 641 (CCPA 1974)).

Dependent claims 2 - 3 and 10 - 11 are also similarly analyzed and rejected over claims 1 - 20 of the copending application ‘753. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Claim Rejections - 35 USC § 103

The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

Claims 1 - 2, 9 - 10, 17 are rejected under 35 U.S.C. 103(a) as being unpatentable over Ishikawa et al., (WO2025027758A1), hereinafter referred to as Ishikawa, in view of Amthor et al., (U.S. Patent Application Publication 2025/0102788), hereinafter referred to as Amthor.

Regarding Claims 1 and 9, Ishikawa teaches: 1. A system, comprising, and 9.
A computer-implemented method, comprising:

a processor that executes computer-executable components stored in a non-transitory computer-readable memory, wherein the computer-executable components comprise: [Ishikawa, “The computer system 101 includes one or more processors 106 (including at least one of a CPU and a GPU), a memory 107, and a user interface 108.” Par. 0011; “In addition to the above, memory 107 (e.g., a non-transitory computer-readable storage medium) may store, for example: (a) an operating program executed by one or more processors included in semiconductor evaluation tool 102;” Par. 0019]

a state component that causes a charged-particle microscope to capture, according to a default microscopy protocol, an image or an energy spectrum of a specimen that is currently loaded on a stage of the charged-particle microscope; and [Ishikawa, “The semiconductor evaluation tool 102 is, for example, a CD-SEM (Critical Dimension-Scanning Electron Microscope) (i.e., the claimed “charged-particle microscope”), which is a device that generates an image and a luminance signal waveform (i.e., the claimed “energy spectrum”) based on the detection of secondary electrons and backscattered electrons emitted from a sample (i.e., the claimed “specimen”) when the sample (i.e., the claimed “specimen”) is irradiated with an electron beam.” Par. 0012; “This allows the semiconductor LLM 232 to output a response that reflects the domain knowledge when an input statement relating to an evaluation apparatus or semiconductor device is given.” Par. 0043; “an observed image of a semiconductor device acquired by the evaluation device is abnormal, and measures to remedy the abnormality.” Par. 0050; “The purpose of this disclosure is to input a desired operation (i.e., the claimed “capture, according to a default microscopy protocol, an image or an energy spectrum of a specimen that is currently loaded on a stage of the charged-particle microscope”) of a semiconductor device evaluation device (semiconductor evaluation tool 102 in FIG. 1) to the LLM, and to obtain an operation specification (evaluation recipe) (i.e., the claimed “default microscopy protocol”) that can realize the request as a response from the LLM.” Par. 0030]

a model component that executes a large language model on the image or energy spectrum of the specimen, [Ishikawa, “The purpose of this disclosure is to input a desired operation of a semiconductor device evaluation device (semiconductor evaluation tool 102 in FIG. 1) to the LLM (i.e., the claimed “large language model”), and to obtain an operation specification (evaluation recipe) that can realize the request as a response from the LLM.” Par. 0030; “The semiconductor evaluation tool 102 is, for example, a CD-SEM (Critical Dimension-Scanning Electron Microscope) (i.e., the claimed “charged-particle microscope”), which is a device that generates an image and a luminance signal waveform (i.e., the claimed “energy spectrum”) based on the detection of secondary electrons and backscattered electrons emitted from a sample (i.e., the claimed “specimen”) when the sample (i.e., the claimed “specimen”) is irradiated with an electron beam.” Par. 0012; “an observed image of a semiconductor device acquired by the evaluation device is abnormal, and measures to remedy the abnormality.” Par. 0050]

thereby yielding synthesized code that defines a graphical user-interface for the charged-particle microscope that is tailored to the specimen.
[Ishikawa, “The semiconductor evaluation tool 102 is, for example, a CD-SEM (Critical Dimension-Scanning Electron Microscope) (i.e., the claimed “charged-particle microscope”), which is a device that generates an image and a luminance signal waveform (i.e., the claimed “energy spectrum”) based on the detection of secondary electrons and backscattered electrons emitted from a sample (i.e., the claimed “tailored to the specimen”) when the sample (i.e., the claimed “tailored to the specimen”) is irradiated with an electron beam.” Par. 0012; “Alternatively, if one has learned a specific programming language, they can describe a desired function in natural language and input it into the general-purpose LLM 210, which can output source code (i.e., the claimed “yield synthesized code”) that implements that function.” Par. 0029; “The user interface 108 (i.e., the claimed “graphical user interface”) includes a display (not shown) and one or more input devices. The display can display image data output from the semiconductor evaluation tool 102, information output from the processor 106 (i.e., the claimed “yielding synthesized code that defines a graphical user-interface”), and the like.” Par. 0011]

Amthor also teaches: a model component that executes a large language model on the image or energy spectrum of the specimen, thereby yielding synthesized code that defines a graphical user-interface for the charged-particle microscope that is tailored to the specimen. [Amthor, “Holburn, D., et al., “Voice Control of the Scanning Electron Microscope Using a Low-Cost Virtual Assistant”, Microsc. Microanal. 27 (Suppl 1), 2021, doi: 10.1017/S1431927621009685. A user can give voice commands here such as “Autofocus”, “Capture image”, “Move x-axis by 100 steps”, whereupon the microscope implements these commands accordingly.” Par. 0004; “In principle, a structure or object depicted in microscope images and overview images can be any structure or object (i.e., the claimed “specimen”).
Besides the sample itself—e.g., biological structures, electronic elements or rock fragments—it is also possible for a sample vessel, a sample carrier, a microscope component such as a sample stage (i.e., the claimed “specimen that is currently loaded on a stage”) or areas of the same to be depicted.” Par. 0140; “The large language model is a deep artificial neural network which receives (among other things) a text from a user as input and generates an output that specifies parameters for a subsequent image generation (i.e., the claimed “yielding a natural language response that indicates how implementing the natural language instruction would affect the specimen”).” Par. 0074; “For example, a user can tell the large language model whether a single cell of a particular type or a cell cluster of the sample should be imaged. The large language model uses this information to identify the appropriate magnification for capturing either a single cell or a cell cluster, while the overview image is used to navigate to an appropriate location where the desired cell(s) is (are) present. Imaging parameters such as illumination intensity or fluorescence settings can be ascertained by the large language model as a function of the textual input (i.e., the claimed “yielding synthesized code that defines a graphical user-interface that is tailored to the specimen”) and the overview image (i.e., the claimed “model component that executes a large language model on the image or energy spectrum of the specimen, thereby yielding synthesized code”) without the user having to specify the illumination intensity or fluorescence excitation or detection channels. This enables a high-quality imaging without requiring significant expertise of the user or a laborious performance of manual settings.” Par. 
0032; “It is also possible for an entire experiment process with a sequence of different settings and imaging events to be defined (i.e., the claimed “synthesized code”) by the microscope settings ascertained by the large language model.” Par. 0053; “In cases where the microscope image properties of the microscope image comply with the microscope image properties derived from the textual input (i.e., by the claimed “large language model”), the microscope image (the claimed “tailored to the specimen”) is used further, e.g., displayed to a user, saved and/or used in a provided workflow (i.e. ,the claimed “synthesized code”).” Par. 0113] Ishikawa and Amthor pertain to artificial intelligence microscope systems and are analogous to the instant application. Accordingly, it would have been obvious to one of ordinary skill in the artificial intelligence microscope systems art to modify Ishikawa’s teachings of “LLM 232 to output a response (i.e., the claimed “response to receipt of the natural language instruction”)” of a “SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope)), which is a device that generates an image and a luminance signal waveform based on the detection of secondary electrons and backscattered electrons emitted from a sample when the sample is irradiated with an electron beam” (Ishikawa, Par. 0012, Par. 0043) with the explicit teachings of “microscope image properties of the microscope image comply with the microscope image properties derived from the textual input (i.e., by the claimed “large language model”), the microscope image is used further, e.g., displayed to a user, saved and/or used in a provided workflow)” (Amthor, Par. 0113) and “entire experiment process with a sequence of different settings and imaging events to be defined (i.e., the claimed “synthesized code”) by the microscope settings ascertained by the large language model.” (Amthor, Par. 
0053) taught by Amthor in order to “enable a high-quality imaging without requiring significant expertise of the user or a laborious performance of manual settings.” (Amthor, Par. 0032)

Regarding Claims 2 and 10, Ishikawa in view of Amthor has been discussed above. The combination further teaches: wherein the computer-executable components further comprise: [Ishikawa, see mapping applied to claim 1] a presenter component that runs the synthesized code, [Ishikawa, “The user interface 108 (i.e., the claimed “presenter component”) includes a display (not shown) and one or more input devices. The display can display image data output from the semiconductor evaluation tool 102, information output from the processor 106 (i.e., the claimed “runs the synthesized code”), and the like.” Par. 0011; “Alternatively, if one has learned a specific programming language, they can describe a desired function in natural language and input it into the general-purpose LLM 210, which can output source code (i.e., the claimed “yield synthesized code”) that implements that function.” Par. 0029; “The user interface 108 (i.e., the claimed “graphical user interface”) includes a display (not shown) and one or more input devices. The display can display image data output from the semiconductor evaluation tool 102, information output from the processor 106 (i.e., the claimed “yielding synthesized code that defines a graphical user-interface”), and the like.” Par. 0011] thereby rendering or activating the graphical user-interface. [Ishikawa, “The user interface 108 (i.e., the claimed “presenter component”) includes a display (not shown) and one or more input devices. The display can display image data output (i.e., the claimed “rendering the graphical user-interface”) from the semiconductor evaluation tool 102, information output from the processor 106 (i.e., the claimed “visibly renders the natural language response”), and the like.” Par.
0011]

Regarding Claim 17, Ishikawa in view of Amthor has been discussed above. The combination further teaches: 17. A computer program product for facilitating large language model assistance for charged-particle microscope operation, the computer program product comprising a non-transitory computer-readable memory having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to: [Ishikawa, see mapping applied to claim 1] cause a scanning electron microscope to capture, according to a default microscopy protocol, an image or an energy spectrum of a specimen that is currently loaded on a stage of the scanning electron microscope; [Ishikawa, “The semiconductor evaluation tool 102 is, for example, a CD-SEM (Critical Dimension-Scanning Electron Microscope) (i.e., the claimed “charged-particle microscope”), which is a device that generates an image and a luminance signal waveform (i.e., the claimed “energy spectrum”) based on the detection of secondary electrons and backscattered electrons emitted from a sample (i.e., the claimed “specimen”) when the sample (i.e., the claimed “specimen”) is irradiated with an electron beam.” Par. 0012; Ishikawa, see mapping applied to claim 1] execute a large language model on the image or energy spectrum of the specimen, [Ishikawa, see mapping applied to claim 1] thereby yielding synthesized code that defines a graphical user-interface for the charged-particle microscope that is tailored to the specimen; and [Ishikawa, see mapping applied to claim 1] run the synthesized code, thereby rendering or activating the graphical user-interface. [Ishikawa, see mapping applied to claim 1]

Claims 3, 11, 18 are rejected under 35 U.S.C. 103(a) as being unpatentable over Ishikawa in view of Amthor and Saur et al., (U.S. Patent Application Publication 2015/0241685), hereinafter referred to as Saur.

Regarding Claims 3 and 11, Ishikawa in view of Amthor has been discussed above.
The combination further teaches: wherein the charged-particle microscope corresponds to a plurality of configurable operating settings, and [Ishikawa, see mapping applied to claims 1 - 2; Amthor, see mapping applied to claims 1 - 2; Amthor, “In cases where the microscope image properties of the microscope image comply with the microscope image properties derived from the textual input, the microscope image is used further, e.g., displayed to a user, saved and/or used in a provided workflow.” Par. 0113; Ishikawa, “The purpose of this disclosure is to input a desired operation of a semiconductor device evaluation device (semiconductor evaluation tool 102 in FIG. 1) to the LLM, and to obtain an operation specification (evaluation recipe) (i.e., the claimed “configurable operating settings”) that can realize the request as a response from the LLM.” Par. 0030; Amthor, “A response A from the user U to the follow-up query Q is processed by the large language model LLM in order to either use the adjusted microscope settings 40B to capture the new microscope image 50B, or to modify the adjusted microscope settings 40B again based on the response A from the user U. For example, if the user responds that a photodamaging of the sample 10 is unacceptable (i.e., the claimed “destructive”), the large language model LLM (i.e., the claimed “large language model”) can (potentially after a further follow-up query Q and associated response A (i.e., the claimed “large language model infers are destructive to the specimen”)) increase the illumination duration and measurement duration (i.e., the claimed “first subset of the plurality of configurable operating settings”) in order to thereby achieve a better visibility of the particular cell type without increasing the illumination intensity. 
Alternatively, the large language model LLM can switch to an objective with a higher magnification (i.e., different subset of the plurality of configurable settings) and capture a plurality of laterally offset microscope images that are stitched together to form one image (image stitching), which can also achieve a better visibility of the particular cell type without increasing the illumination intensity (i.e., the claimed “hide a first subset of the plurality of configurable operating settings” to perform a different subset of the plurality of configurable operating settings).” Par. 0170; “large language model LLM can infer,” Par. 0213] wherein the synthesized code defines the graphical user-interface to hide a first subset of the plurality of configurable operating settings that the large language model infers are inapplicable or destructive to the specimen. [Ishikawa, see mapping applied to claims 1 - 2; Ishikawa, “The recipe generation module is configured to generate a recipe based on, for example, design data, simulated images generated by simulator 104 based on input of the design data, parameters input through user interface (i.e., the claimed “graphical user interface”) 108, and the like.” Par. 0020; Amthor, see mapping applied to claims 1 - 2; Amthor, “A response A from the user U to the follow-up query Q is processed by the large language model LLM in order to either use the adjusted microscope settings 40B to capture the new microscope image 50B, or to modify the adjusted microscope settings 40B again based on the response A from the user U. 
For example, if the user responds that a photodamaging of the sample 10 is unacceptable (i.e., the claimed “destructive”), the large language model LLM (i.e., the claimed “large language model”) can (potentially after a further follow-up query Q and associated response A (i.e., the claimed “large language model infers are destructive to the specimen”)) increase the illumination duration and measurement duration (i.e., the claimed “first subset of the plurality of configurable operating settings”) in order to thereby achieve a better visibility of the particular cell type without increasing the illumination intensity. Alternatively, the large language model LLM can switch to an objective with a higher magnification (i.e., different subset of the plurality of configurable settings) and capture a plurality of laterally offset microscope images that are stitched together to form one image (image stitching), which can also achieve a better visibility of the particular cell type without increasing the illumination intensity (i.e., the claimed “hide a first subset of the plurality of configurable operating settings” to perform a different subset of the plurality of configurable operating settings).” Par. 0170; “large language model LLM can infer,” Par. 0213] The combination fails to explicitly teach hide settings. However, Saur explicitly teaches: wherein the synthesized code defines the graphical user-interface to hide a first subset of the plurality of configurable operating settings that the large language model infers are inapplicable or destructive to the specimen. [Saur, “the control unit (6) for setting and displaying the digital markers is designed to show and hide the digital markers in the display (5).” Par. 0082] Ishikawa, Amthor and Saur pertain to microscope systems and are analogous to the instant application. 
Accordingly, it would have been obvious to one of ordinary skill in the microscope systems art to modify Ishikawa’s teachings of “LLM 232 to output a response (i.e., the claimed “response to receipt of the natural language instruction”)” of a “SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope)), which is a device that generates an image and a luminance signal waveform based on the detection of secondary electrons and backscattered electrons emitted from a sample when the sample is irradiated with an electron beam” (Ishikawa, Par. 0012, Par. 0043) with the explicit teachings of “microscope image properties of the microscope image comply with the microscope image properties derived from the textual input (i.e., by the claimed “large language model”), the microscope image is used further, e.g., displayed to a user, saved and/or used in a provided workflow)” (Amthor, Par. 0113) and “entire experiment process with a sequence of different settings and imaging events to be defined (i.e., the claimed “synthesized code”) by the microscope settings ascertained by the large language model.” (Amthor, Par. 0053) taught by Amthor and the explicit teachings of “setting and displaying” to “show or hide” (Saur, Par. 0082) taught by Saur in order to “enable a high-quality imaging without requiring significant expertise of the user or a laborious performance of manual settings” (Amthor, Par. 0032) and enable “visible as desired” (Saur, Par. 0020). Regarding Claim 18, Ishikawa in view of Amthor and Saur has been discussed above. 
The combination further teaches: wherein the scanning electron microscope corresponds to a plurality of configurable operating settings, and [Ishikawa, “The semiconductor evaluation tool 102 is, for example, a CD-SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope)), which is a device that generates an image and a luminance signal waveform (i.e., the claimed “energy spectrum”) based on the detection of secondary electrons and backscattered electrons emitted from a sample (i.e., the claimed “specimen”) when the sample (i.e., the claimed “specimen”) is irradiated with an electron beam.” Par. 0012; Ishikawa, see mapping applied to claim 2; Amthor, see mapping applied to claim 2; Saur, see mapping applied to claim 2] wherein the synthesized code defines the graphical user-interface to hide a first subset of the plurality of configurable operating settings that the large language model infers are inapplicable or destructive to the specimen. [Ishikawa, see mapping applied to claim 2; Amthor, see mapping applied to claim 2; Saur, see mapping applied to claim 2] Claims 4 - 5, 12 - 13, 19 are rejected under 35 U.S.C. 103(a) as being unpatentable over Ishikawa in view of Amthor, Saur, and Sinha et al., (U.S. Patent 12,254,005), hereinafter referred to as Sinha. Regarding Claims 4 and 12, Ishikawa in view of Amthor and Saur has been discussed above. 
The combination further teaches: wherein the computer-executable components further comprise: [Ishikawa, see mapping applied to claim 1] an access component that accesses a plurality of past natural language microscopy queries provided by a user of the charged-particle microscope, [Ishikawa, “The semiconductor evaluation tool 102 is, for example, a CD-SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope)), which is a device that generates an image and a luminance signal waveform based on the detection of secondary electrons and backscattered electrons emitted from a sample when the sample is irradiated with an electron beam.” Par. 0012; “The purpose of this disclosure is to input a desired operation (i.e., the claimed “natural language microscopy queries provided by a user”) of a semiconductor device evaluation device (semiconductor evaluation tool (i.e., the claimed “access component that accesses a natural language queries”) 102 in FIG. 1) to the LLM, and to obtain an operation specification (evaluation recipe) that can realize the request as a response from the LLM (i.e., the claimed “nature language workflow query”). In other words, the purpose is to operate the semiconductor device evaluation device using natural language (i.e., the claimed “natural language queries provided by a user”), etc.” Par. 0030; “The recipe generation module is configured to generate a recipe based on, for example, design data, simulated images generated by simulator 104 based on input of the design data, parameters input through user interface (i.e., the claimed “plurality of past natural language microscopy queries provided by a user”) 108, and the like.” Par. 
0020; “This allows the system to learn knowledge in various fields (sometimes called domain knowledge) in advance using character strings, and when a question related to that field is given, it can output an answer based on knowledge it has learned in advance (i.e., the claimed “plurality of past natural language microscopy queries provided by a user”).” Par. 0004; “The LLM performs learning and re-learning in advance (i.e., the claimed “past natural language microscopy queries provided by a user”) so as to output the above specification data.” Par. 0050] wherein the large language model receives as input the plurality of past natural language microscopy queries in addition to the image or energy spectrum of the specimen, [Ishikawa, “The semiconductor evaluation tool 102 is, for example, a CD-SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope)), which is a device that generates an image and a luminance signal waveform based on the detection of secondary electrons and backscattered electrons emitted from a sample when the sample is irradiated with an electron beam.” Par. 0012; “The purpose of this disclosure is to input a desired operation (i.e., the claimed “natural language microscopy queries provided by a user”) of a semiconductor device evaluation device (semiconductor evaluation tool (i.e., the claimed “access component that accesses a natural language queries”) 102 in FIG. 1) to the LLM, and to obtain an operation specification (evaluation recipe) that can realize the request as a response from the LLM (i.e., the claimed “nature language workflow query”). In other words, the purpose is to operate the semiconductor device evaluation device using natural language (i.e., the claimed “natural language queries provided by a user”), etc.” Par. 
0030; Amthor, “The large language model uses this information to identify the appropriate magnification for capturing either a single cell or a cell cluster, while the overview image is used to navigate to an appropriate location where the desired cell(s) is (are) present. Imaging parameters such as illumination intensity or fluorescence settings can be ascertained by the large language model as a function of the textual input (i.e., the claimed “natural language query”) and the overview image (i.e., the claimed “natural language microscopy queries in addition to the image of the specimen”) without the user having to specify the illumination intensity or fluorescence excitation or detection channels. This enables a high-quality imaging without requiring significant expertise of the user or a laborious performance of manual settings.” Par. 0032] such that the graphical user-interface is tailored to an inferred microscopy skill level of the user in addition to being tailored to the specimen. [Ishikawa, “Depending on the user of the evaluation device, there may be parameters that are prohibited from being used in the recipe or that impose restrictions in the recipe (i.e., the claimed “tailored to an inferred microscopy skill level of the user”). Such user-specific constraints (i.e., the claimed “tailored to an inferred microscopy skill level of the user”) (parameter items that impose constraints and the content of the constraints) are called local rules. Since the local rules are specific to the user, it is more appropriate to learn them as domain knowledge specific to the semiconductor device that the user is evaluating,” Par. 0033; “The recipe generation module is configured to generate a recipe based on, for example, design data, simulated images generated by simulator 104 based on input of the design data, parameters input through user interface (i.e., the claimed “plurality of past natural language microscopy queries provided by a user”) 108, and the like.” Par. 
0020] The combination fails to explicitly teach past natural language queries. However, Sinha teaches: an access component that accesses a plurality of past natural language microscopy queries provided by a user of the charged-particle microscope, [Sinha, For example, and without limitation, user profile 176 may include previously received natural language queries and/or any user inputs 124 received by system. In one or more embodiments, previous user inputs 124 may be used to train LLM 132 to output data that is personalized to user.” Col. 25:42-45; “processor 108 and/or LLM 132 may be configured to classify previous user inputs 124, natural language queries and the like (i.e., the claimed “access component that accesses a plurality of past natural language microscopy queries provided by a user”) as described in this disclosure to one or more language groupings.” Col. 27:17-19] Ishikawa, Amthor, Saur and Sinha pertain to imaging/image processing systems and are analogous to the instant application. Accordingly, it would have been obvious to one of ordinary skill in the imaging/image processing systems art to modify Ishikawa’s teachings of “LLM 232 to output a response (i.e., the claimed “response to receipt of the natural language instruction”)” of a “SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope)), which is a device that generates an image and a luminance signal waveform based on the detection of secondary electrons and backscattered electrons emitted from a sample when the sample is irradiated with an electron beam” (Ishikawa, Par. 0012, Par. 0043) with the explicit teachings of “microscope image properties of the microscope image comply with the microscope image properties derived from the textual input (i.e., by the claimed “large language model”), the microscope image is used further, e.g., displayed to a user, saved and/or used in a provided workflow)” (Amthor, Par. 
0113) and “entire experiment process with a sequence of different settings and imaging events to be defined (i.e., the claimed “synthesized code”) by the microscope settings ascertained by the large language model.” (Amthor, Par. 0053) taught by Amthor, the explicit teachings of “setting and displaying” to “show or hide” (Saur, Par. 0082), and the explicit teachings of “previously received natural language queries” (Sinha, Col. 25:42-45) taught by Sinha in order to “enable a high-quality imaging without requiring significant expertise of the user or a laborious performance of manual settings” (Amthor, Par. 0032), enable settings to be “visible as desired” (Saur, Par. 0020), and enable “personalization to user.” (Sinha, Col. 25:42-45). Regarding Claims 5 and 13, Ishikawa in view of Amthor, Saur and Sinha has been discussed above. The combination further teaches: wherein the synthesized code defines the graphical user-interface to hide a second subset of the plurality of configurable operating settings that the large language model infers are excessively complicated for the user. [Ishikawa, see mapping applied to claim 3; Amthor, see mapping applied to claim 3; Saur, see mapping applied to claim 3; Ishikawa, “Depending on the user of the evaluation device, there may be parameters that are prohibited from being used (i.e., the claimed “hide”) in the recipe or that impose restrictions in the recipe (i.e., the claimed “infers are excessively complicated for the user”). Such user-specific constraints (i.e., the claimed “infers are excessively complicated for the user”) (parameter items that impose constraints and the content of the constraints) are called local rules. Since the local rules are specific to the user, it is more appropriate to learn them as domain knowledge specific to the semiconductor device that the user is evaluating,” Par. 
0033; “The recipe generation module is configured to generate a recipe based on, for example, design data, simulated images generated by simulator 104 based on input of the design data, parameters input through user interface (i.e., the claimed “plurality of past natural language microscopy queries provided by a user”) 108, and the like.” Par. 0020; Claims 5 and 13 are directed to repeating the subject matter for a second subset. However, second subset/repeating steps known from prior art is straightforward, amounts to the normal use of the teachings of Ishikawa in view of Amthor, Saur and Sinha and are rejected under similar rationale. Regarding Claim 19, Ishikawa in view of Amthor, Saur and Sinha has been discussed above. The combination further teaches: access a plurality of past natural language microscopy queries provided by a user of the scanning electron microscope, [Ishikawa, “The semiconductor evaluation tool 102 is, for example, a CD-SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope)), which is a device that generates an image and a luminance signal waveform (i.e., the claimed “energy spectrum”) based on the detection of secondary electrons and backscattered electrons emitted from a sample (i.e., the claimed “specimen”) when the sample (i.e., the claimed “specimen”) is irradiated with an electron beam.” Par. 
0012; Ishikawa, see mapping applied to claim 4; Amthor, see mapping applied to claim 4; Saur, see mapping applied to claim 4; Sinha, see mapping applied to claim 4] wherein the large language model receives as input the plurality of past natural language microscopy queries in addition to the image or energy spectrum of the specimen, [Ishikawa, see mapping applied to claim 4; Amthor, see mapping applied to claim 4; Saur, see mapping applied to claim 4; Sinha, see mapping applied to claim 4] such that the graphical user-interface hides a second subset of the plurality of configurable operating settings that the large language model infers are excessively complicated for the user. [Ishikawa, see mapping applied to claim 5; Amthor, see mapping applied to claim 5; Saur, see mapping applied to claim 5; Sinha, see mapping applied to claim 5] Claims 6 - 7, 14 - 15, 20 are rejected under 35 U.S.C. 103(a) as being unpatentable over Ishikawa in view of Amthor, Saur, Weber et al., ("Calibrating Coordinate System Alignment in a Scanning Transmission Electron Microscope using a Digital Twin," arXiv:2403.08538, 2024), hereinafter referred to as Weber, and Batlkhagva et al., ("Digital Twin: Virtual Hardware Simulator for a Transmission electron microscope," Eindhoven University of Technology, PDEng Report, 2020). Regarding Claims 6 and 14, Ishikawa in view of Amthor and Saur has been discussed above. The combination further teaches: wherein the computer-executable components further comprise: [Ishikawa, see mapping applied to claim 1] wherein the large language model receives as input the current health status in addition to the image or energy spectrum of the specimen, [Ishikawa, see mapping applied to claims 1 - 2; Amthor, see mapping applied to claims 1 - 2] such that the graphical user-interface is tailored to the current health status in addition to being tailored to the specimen. 
[Ishikawa, see mapping applied to claims 1 - 2; Amthor, see mapping applied to claims 1 - 2] The combination fails to teach a context component that accesses a current health status of the charged-particle microscope as indicated by a digital twin of the charged-particle microscope. However, Weber teaches: a context component that accesses a current health status of the charged-particle microscope as indicated by a digital twin of the charged-particle microscope, [Weber, “4D scanning transmission electron microscopy (STEM) (i.e., the claimed “charged-particle microscope)” Pg. 1; “a digital twin is used to match a set of models and their parameters with the action of a real-world instrument.” Pg. 1; “It uses automated data processing and a digital twin of the microscope (i.e., the claimed “charged-particle microscope”),” Pg. 2; “It uses automated data processing (i.e., the claimed “context component”) and a digital twin of the microscope,” Pg. 2] Ishikawa in view of Amthor, Saur and Weber fails to teach current health status. However, Batlkhagva teaches: a context component that accesses a current health status of the charged-particle microscope as indicated by a digital twin of the charged-particle microscope, [Batlkhagva, “Also, the software execution on the hardware can be monitored in real time. Moreover, the simulator (i.e., the claimed “context component”) can draw sensor data from a real microscope to visualize the hardware behavior in a 3D environment.” Pg. viii, “The second conclusion is that the concept of the digital twin is beneficial for product maintenance (i.e., the claimed “current health status”) by visualizing the sensor data of the real system. From the visualization, engineers are able to inspect the real-time motion behavior of physical hardware (i.e., the claimed “current health status”) to diagnose problems.” Pg. 
viii] wherein the large language model receives as input the current health status in addition to the image or energy spectrum of the specimen, [Batlkhagva, “Also, the software execution on the hardware can be monitored in real time. Moreover, the simulator (i.e., the claimed “context component”) can draw sensor data from a real microscope to visualize the hardware behavior in a 3D environment.” Pg. viii, “The second conclusion is that the concept of the digital twin is beneficial for product maintenance (i.e., the claimed “current health status”) by visualizing the sensor data of the real system. From the visualization, engineers are able to inspect the real-time motion behavior of physical hardware (i.e., the claimed “current health status”) to diagnose problems.” Pg. viii] such that the graphical user-interface is tailored to the current health status in addition to being tailored to the specimen. [Batlkhagva, “Also, the software execution on the hardware can be monitored in real time. Moreover, the simulator (i.e., the claimed “context component”) can draw sensor data from a real microscope to visualize the hardware behavior in a 3D environment.” Pg. viii, “The second conclusion is that the concept of the digital twin is beneficial for product maintenance (i.e., the claimed “current health status”) by visualizing the sensor data of the real system. From the visualization, engineers are able to inspect the real-time motion behavior of physical hardware (i.e., the claimed “current health status”) to diagnose problems.” Pg. viii] Ishikawa, Amthor, Saur, Weber and Batlkhagva and pertain to microscope systems and are analogous to the instant application. 
Accordingly, it would have been obvious to one of ordinary skill in the microscope systems art to modify Ishikawa’s teachings of “LLM 232 to output a response (i.e., the claimed “response to receipt of the natural language instruction”)” of a “SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope)), which is a device that generates an image and a luminance signal waveform based on the detection of secondary electrons and backscattered electrons emitted from a sample when the sample is irradiated with an electron beam” (Ishikawa, Par. 0012, Par. 0043) with the explicit teachings of “microscope image properties of the microscope image comply with the microscope image properties derived from the textual input (i.e., by the claimed “large language model”), the microscope image is used further, e.g., displayed to a user, saved and/or used in a provided workflow)” (Amthor, Par. 0113) and “entire experiment process with a sequence of different settings and imaging events to be defined (i.e., the claimed “synthesized code”) by the microscope settings ascertained by the large language model.” (Amthor, Par. 0053) taught by Amthor, the explicit teachings of “setting and displaying” to “show or hide” (Saur, Par. 0082) taught by Saur, the teachings of “digital twin is used to match a set of models and their parameters with the action of a real-world instrument” (Weber, Pg. 1) taught by Weber, and the teachings “product maintenance”/ “real-time motion behavior of physical hardware”/ current health status (Batlkhagva, Pg. viii) taught by Batlkhagva in order to “enable a high-quality imaging without requiring significant expertise of the user or a laborious performance of manual settings” (Amthor, Par. 0032), enable “visible as desired” (Saur, Par. 0020), “eliminate error sources” with a “digital twin that matches the actual transformation by the microscope” (Weber, Pg. 
2), and enable “service team to inspect problems when maintenance is required” and “as a result, problem diagnosis is faster when TEM misbehaves” (Batlkhagva, Pg. viii). Regarding Claims 7 and 15, Ishikawa in view of Amthor, Saur, Weber and Batlkhagva has been discussed above. The combination further teaches: wherein the synthesized code defines the graphical user-interface to hide a second subset of the plurality of configurable operating settings that the large language model infers are not currently safely invokable on the charged-particle microscope. [Ishikawa, see mapping applied to claim 3; Amthor, see mapping applied to claim 3; Saur, see mapping applied to claim 3; “For example, if the user responds that a photodamaging of the sample 10 is unacceptable (i.e., the claimed “currently safely invokable”), the large language model LLM (i.e., the claimed “large language model”) can (potentially after a further follow-up query Q and associated response A (i.e., the claimed “large language model infers are not currently safely invokable”)) increase the illumination duration and measurement duration (i.e., the claimed “first subset of the plurality of configurable operating settings”) in order to thereby achieve a better visibility of the particular cell type without increasing the illumination intensity. Alternatively, the large language model LLM can switch to an objective with a higher magnification (i.e., different subset of the plurality of configurable settings) and capture a plurality of laterally offset microscope images that are stitched together to form one image (image stitching), which can also achieve a better visibility of the particular cell type without increasing the illumination intensity (i.e., the claimed “hide a second subset of the plurality of configurable operating settings” to perform a different subset of the plurality of configurable operating settings).” Par. 0170; “large language model LLM can infer,” Par. 
0213; Claims 7 and 15 are directed to repeating the subject matter for a second subset. However, second subset/repeating steps known from prior art is straightforward, amounts to the normal use of the teachings of Ishikawa in view of Amthor, Saur, Weber and Batlkhagva and are rejected under similar rationale.] Regarding Claim 20, Ishikawa in view of Amthor, Saur, Weber and Batlkhagva has been discussed above. The combination further teaches: wherein the program instructions are further executable to cause the processor to: [Ishikawa, see mapping applied to claim 1] access a current health status of the scanning electron microscope as indicated by a digital twin of the scanning electron microscope, [Ishikawa, “The semiconductor evaluation tool 102 is, for example, a CD-SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope)), which is a device that generates an image and a luminance signal waveform (i.e., the claimed “energy spectrum”) based on the detection of secondary electrons and backscattered electrons emitted from a sample (i.e., the claimed “specimen”) when the sample (i.e., the claimed “specimen”) is irradiated with an electron beam.” Par. 0012; Ishikawa, see mapping applied to claim 6; Amthor, see mapping applied to claim 6; Saur, see mapping applied to claim 6; Weber, see mapping applied to claim 6, Batlkhagva, see mapping applied to claim 6] wherein the large language model receives as input the current health status in addition to the image or energy spectrum of the specimen, [Ishikawa, see mapping applied to claim 6; Amthor, see mapping applied to claim 6; Saur, see mapping applied to claim 6; Weber, see mapping applied to claim 6, Batlkhagva, see mapping applied to claim 6] such that the graphical user-interface hides a second subset of the plurality of configurable operating settings that the large language model infers are not currently safely invokable on the charged-particle microscope. 
[Ishikawa, see mapping applied to claim 7; Amthor, see mapping applied to claim 7; Saur, see mapping applied to claim 7; Weber, see mapping applied to claim 7, Batlkhagva, see mapping applied to claim 7] Claims 8 and 16 are rejected under 35 U.S.C. 103(a) as being unpatentable over Ishikawa in view of Amthor and Larson et al., (U.S. Patent Application Publication 2025/0328565), hereinafter referred to as Larson. Regarding Claims 8 and 16, Ishikawa in view of Amthor has been discussed above. The combination further teaches: wherein the large language model receives as input the image or energy spectrum of the specimen and a graphical user-interface prompt, [Ishikawa, see mapping applied to claim 1; Amthor, see mapping applied to claim 1] wherein the image or energy spectrum and the graphical user-interface prompt complete a forward pass through the large language model, and [Ishikawa, see mapping applied to claim 1; Amthor, see mapping applied to claim 1] wherein the large language model produces as output the synthesized code. [Ishikawa, see mapping applied to claim 1; Amthor, see mapping applied to claim 1] The combination fails to explicitly teach forward pass. However, Larson teaches: wherein the image or energy spectrum and the graphical user-interface prompt complete a forward pass through the large language model, and [Larson, “complete a forward pass through the LLM 306 (i.e., the claimed “large language model”),” Par. 0186] Ishikawa, Amthor and Larson pertain to artificial intelligence systems and are analogous to the instant application. 
Accordingly, it would have been obvious to one of ordinary skill in the artificial intelligence systems art to modify Ishikawa’s teachings of “LLM 232 to output a response (i.e., the claimed “response to receipt of the natural language instruction”)” of a “SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope)), which is a device that generates an image and a luminance signal waveform based on the detection of secondary electrons and backscattered electrons emitted from a sample when the sample is irradiated with an electron beam” (Ishikawa, Par. 0012, Par. 0043) with the explicit teachings of “large language model as a function of the textual input (i.e., the claimed “natural language instruction”) and the overview image (i.e., the claimed “model component that executes a large language model on both the natural language instruction and the image of the specimen”)” (Amthor, Par. 0032) taught by Amthor and the teachings of “completing a forward pass through the LLM” (Larson, Par. 0186) taught by Larson in order to “enable a high-quality imaging without requiring significant expertise of the user or a laborious performance of manual settings” (Amthor, Par. 0032) and automatically answer “inquiries regarding how the scientific instrument should be operated, maintained, serviced, or troubleshot.” (Larson, Par. 0027) Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Chandran et al., (U.S. Patent Application Publication 2025/0139342) teaches digital twins. Lu et al., (U.S. Patent Application Publication 2024/0272926) teaches digital twins. Robert Jose et al., (U.S. Patent 12,518,112) teaches digital plurality of previous natural language queries for a specific user. Any inquiry concerning this communication or earlier communications from the examiner should be directed to EUNICE LEE whose telephone number is 571-272-1886. 
The examiner can normally be reached M-F 8:00 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh Mehta can be reached on 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /EUNICE LEE/Examiner, Art Unit 2656 /BHAVESH M MEHTA/Supervisory Patent Examiner, Art Unit 2656
Read full office action

Prosecution Timeline

Jun 19, 2024
Application Filed
Jan 28, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603078
GENERATING SPEECH DATA USING ARTIFICIAL INTELLIGENCE TECHNIQUES
2y 5m to grant Granted Apr 14, 2026
Patent 12597365
AUTOMATIC TRANSLATION BETWEEN SIGN LANGUAGE AND SPOKEN LANGUAGE
2y 5m to grant Granted Apr 07, 2026
Patent 12585876
METHOD OF TRAINING POS TAGGING MODEL, COMPUTER-READABLE RECORDING MEDIUM AND POS TAGGING METHOD
2y 5m to grant Granted Mar 24, 2026
Patent 12579385
EMBEDDED TRANSLATE, SUMMARIZE, AND AUTO READ
2y 5m to grant Granted Mar 17, 2026
Patent 12566928
READABILITY BASED CONFIDENCE SCORE FOR LARGE LANGUAGE MODELS
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Powered by AI — typically takes 5-10 seconds

Prosecution Projections

1-2
Expected OA Rounds
89%
Grant Probability
99%
With Interview (+27.3%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 27 resolved cases by this examiner. Grant probability derived from career allow rate.

Sign in with your work email

Enter your email to receive a magic link. No password needed.

Personal email addresses (Gmail, Yahoo, etc.) are not accepted.

Free tier: 3 strategy analyses per month