DETAILED ACTION
This communication is in response to the Application filed on June 19, 2024.
Claims 1-20 are pending and have been examined.
Claims 1, 9, and 17 are independent.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on August 13, 2024 and February 5, 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Drawings
The drawings filed on June 19, 2024 have been accepted and considered by the Examiner.
Nonstatutory Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-2, 5-6, 9-11, 13-14, 16-18 and 20 of the instant Application are provisionally rejected under the judicially created doctrine of obviousness-type double patenting as being unpatentable over claims 1-20 of copending Application No. 18/747,764 (hereinafter ’764) in view of Amthor et al. (U.S. Patent Application Publication 2025/0102788).
Regarding independent claims 1, 9, and 17, while disclosing a state component that causes the charged-particle microscope to capture, according to a default microscopy protocol, an image or an energy spectrum of a specimen that is currently loaded on a stage of the charged-particle microscope, a model component that executes a large language model, and execution of the large language model on the image or energy spectrum of the specimen, the copending application does not explicitly teach a workflow.
Amthor (U.S. Patent Application Publication 2025/0102788) explicitly teaches a workflow (Amthor, Par. 0113). It would have been obvious to modify claims 1, 9, and 17 of the copending application with Amthor’s explicit teachings of a workflow (Amthor, Par. 0113) in order to “enable a high-quality imaging without requiring significant expertise of the user or a laborious performance of manual settings” (Amthor, Par. 0032).
Dependent claims 2, 5-6, 10-11, 13-14, 16, 18 and 20 are also similarly analyzed and rejected over claims 1-20 of the copending application ’764 in view of Amthor.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claims 1-2, 5-10, 13-16 and 18-20 of the instant Application are provisionally rejected under the judicially created doctrine of obviousness-type double patenting as being unpatentable over claims 1-20 of copending Application No. 18/662,561 (hereinafter ’561) in view of Amthor et al. (U.S. Patent Application Publication 2025/0102788).
Regarding independent claims 1, 9, and 17, while disclosing a state component that causes the charged-particle microscope to capture, according to a default microscopy protocol, an image or an energy spectrum of a specimen that is currently loaded on a stage of the charged-particle microscope, a model component that executes a large language model, and execution of the large language model on the image or energy spectrum of the specimen, the copending application does not explicitly teach a workflow.
Amthor (U.S. Patent Application Publication 2025/0102788) explicitly teaches a workflow (Amthor, Par. 0113). It would have been obvious to modify claims 1, 9, and 17 of the copending application with Amthor’s explicit teachings of a workflow (Amthor, Par. 0113) in order to “enable a high-quality imaging without requiring significant expertise of the user or a laborious performance of manual settings” (Amthor, Par. 0032).
Dependent claims 2, 5 - 8, 10, 13 - 16 and 18 - 20 are also similarly analyzed and rejected over claims 1 - 20 of the copending application ‘561 in view of Amthor.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-6, 9-11, 13-14, 17-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ishikawa et al. (WO2025027758A1), hereinafter referred to as Ishikawa, in view of Amthor et al. (U.S. Patent Application Publication 2025/0102788), hereinafter referred to as Amthor.
Regarding Claims 1 and 9, Ishikawa teaches:
1. A system, comprising, and 9. A computer-implemented method, comprising:
a processor that executes computer-executable components stored in a non-transitory computer-readable memory, wherein the computer-executable components comprise: [Ishikawa, “The computer system 101 includes one or more processors 106 (including at least one of a CPU and a GPU), a memory 107, and a user interface 108.” Par. 0011; “In addition to the above, memory 107 (e.g., a non-transitory computer-readable storage medium) may store, for example: (a) an operating program executed by one or more processors included in semiconductor evaluation tool 102;” Par. 0019]
an access component that accesses a natural language workflow query associated with a charged-particle microscope; [Ishikawa, “The semiconductor evaluation tool 102 is, for example, a CD-SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope”)), which is a device that generates an image and a luminance signal waveform based on the detection of secondary electrons and backscattered electrons emitted from a sample when the sample is irradiated with an electron beam.” Par. 0012; “The purpose of this disclosure is to input a desired operation (i.e., the claimed “natural language workflow query”) of a semiconductor device evaluation device (semiconductor evaluation tool (i.e., the claimed “access component that accesses a natural language query”) 102 in FIG. 1) to the LLM, and to obtain an operation specification (evaluation recipe) that can realize the request as a response from the LLM (i.e., the claimed “natural language workflow query”). In other words, the purpose is to operate the semiconductor device evaluation device using natural language (i.e., the claimed “natural language workflow query”), etc.” Par. 0030; Referring to the Specification of the instant Application, Par. 0037, a “workflow” comprises “steps or sub-steps, or other actions should be performed.”]
wherein the natural language workflow query requests or commands identification of how to perform a microscopy workflow on the charged-particle microscope; [Ishikawa, “In other words, the purpose is to operate the semiconductor device evaluation device (i.e., the claimed “charged-particle microscope”) using natural language, etc.” Par. 0030; “The semiconductor evaluation tool 102 is, for example, a CD-SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope”)), which is a device that generates an image and a luminance signal waveform based on the detection of secondary electrons and backscattered electrons emitted from a sample when the sample is irradiated with an electron beam.” Par. 0012; “The purpose of this disclosure is to input a desired operation (i.e., the claimed “natural language workflow query”) of a semiconductor device evaluation device (semiconductor evaluation tool (i.e., the claimed “access component that accesses a natural language query”) 102 in FIG. 1) to the LLM, and to obtain an operation specification (evaluation recipe) that can realize the request (i.e., the claimed “natural language workflow query requests identification of how to perform a microscopy workflow on the charged-particle microscope”) as a response from the LLM (i.e., the claimed “natural language workflow query”). In other words, the purpose is to operate the semiconductor device evaluation device (i.e., the claimed “charged-particle microscope”) using natural language (i.e., the claimed “natural language workflow query requests identification of how to perform a microscopy workflow on the charged-particle microscope”), etc.” Par. 0030; Referring to the Specification of the instant Application, Par. 0037, a “workflow” comprises “steps or sub-steps, or other actions should be performed.”]
a state component that causes, in response to receipt of the natural language workflow query, the charged-particle microscope to capture, according to a default microscopy protocol, an image or an energy spectrum of a specimen that is currently loaded on a stage of the charged-particle microscope; and [Ishikawa, “In other words, the purpose is to operate the semiconductor device evaluation device (i.e., the claimed “charged-particle microscope”) using natural language, etc.” Par. 0030; “The semiconductor evaluation tool 102 is, for example, a CD-SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope”)), which is a device that generates an image (i.e., the claimed “captures an image”) and a luminance signal waveform (i.e., the claimed “energy spectrum”) based on the detection of secondary electrons and backscattered electrons emitted from a sample (i.e., the claimed “specimen that is currently loaded on a stage”) when the sample (i.e., the claimed “specimen that is currently loaded on a stage”) is irradiated with an electron beam.” Par. 0012; “This allows the semiconductor LLM 232 to output a response (i.e., the claimed “response to receipt of the natural language workflow query”) that reflects the domain knowledge when an input statement relating to an evaluation apparatus or semiconductor device is given (i.e., the claimed “receipt of the natural language workflow query”).” Par. 0043; “an observed image of a semiconductor device acquired by the evaluation device is abnormal, and measures to remedy the abnormality.” Par. 0050; “The purpose of this disclosure is to input a desired operation (i.e., the claimed “natural language workflow query”) of a semiconductor device evaluation device (semiconductor evaluation tool (i.e., the claimed “access component that accesses a natural language query”) 102 in FIG. 1) to the LLM, and to obtain an operation specification (evaluation recipe) (i.e., the claimed “default microscopy protocol”) that can realize the request as a response from the LLM (i.e., the claimed “natural language workflow query”).” Par. 0030]
a model component that executes a large language model on both the natural language workflow query and the image or energy spectrum of the specimen, thereby yielding a specimen-tailored natural language response to the natural language workflow query. [Ishikawa, “In other words, the purpose is to operate the semiconductor device evaluation device (i.e., the claimed “charged-particle microscope”) using natural language, etc.” Par. 0030; “The semiconductor evaluation tool 102 is, for example, a CD-SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope”)), which is a device that generates an image and a luminance signal waveform (i.e., the claimed “energy spectrum”) based on the detection of secondary electrons and backscattered electrons emitted from a sample (i.e., the claimed “specimen that is currently loaded on a stage”) when the sample (i.e., the claimed “specimen that is currently loaded on a stage”) is irradiated with an electron beam.” Par. 0012; “This allows the semiconductor LLM 232 to output a response (i.e., the claimed “response to receipt of the natural language workflow query”) that reflects the domain knowledge when an input statement relating to an evaluation apparatus or semiconductor device is given (i.e., the claimed “receipt of the natural language workflow query”).” Par. 0043; “an observed image of a semiconductor device acquired by the evaluation device is abnormal, and measures to remedy the abnormality.” Par. 0050; “The sentence production module 234 (i.e., the claimed “model component”) uses the sentence production model 115 to output an output string (i.e., the claimed “specimen-tailored natural language response to the natural language workflow query”) as a response to an input string.” Par. 0026]
Ishikawa fails to explicitly teach executing the large language model on both the natural language workflow query and the image.
However, Amthor teaches:
a model component that executes a large language model on both the natural language workflow query and the image or energy spectrum of the specimen, thereby yielding a specimen-tailored natural language response to the natural language workflow query. [Amthor, “In cases where the microscope image properties of the microscope image comply with the microscope image properties derived from the textual input (i.e., the claimed “natural language query”), the microscope image is used further, e.g., displayed to a user, saved and/or used in a provided workflow (i.e., the claimed “natural language workflow”).” Par. 0113; “Holburn, D., et al., “Voice Control of the Scanning Electron Microscope Using a Low-Cost Virtual Assistant”, Microsc. Microanal. 27 (Suppl 1), 2021, doi: 10.1017/S1431927621009685. A user can give voice commands here such as “Autofocus”, “Capture image”, “Move x-axis by 100 steps”, whereupon the microscope implements these commands accordingly.” Par. 0004; “In principle, a structure or object depicted in microscope images and overview images can be any structure or object (i.e., the claimed “specimen”). Besides the sample itself—e.g., biological structures, electronic elements or rock fragments—it is also possible for a sample vessel, a sample carrier, a microscope component such as a sample stage (i.e., the claimed “specimen that is currently loaded on a stage”) or areas of the same to be depicted.” Par. 0140; “The large language model is a deep artificial neural network which receives (among other things) a text from a user as input (i.e., the claimed “natural language workflow query”) and generates an output that specifies parameters for a subsequent image generation (i.e., the claimed “yielding a specimen-tailored natural language response to the natural language workflow query”).” Par. 0074; “For example, a user can tell the large language model (i.e., the claimed “natural language workflow query”) whether a single cell of a particular type or a cell cluster of the sample should be imaged. The large language model uses this information to identify the appropriate magnification (i.e., the claimed “specimen-tailored natural language response to the natural language workflow query”) for capturing either a single cell or a cell cluster, while the overview image is used to navigate to an appropriate location where the desired cell(s) is (are) present. Imaging parameters (i.e., the claimed “thereby yielding a specimen-tailored natural language response to the natural language workflow query”) such as illumination intensity or fluorescence settings can be ascertained by the large language model as a function of the textual input (i.e., the claimed “natural language workflow query”) and the overview image (i.e., the claimed “model component that executes a large language model on both the natural language workflow query and the image of the specimen”) without the user having to specify the illumination intensity or fluorescence excitation or detection channels. This enables a high-quality imaging without requiring significant expertise of the user or a laborious performance of manual settings.” Par. 0032]
Ishikawa and Amthor pertain to artificial intelligence microscope systems and are analogous to the instant application. Accordingly, it would have been obvious to one of ordinary skill in the artificial intelligence microscope systems art to modify Ishikawa’s teachings of “LLM 232 to output a response (i.e., the claimed “response to receipt of the natural language workflow query”)” of a “SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope”)), which is a device that generates an image and a luminance signal waveform based on the detection of secondary electrons and backscattered electrons emitted from a sample when the sample is irradiated with an electron beam” (Ishikawa, Par. 0012, Par. 0043) with the explicit teachings of “large language model as a function of the textual input (i.e., the claimed “natural language workflow query”) and the overview image (i.e., the claimed “model component that executes a large language model on both the natural language workflow query and the image of the specimen”)” (Amthor, Par. 0032) taught by Amthor in order to “enable a high-quality imaging without requiring significant expertise of the user or a laborious performance of manual settings” (Amthor, Par. 0032).
Regarding Claims 2 and 10, Ishikawa in view of Amthor has been discussed above. The combination further teaches:
wherein the computer-executable components further comprise: [Ishikawa, see mapping applied to claim 1]
a presenter component that: [Ishikawa, “The user interface 108 (i.e., the claimed “presenter component”) includes a display (not shown) and one or more input devices. The display can display image data output from the semiconductor evaluation tool 102, information output from the processor 106, and the like.” Par. 0011]
visibly renders the specimen-tailored natural language response or a visual graphic associated with the specimen-tailored natural language response on an electronic display associated with the charged-particle microscope; [Ishikawa, see mapping applied to claim 1; Ishikawa, “The user interface 108 (i.e., the claimed “presenter component”) includes a display (not shown) and one or more input devices. The display can display image data output (i.e., the claimed “visibly renders the specimen-tailored natural language response or a visual graphic associated with the specimen-tailored natural language response on an electronic display associated with the charged-particle microscope”) from the semiconductor evaluation tool 102, information output from the processor 106 (i.e., the claimed “visibly renders the specimen-tailored natural language response”), and the like.” Par. 0011]
transmits the specimen-tailored natural language response to a computing device associated with the charged-particle microscope. [Ishikawa, see mapping applied to claim 1; Ishikawa, “The user interface 108 (i.e., the claimed “presenter component”) includes a display (not shown) and one or more input devices. The display can display image data output from the semiconductor evaluation tool 102, information output from the processor 106 (i.e., the claimed “transmits the specimen-tailored natural language response to a computing device associated with the charged-particle microscope”), and the like.” Par. 0011]
Regarding Claims 3 and 11, Ishikawa in view of Amthor has been discussed above. The combination further teaches:
wherein the specimen-tailored natural language response describes a tutorial for performing the microscopy workflow on the specimen, [Ishikawa, see mapping applied to claim 1; Amthor, see mapping applied to claim 1; Amthor, “In cases where the microscope image properties of the microscope image comply with the microscope image properties derived from the textual input, the microscope image is used further, e.g., displayed to a user, saved and/or used in a provided workflow (i.e., the claimed “natural language workflow”).” Par. 0113; Ishikawa, “The purpose of this disclosure is to input a desired operation (i.e., the claimed “natural language workflow query”) of a semiconductor device evaluation device (semiconductor evaluation tool 102 in FIG. 1) to the LLM, and to obtain an operation specification (evaluation recipe) (i.e., the claimed “tutorial for performing the microscopy workflow on the specimen”) that can realize the request as a response (i.e., the claimed “specimen-tailored natural language response”) from the LLM.” Par. 0030]
wherein the tutorial omits one or more steps that are associated with the microscopy workflow but that the large language model infers are inapplicable or destructive to the specimen. [Amthor, “A response A from the user U to the follow-up query Q is processed by the large language model LLM in order to either use the adjusted microscope settings 40B to capture the new microscope image 50B, or to modify the adjusted microscope settings 40B again based on the response A from the user U. For example, if the user responds that a photodamaging of the sample 10 is unacceptable (i.e., the claimed “destructive”), the large language model LLM (i.e., the claimed “large language model”) can (potentially after a further follow-up query Q and associated response A (i.e., the claimed “large language model infers are destructive to the specimen”)) increase the illumination duration and measurement duration in order to thereby achieve a better visibility of the particular cell type without increasing the illumination intensity. Alternatively, the large language model LLM can switch to an objective with a higher magnification and capture a plurality of laterally offset microscope images that are stitched together to form one image (image stitching), which can also achieve a better visibility of the particular cell type without increasing the illumination intensity (i.e., the claimed “omit one or more steps”).” Par. 0170; “large language model LLM can infer,” Par. 0213]
Regarding Claims 5 and 13, Ishikawa in view of Amthor has been discussed above. The combination further teaches:
wherein the specimen-tailored natural language response indicates that the microscopy workflow is inapplicable or destructive to the specimen. [Ishikawa, see mapping applied to claims 1, 3; Amthor, see mapping applied to claims 1, 3; Amthor, “A response A from the user U to the follow-up query Q is processed by the large language model LLM in order to either use the adjusted microscope settings 40B to capture the new microscope image 50B, or to modify the adjusted microscope settings 40B again based on the response A from the user U. For example, if the user responds that a photodamaging of the sample 10 is unacceptable (i.e., the claimed “destructive”), the large language model LLM can (potentially after a further follow-up query Q and associated response A (i.e., the claimed “specimen-tailored natural language response”)) increase the illumination duration and measurement duration in order to thereby achieve a better visibility of the particular cell type without increasing the illumination intensity. Alternatively, the large language model LLM can switch to an objective with a higher magnification and capture a plurality of laterally offset microscope images that are stitched together to form one image (image stitching), which can also achieve a better visibility of the particular cell type without increasing the illumination intensity.” Par. 0170]
Regarding Claims 6 and 14, Ishikawa in view of Amthor has been discussed above. The combination further teaches:
wherein the specimen-tailored natural language response describes a tutorial for performing on the specimen an alternative microscopy workflow that the large language model infers is applicable or non-destructive to the specimen. [Ishikawa, see mapping applied to claims 1, 3; Amthor, see mapping applied to claims 1, 3, 5; Amthor, “A response A from the user U to the follow-up query Q is processed by the large language model LLM in order to either use the adjusted microscope settings 40B to capture the new microscope image 50B, or to modify the adjusted microscope settings 40B again based on the response A from the user U. For example, if the user responds that a photodamaging of the sample 10 is unacceptable (i.e., the claimed “destructive”), the large language model LLM can (potentially after a further follow-up query Q and associated response A (i.e., the claimed “specimen-tailored natural language response”)) increase the illumination duration and measurement duration in order to thereby achieve a better visibility of the particular cell type without increasing the illumination intensity. Alternatively, the large language model LLM can switch to an objective with a higher magnification (i.e., the claimed “alternative microscopy workflow”) and capture a plurality of laterally offset microscope images that are stitched together to form one image (image stitching), which can also achieve a better visibility of the particular cell type without increasing the illumination intensity.” Par. 0170]
Regarding Claim 17, Ishikawa in view of Amthor has been discussed above. The combination further teaches:
17. A computer program product for facilitating large language model assistance for charged-particle microscope operation, the computer program product comprising a non-transitory computer-readable memory having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to: [Ishikawa, see mapping applied to claim 1; “In addition to the above, memory 107 (e.g., a non-transitory computer-readable storage medium) may store, for example: (a) an operating program (i.e., the claimed “computer program product”) executed by one or more processors included in semiconductor evaluation tool 102;” Par. 0019]
access a plain text workflow query provided by a user of a scanning electron microscope, [Ishikawa, see mapping applied to claim 1; Amthor, see mapping applied to claim 1; Ishikawa, “The user interface 108 includes a display (not shown) and one or more input devices (i.e., the claimed “plain text workflow query provided by a user”). The display can display image data output from the semiconductor evaluation tool 102, information output from the processor 106, and the like.” Par. 0011; “The present disclosure has been made in consideration of the above-described problems, and aims to provide a technology that enables instructions for an evaluation device that evaluates semiconductor devices to be obtained from a language model by inputting a string of characters (i.e., the claimed “plain text workflow query provided by a user”) into the language model.” Par. 0006; “This allows instructions to be given to the evaluation device by means of a character string via the LLM. In other words, the evaluation device can be operated by a string of characters.” Par. 0044]
wherein the plain text workflow query requests or commands identification of how to perform a microscopy workflow on the scanning electron microscope; [Ishikawa, see mapping applied to claim 1; Amthor, see mapping applied to claim 1; Amthor, “A response A from the user U to the follow-up query Q is processed by the large language model LLM in order to either use the adjusted microscope settings 40B to capture the new microscope image (i.e., the claimed “perform a microscopy workflow”) 50B, or to modify the adjusted microscope settings 40B again based on the response A from the user U.” Par. 0170; Ishikawa, “The present disclosure has been made in consideration of the above-described problems, and aims to provide a technology that enables instructions for an evaluation device that evaluates semiconductor devices to be obtained from a language model by inputting a string of characters (i.e., the claimed “plain text workflow query”) into the language model.” Par. 0006; “In other words, the purpose is to operate the semiconductor device evaluation device (i.e., the claimed “scanning electron microscope”) using natural language, etc.” Par. 0030; “The semiconductor evaluation tool 102 is, for example, a CD-SEM (Critical Dimension-Scanning Electron Microscope), which is a device that generates an image and a luminance signal waveform based on the detection of secondary electrons and backscattered electrons emitted from a sample when the sample is irradiated with an electron beam.” Par. 0012; “The purpose of this disclosure is to input a desired operation (i.e., the claimed “plain text workflow query”) of a semiconductor device evaluation device (semiconductor evaluation tool 102 in FIG. 1) to the LLM, and to obtain an operation specification (evaluation recipe) that can realize the request (i.e., the claimed “plain text workflow query requests identification of how to perform a microscopy workflow on the scanning electron microscope”) as a response from the LLM (i.e., the claimed “plain text workflow query”). In other words, the purpose is to operate the semiconductor device evaluation device (i.e., the claimed “scanning electron microscope”) using natural language (i.e., the claimed “plain text workflow query requests identification of how to perform a microscopy workflow on the scanning electron microscope”), etc.” Par. 0030]
cause, in response to receipt of the plain text workflow query, the scanning electron microscope to capture, according to a default microscopy protocol, an image or an energy spectrum of a specimen that is currently loaded on a stage of the scanning electron microscope; [Ishikawa, see mapping applied to claims 1; Amthor, see mapping applied to claim 1; Amthor, “A response A from the user U to the follow-up query Q is processed by the large language model LLM in order to either use the adjusted microscope settings 40B to capture the new microscope image (i.e., the claimed “perform a specified microscopy action”) 50B, or to modify the adjusted microscope settings 40B again based on the response A from the user U.” Par. 0170; Ishikawa, “The present disclosure has been made in consideration of the above-described problems, and aims to provide a technology that enables instructions for an evaluation device that evaluates semiconductor devices to be obtained from a language model by inputting a string of characters (i.e., the claimed “plain text workflow query provided by a user”) into the language model.” Par. 0006; “The purpose of this disclosure is to input a desired operation (i.e., the claimed “plain text workflow query”) of a semiconductor device evaluation device (semiconductor evaluation tool (i.e., the claimed “access component that accesses a natural language query”) 102 in FIG. 1) to the LLM, and to obtain an operation specification (evaluation recipe) (i.e., the claimed “default microscopy protocol”) that can realize the request as a response from the LLM (i.e., the claimed “response to receipt of the plain text workflow query”).” Par. 0030]
execute a large language model on both the plain text workflow query and the image or energy spectrum of the specimen, [Ishikawa, see mapping applied to claim 1; Amthor, see mapping applied to claim 1; Amthor, “Imaging parameters such as illumination intensity or fluorescence settings can be ascertained by the large language model as a function of the textual input (i.e., the claimed “plain text workflow query”) and the overview image (i.e., the claimed “executes a large language model on both the plain text workflow query and the image of the specimen”) without the user having to specify the illumination intensity or fluorescence excitation or detection channels. This enables a high-quality imaging without requiring significant expertise of the user or a laborious performance of manual settings.” Par. 0032]
thereby yielding a specimen-tailored plain text response to the plain text workflow query; and [Ishikawa, see mapping applied to claim 1; Amthor, see mapping applied to claim 1; Ishikawa, “The present disclosure has been made in consideration of the above-described problems, and aims to provide a technology that enables instructions for an evaluation device that evaluates semiconductor devices to be obtained from a language model by inputting a string of characters (i.e., the claimed “plain text workflow query”) into the language model (i.e., the claimed “large language model”).” Par. 0006; “The purpose of this disclosure is to input a desired operation (i.e., the claimed “plain text workflow query”) of a semiconductor device evaluation device (semiconductor evaluation tool 102 in FIG. 1) to the LLM, and to obtain an operation specification (evaluation recipe) that can realize the request (i.e., the claimed “plain text workflow query requests identification of how to perform a microscopy workflow on the charged-particle microscope”) as a response (i.e., the claimed “specimen-tailored plain text response”) from the LLM (i.e., the claimed “large language model”). In other words, the purpose is to operate the semiconductor device evaluation device (i.e., the claimed “scanning electron microscope”) using natural language (i.e., the claimed “plain text workflow query”), etc.” Par. 0030; Amthor, “Imaging parameters (i.e., the claimed “thereby yielding a specimen-tailored plain text response to the plain text workflow query”) such as illumination intensity or fluorescence settings can be ascertained by the large language model as a function of the textual input (i.e., the claimed “plain text workflow query”) and the overview image (i.e., the claimed “executes a large language model on both the plain text workflow query and the image of the specimen”) without the user having to specify the illumination intensity or fluorescence excitation or detection channels. 
This enables a high-quality imaging without requiring significant expertise of the user or a laborious performance of manual settings.” Par. 0032]
visibly or audibly render the specimen-tailored plain text response on an electronic display or on an electronic speaker associated with the scanning electron microscope. [Ishikawa, see mapping applied to claims 1 - 2; Amthor, see mapping applied to claim 1]
Regarding Claim 18, Ishikawa in view of Amthor has been discussed above. The combination further teaches:
wherein the specimen-tailored plain text response describes a tutorial for performing the microscopy workflow on the specimen, [Ishikawa, see mapping applied to claim 6; Amthor, see mapping applied to claim 6]
wherein the tutorial omits one or more steps that are associated with the microscopy workflow but that the large language model infers are inapplicable or destructive to the specimen. [Ishikawa, see mapping applied to claim 6; Amthor, see mapping applied to claims 3, 6]
Regarding Claim 20, Ishikawa in view of Amthor has been discussed above. The combination further teaches:
wherein the specimen-tailored plain text response indicates that the microscopy workflow is inapplicable or destructive to the specimen and [Ishikawa, see mapping applied to claims 3, 5, 17; Amthor, see mapping applied to claims 3, 5, 17]
describes a tutorial for performing on the specimen an alternative microscopy workflow that the large language model infers is applicable or non-destructive to the specimen. [Ishikawa, see mapping applied to claims 3, 5; Amthor, see mapping applied to claims 3, 5]
Claims 4, 12, 19 are rejected under 35 U.S.C. 103 as being unpatentable over Ishikawa in view of Amthor and Vengroff et al., (U.S. Patent Application Publication 2017/0238751), hereinafter referred to as Vengroff.
Regarding Claims 4 and 12, Ishikawa in view of Amthor has been discussed above. The combination further teaches:
wherein execution of the large language model produces synthesized code in addition to the specimen-tailored natural language response, [Ishikawa, “Alternatively, if one has learned a specific programming language, they can describe a desired function in natural language and input it into the general-purpose LLM (i.e., the claimed “large language model”) 210, which can output source code (i.e., the claimed “produce synthesized code”) that implements that function.” Par. 0029; “The program output (i.e., the claimed “synthesized code”) by the LLM (i.e., the claimed “large language model”) can be used in an evaluation device.” Par. 0066; “This allows the semiconductor LLM 232 to output a response (i.e., the claimed “specimen-tailored natural language response”) that reflects the domain knowledge when an input statement relating to an evaluation apparatus or semiconductor device is given (i.e., the claimed “receipt of the natural language workflow query”).” Par. 0043]
wherein the synthesized code defines one or more videographic visualizations associated with the microscopy workflow, and [Ishikawa, “The program output (i.e., the claimed “synthesized code”) by the LLM (i.e., the claimed “large language model”) can be used in an evaluation device.” Par. 0066; “The user interface 108 (i.e., the claimed “presenter component”) includes a display (not shown) and one or more input devices. The display can display image data output from the semiconductor evaluation tool 102, information output (i.e., the claimed “synthesized code”) from the processor 106, and the like (i.e., the claimed “synthesized code”).” Par. 0011]
wherein the computer-executable components further comprise: a presenter component that runs the synthesized code, [Ishikawa, “Alternatively, if one has learned a specific programming language, they can describe a desired function in natural language and input it into the general-purpose LLM (i.e., the claimed “large language model”) 210, which can output source code (i.e., the claimed “produce synthesized code”) that implements that function.” Par. 0029; “The user interface 108 (i.e., the claimed “presenter component”) includes a display (not shown) and one or more input devices. The display can display image data output from the semiconductor evaluation tool 102, information output from the processor 106 (i.e., the claimed “runs the synthesized code”), and the like.” Par. 0011]
thereby playing the one or more videographic visualizations on an electronic display associated with the charged-particle microscope. [Ishikawa, “The user interface 108 (i.e., the claimed “presenter component”) includes a display (not shown) and one or more input devices. The display (i.e., the claimed “electronic display”) can display image data (i.e., the claimed “videographic visualizations”) output from the semiconductor evaluation tool 102, information output from the processor 106 (i.e., the claimed “runs the synthesized code”), and the like.” Par. 0011; “The semiconductor evaluation tool 102 is, for example, a CD-SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope”)), which is a device that generates an image and a luminance signal waveform based on the detection of secondary electrons and backscattered electrons emitted from a sample when the sample is irradiated with an electron beam.” Par. 0012]
The combination fails to explicitly teach videographic visualizations.
However, Vengroff teaches:
thereby playing the one or more videographic visualizations on an electronic display associated with the charged-particle microscope. [Vengroff, “In some examples, the electronic cookbook 30 may display text of the steps of a recipe alongside a video demonstration of the step (i.e., the claimed “playing the one or more videographic visualizations”),” Par. 0073; “The electronic cookbook 30 may utilize a display (i.e., the claimed “electronic display”) screen of the wireless device 14 (or any other device in communication range of the wireless device, such as a small projection display or a conveniently located display built into an appliance (e.g., a front panel display (FPD) on refrigerator)) or a virtual reality or augmented reality display device in use by a user to allow a user to easily view, receive, or play the recipe instructions (i.e., the claimed “playing the one or more videographic visualizations”).” Par. 0072]
Ishikawa, Amthor and Vengroff pertain to user interfaces/display systems and are analogous to the instant application. Accordingly, it would have been obvious to one of ordinary skill in the user interfaces/display systems art to modify Ishikawa’s teachings of “LLM 232 to output a response (i.e., the claimed “response to receipt of the natural language workflow query”)” of a “SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope”)), which is a device that generates an image and a luminance signal waveform based on the detection of secondary electrons and backscattered electrons emitted from a sample when the sample is irradiated with an electron beam” (Ishikawa, Par. 0012, Par. 0043) with the explicit teachings of “large language model as a function of the textual input (i.e., the claimed “natural language workflow query”) and the overview image (i.e., the claimed “model component that executes a large language model on both the natural language workflow query and the image of the specimen”)” (Amthor, Par. 0032) taught by Amthor and the explicit teachings of “video demonstration of the step (i.e., the claimed “playing the one or more videographic visualizations”)” (Vengroff, Par. 0073) taught by Vengroff in order to “enable a high-quality imaging without requiring significant expertise of the user or a laborious performance of manual settings” (Amthor, Par. 0032) and to “provide expert guidance” “at various stages or at each stage” (Vengroff, Par. 0070, Par. 0071).
Regarding Claim 19, Ishikawa in view of Amthor and Vengroff has been discussed above. The combination further teaches:
wherein execution of the large language model produces synthesized code in addition to the specimen-tailored plain text response, [Ishikawa, see mapping applied to claim 17; Amthor, see mapping applied to claim 17; Ishikawa, “Alternatively, if one has learned a specific programming language, they can describe a desired function in natural language (i.e., the claimed “plain text”) and input it into the general-purpose LLM (i.e., the claimed “large language model”) 210, which can output source code (i.e., the claimed “produce synthesized code”) that implements that function.” Par. 0029; “The program output (i.e., the claimed “synthesized code”) by the LLM (i.e., the claimed “large language model”) can be used in an evaluation device.” Par. 0066; “This allows the semiconductor LLM 232 to output a response (i.e., the claimed “specimen-tailored plain text response”) that reflects the domain knowledge when an input statement relating to an evaluation apparatus or semiconductor device is given (i.e., the claimed “receipt of the natural language workflow query”).” Par. 0043]
wherein the synthesized code defines one or more videographic visualizations associated with the microscopy workflow, and [Ishikawa, see mapping applied to claim 4; Amthor, see mapping applied to claim 4; Vengroff, see mapping applied to claim 4]
wherein the program instructions are further executable to cause the processor to: [Ishikawa, see mapping applied to claim 1]
run the synthesized code, thereby playing the one or more videographic visualizations on the electronic display. [Ishikawa, see mapping applied to claim 4; Amthor, see mapping applied to claim 4; Vengroff, see mapping applied to claim 4]
Claims 7 - 8, 15 - 16 are rejected under 35 U.S.C. 103 as being unpatentable over Ishikawa in view of Amthor and Larson et al., (U.S. Patent Application Publication 2025/0328565), hereinafter referred to as Larson.
Regarding Claims 7 and 15, Ishikawa in view of Amthor has been discussed above. The combination further teaches:
wherein the computer-executable components further comprise: [Ishikawa, see mapping applied to claim 1]
one or more documents that are relevant to the natural language workflow query and to the image or energy spectrum of the specimen, and [Ishikawa, see mapping applied to claim 1; Ishikawa, “Fine tuning can be performed for domain knowledge (i.e., the claimed “documents”) related to the evaluation device by having the general-purpose LLM 210 learn document data (i.e., the claimed “one or more documents”),” Par. 0032; “The document creation system 200 includes: (a) an equipment supplier-side computer system 220 that performs fine tuning of the generic LLM 210 mainly based on domain knowledge (i.e., the claimed “documents”) of the evaluation equipment;” Par. 0022; “The semiconductor evaluation tool 102 is, for example, a CD-SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope”)), which is a device that generates an image and a luminance signal waveform (i.e., the claimed “energy spectrum”) based on the detection of secondary electrons and backscattered electrons emitted from a sample (i.e., the claimed “specimen”) when the sample (i.e., the claimed “specimen”) is irradiated with an electron beam.” Par. 0012]
wherein the large language model receives as input the natural language workflow query, the image or energy spectrum of the specimen, and the one or more documents. [Ishikawa, “Fine tuning can be performed for domain knowledge (i.e., the claimed “documents”) related to the evaluation device by having the general-purpose LLM 210 (i.e., the claimed “large language model”) learn document data (i.e., the claimed “one or more documents”),” Par. 0032; “The document creation system 200 includes: (a) an equipment supplier-side computer system 220 that performs fine tuning of the generic LLM 210 (i.e., the claimed “large language model”) mainly based on domain knowledge (i.e., the claimed “documents”) of the evaluation equipment;” Par. 0022; “The purpose of this disclosure is to input a desired operation (i.e., the claimed “natural language workflow query”) of a semiconductor device evaluation device (semiconductor evaluation tool 102 in FIG. 1) to the LLM (i.e., the claimed “large language model”), and to obtain an operation specification (evaluation recipe) that can realize the request as a response from the LLM. In other words, the purpose is to operate the semiconductor device evaluation device using natural language, etc.” Par. 0030; “The semiconductor evaluation tool 102 is, for example, a CD-SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope”)), which is a device that generates an image and a luminance signal waveform (i.e., the claimed “energy spectrum”) based on the detection of secondary electrons and backscattered electrons emitted from a sample (i.e., the claimed “specimen”) when the sample (i.e., the claimed “specimen”) is irradiated with an electron beam.” Par. 0012]
The combination fails to explicitly teach a context component that identifies, via an embedding search of a document repository, the one or more documents.
However, Larson teaches:
a context component that identifies, via an embedding search of a document repository, [Larson, “A scientific instrument (e.g., mass spectrometer, charged-particle microscope) can be any suitable computerized device that can capture or generate electronic measurements in a scientific, laboratory, research, or clinical operational context (e.g., that can capture or generate spectroscopic images or composition spectra).” Par. 0026; “a repository or database of document-graphs, each document-graph comprising respective context-tagged text blocks; composition of adjacent context-tagged text blocks via iterative graph-walking and embedding-change comparison;” Par. 0036; “In various instances, such searching can be accomplished via embedding techniques (i.e., the claimed “embedding search”) or via keyword techniques. In various cases, when a relevant (or potentially-relevant) context-tagged text block is found,” Par. 0038; “In various embodiments, the search component of the computerized tool (i.e., the claimed “context component”) can electronically store, maintain, control, or otherwise access a document-graph repository. In various aspects, the search component can electronically leverage the document-graph repository so as to identify a plurality of context-tagged text blocks that are substantively relevant to the plain text question.” Par. 0050]
Ishikawa, Amthor and Larson pertain to artificial intelligence systems and are analogous to the instant application. Accordingly, it would have been obvious to one of ordinary skill in the artificial intelligence systems art to modify Ishikawa’s teachings of “LLM 232 to output a response (i.e., the claimed “response to receipt of the natural language instruction”)” of a “SEM (Critical Dimension-Scanning Electron Microscope (i.e., the claimed “charged-particle microscope”)), which is a device that generates an image and a luminance signal waveform based on the detection of secondary electrons and backscattered electrons emitted from a sample when the sample is irradiated with an electron beam” (Ishikawa, Par. 0012, Par. 0043) with the explicit teachings of “large language model as a function of the textual input (i.e., the claimed “natural language instruction”) and the overview image (i.e., the claimed “model component that executes a large language model on both the natural language instruction and the image of the specimen”)” (Amthor, Par. 0032) taught by Amthor and the explicit teachings of “searching can be accomplished via embedding techniques (i.e., the claimed “embedding search”)” (Larson, Par. 0038) taught by Larson in order to “enable a high-quality imaging without requiring significant expertise of the user or a laborious performance of manual settings” (Amthor, Par. 0032) and to automatically answer “inquiries regarding how the scientific instrument should be operated, maintained, serviced, or troubleshot” (Larson, Par. 0027).
Regarding Claims 8 and 16, Ishikawa in view of Amthor and Larson has been discussed above. The combination further teaches:
wherein the computer-executable components further comprise: [Ishikawa, see mapping applied to claim 1]
a context component that executes one or more available deep learning models on the image or energy spectrum of the specimen, [Larson, “In various embodiments, there can be a LLM. In various aspects, the LLM can exhibit any suitable deep learning internal architecture (i.e., the claimed “deep learning model”).” Par. 0044; “A scientific instrument (e.g., mass spectrometer, charged-particle microscope) can be any suitable computerized device that can capture or generate electronic measurements in a scientific, laboratory, research, or clinical operational context (e.g., that can capture or generate spectroscopic images or composition spectra).” Par. 0026; “In various instances, such searching can be accomplished via embedding techniques (i.e., the claimed “embedding search”) or via keyword techniques. In various cases, when a relevant (or potentially-relevant) context-tagged text block is found,” Par. 0038; “In various embodiments, the search component of the computerized tool (i.e., the claimed “context component”/“device”) can electronically store, maintain, control, or otherwise access a document-graph repository. In various aspects, the search component can electronically leverage the document-graph repository so as to identify a plurality of context-tagged text blocks that are substantively relevant to the plain text question.” Par. 0050]
thereby yielding one or more inferencing task results, [Larson, “This can ultimately cause the trainable internal parameters of the artificial intelligence model (e.g., of the LLM 306, of the text-to-graph neural network 806, of the named entity recognition neural network 812, of the re-ranker 1402) to become iteratively optimized for accurately performing its inferencing task (e.g., text synthesis, graph synthesis, named entity extraction, relevance score computation) (i.e., the claimed “inferencing task results”).” Par. 0219]
wherein the large language model receives as input the natural language workflow query, the image or energy spectrum of the specimen, and the one or more inferencing task results. [Ishikawa, see mapping applied to claim 1; Amthor, see mapping applied to claim 1; Larson, see mapping applied to claim 6; Larson, “This can ultimately cause the trainable internal parameters of the artificial intelligence model (e.g., of the LLM 306 (i.e., the claimed “large language model”), of the text-to-graph neural network 806, of the named entity recognition neural network 812, of the re-ranker 1402) to become iteratively optimized for accurately performing its inferencing task (e.g., text synthesis, graph synthesis, named entity extraction, relevance score computation) (i.e., the claimed “inferencing task results”).” Par. 0219]
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Dobashi et al., (TW202303754A) teaches tutorials/recipes and workflows.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EUNICE LEE whose telephone number is 571-272-1886. The examiner can normally be reached M-F 8:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh Mehta can be reached on 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EUNICE LEE/Examiner, Art Unit 2656
/BHAVESH M MEHTA/ Supervisory Patent Examiner, Art Unit 2656