Prosecution Insights
Last updated: April 19, 2026
Application No. 18/629,462

SYSTEM AND METHOD OF CONVERTING DIFFRACTION PATTERN IMAGE FOR INTERCONVERTING SYNTHETIC TEM SADP IMAGE AND REAL TEM SADP IMAGE USING DEEP LEARNING

Non-Final OA: §103, §112
Filed: Apr 08, 2024
Examiner: BLACKSTEN, SYDNEY LYNN
Art Unit: 2674
Tech Center: 2600 — Communications
Assignee: Lightvision Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with an interview)
Avg Prosecution: 2y 9m (typical timeline)
Career History: 8 total applications across all art units; 8 currently pending

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 69.6% (+29.6% vs TC avg)
§112: 21.7% (-18.3% vs TC avg)
Tech Center average is an estimate. Based on career data from 0 resolved cases.
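The "vs TC avg" figures above are simple signed differences between the examiner's statute-specific allowance rate and the Tech Center baseline. As a sanity check, each delta in the panel is consistent with an implied TC 2600 baseline of about 40%. A minimal sketch of that arithmetic (the 0.40 baseline is inferred from the panel, not taken from any official USPTO source):

```python
# Hypothetical sketch: deriving the "vs TC avg" deltas shown in the panel.
# TC_AVG is the implied Tech Center baseline (each panel delta = rate - 0.40);
# it is an assumption read off this page, not an official figure.

TC_AVG = 0.40

examiner_rates = {"101": 0.087, "103": 0.696, "112": 0.217}

def delta_vs_tc(rate: float, tc_avg: float = TC_AVG) -> float:
    """Signed difference between a statute-specific allowance rate and the TC average."""
    return round(rate - tc_avg, 3)

for statute, rate in examiner_rates.items():
    print(f"§{statute}: {delta_vs_tc(rate):+.1%} vs TC avg")
```

Running this reproduces the three deltas shown in the panel (-31.3%, +29.6%, -18.3%).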

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. KR10-2021-0138141, filed on 10/18/2021, and parent Application No. KR10-2021-0171533, filed on 12/03/2021. Should applicant desire to obtain the benefit of foreign priority under 35 U.S.C. 119(a)-(d) prior to declaration of an interference, a certified English translation of the foreign application must be submitted in reply to this action. 37 CFR 41.154(b) and 41.202(e). Failure to provide a certified translation may result in no benefit being accorded for the non-English application.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 04/08/2024 is being considered by the examiner. The submission is in compliance with the provisions of 37 CFR 1.97.
Specification

The disclosure is objected to because of the following informalities:

- In paragraph [0002], line 2, “interconverting synthetic TEM SADP image” should read “interconverting a synthetic TEM SADP image.”
- In paragraph [0002], line 2, “and real TEM SADP image” should read “and a real TEM SADP image.”
- In paragraph [0002], lines 2-3, “using a deep learning” should read “using a deep learning algorithm/model” or “using deep learning.”
- In paragraph [0003], line 1, “central part of real TEM SADP image” should read “central part of a real TEM SADP image.”
- In paragraph [0003], line 6, “does not consider effect of the beam stopper” should read “does not consider the effect of the beam stopper.”
- In paragraph [0004], line 1, “such an error occurred due to misalign” should read “such an error occurring due to misalignment.”
- In paragraph [0004], lines 2-3, “an error occurred in an optical system and an error occurred in a process” should read “an error occurring in an optical system and an error occurring in a process.”
- In paragraph [0007], line 1, “a method of generating synthetic diffraction pattern” should read “a method of generating a synthetic diffraction pattern.”
- In paragraph [0014], line 2, “for interconverting synthetic TEM SADP image” should read “for interconverting a synthetic TEM SADP image” or “for interconverting synthetic TEM SADP images.”
- In paragraph [0014], line 2, “and real TEM SADP image” should read “and a real TEM SADP image.”
- In paragraph [0014], line 3, “by using a deep learning” should read “by using a deep learning model/algorithm” or “by using deep learning.”
- In paragraph [0015], line 3, “from real diffraction pattern image” should read “from a real diffraction pattern image.”
- In paragraph [0017], line 3, “remove unnecessary information from real diffraction pattern image” should read “remove unnecessary information from a real diffraction pattern image.”
- In paragraph [0018], line 3, “the program code being used for performing a method includes removing unnecessary information from real diffraction pattern image” should read “the program code being used for performing a method including removing unnecessary information from a real diffraction pattern image.”
- In paragraph [0018], line 5, “image belonging to real diffraction pattern” should read “image belonging to a real diffraction pattern.”
- In paragraph [0020], line 2, “TEM SADP image corresponding to synthetic TEM SADP image” should read “TEM SADP image corresponding to a synthetic TEM SADP image.”
- In paragraph [0020], line 4, “similar to real TEM SADP image” should read “similar to the real TEM SADP image.”
- In paragraph [0022], line 1, “example of real TEM SADP image” should read “example of a real TEM SADP image.”
- In paragraph [0023], line 1, “illustrating synthetic SADP image” should read “illustrating a synthetic SADP image.”
- In paragraph [0024], line 1, “illustrating an SADP image” should read “illustrating a SADP image.”
- In paragraph [0025], line 1, “illustrating an SADP image” should read “illustrating a SADP image.”
- In paragraph [0026], line 1, “illustrating an SADP image” should read “illustrating a SADP image.”
- In paragraph [0028], line 1, “illustrating an SADP image” should read “illustrating a SADP image.”
- In paragraph [0029], line 1, “lattice constant of material” should read “lattice constant of a/the material.”
- In paragraph [0030], line 1, “lattice constant of material” should read “lattice constant of a/the material.”
- In paragraph [0032], line 1, “unit lattice before aligned” should read “unit lattice before alignment.”
- In paragraph [0032], line 1, “unit lattice after aligned” should read “unit lattice after alignment.”
- In paragraph [0035], line 1, “interconverting real TEM SADP image” should read “interconverting a real TEM SADP image.”
- In paragraph [0035], lines 1-2, “and synthetic TEM SADP image” should read “and a synthetic TEM SADP image.”
- In paragraph [0039], line 1, “example of synthetic TEM SADP image” should read “example of a synthetic TEM SADP image.”
- In paragraph [0040], line 1, “similar to real TEAM SADP image” should read “similar to a real TEM SADP image.”
- In paragraph [0040], line 2, “from synthetic TEM SADP image” should read “from a synthetic TEM SADP image.”
- In paragraph [0043], line 3, “the system may generate excellent TEM SADP image” should read “the system may generate an excellent TEM SADP image.”
- In paragraph [0043], line 6, “not being occurred” is unclear.
- In paragraph [0044], “by emitting an electron beam to material” should read “by emitting an electron beam to a material.”
- In paragraph [0044], line 2, “detect feature of the material” should read “detect features of the material” or “detect a feature of the material.”
- In paragraph [0046], lines 2-3, “an SADP image” should read “a SADP image.”
- In paragraph [0047], line 4, “not being occurred” is unclear.
- In paragraph [0050], line 2, “relative location of atom” should read “relative location of an/the atom.”
- In paragraph [0051], line 2, “relative location of atom” should read “relative location of an/the atom.”
- In paragraph [0051], line 3, “shape of thin plate” should read “shape of a thin plate.”
- In paragraph [0052], line 2, “with synthetic Eward sphere” should read “with a synthetic Eward sphere.”
- In paragraph [0052], line 3, “by using specific program” should read “by using a specific program.”
- In paragraph [0053], line 2, “reached to each of atoms” should read “reached to each of the atoms.”
- In paragraph [0054], line 1, “may generate synthetic SADP image” should read “may generate a synthetic SADP image.”
- In paragraph [0055], line 2, “generates adaptively synthetic SADP image” should read “generates adaptively a synthetic SADP image.”
- In paragraph [0055], line 4, “not be occurred” is unclear.
- In paragraph [0057], line 2, “identical to real SADP image” should read “identical to the/a real SADP image” or “identical to real SADP images.”
- In paragraph [0060], line 2, “applying mathematically” is unclear.
- In paragraph [0061], lines 2-3, “generate synthetic TEM SADP image” should read “generate a synthetic TEM SADP image.”
- In paragraph [0064], line 1, “lattice constant of material” should read “lattice constant of a/the material.”
- In paragraph [0064], line 2, “lattice constant of material” should read “lattice constant of a/the material.”
- In paragraph [0064], lines 4-5, “unit lattice before aligned” should read “unit lattice before alignment.”
- In paragraph [0064], line 5, “unit lattice after aligned” should read “unit lattice after alignment.”
- In paragraph [0065], line 2, “location of atom” should read “location of an/the atom.”
- In paragraph [0065], line 4, “from real SADP image” should read “from a/the real SADP image.”
- In paragraph [0066], line 1, “location of atom” should read “location of an/the atom.”
- In paragraph [0067], line 1, “by using inputted parameter” should read “by using an inputted parameter.”
- In paragraph [0067], line 2, “inputted parameter” should be preceded by “an” to read “an inputted parameter.”
- In paragraph [0067], line 2, “location of atom” should read “location of an/the atom.”
- In paragraph [0067], line 3, “inputted parameter” should be preceded by “an” to read “an inputted parameter.”
- In paragraph [0069], line 1, “location of atom” should read “location of an/the atom.”
- In paragraph [0073], line 5, “using specific program” should read “using a specific program.”
- In paragraph [0080], line 2, “reached to each of atoms” should read “reached to each of the atoms.”
- In paragraph [0093], line 2, “not be occurred” is unclear.
- In paragraph [0094], “discontinuity point occurred in a height direction” is unclear.
- In paragraph [0099], line 2, “slap” should read “slab.”
- In paragraph [0099], lines 3-4, “the diffraction pattern occurred from the discontinuity point” is unclear.
- In paragraph [00103], line 2, “to kind of the atom” should read “to the kind of the atom.”
- In paragraph [00105], line 3, “image process technique” should read “image processing technique.”
- In paragraph [00105], line 4, “interaction of atom” should read “interaction of the atom.”
- In paragraph [00105], line 5, “a SADP image may rapidly calculated” should read “a SADP image may be rapidly calculated.”
- In paragraph [00107], line 2, “location of atom” should read “location of a/the atom.”
- In paragraph [00108], line 2, “using a deep learning” should read “using a deep learning model/algorithm.”
- In paragraph [00109], line 1, “interconverting real TEM SADP image” should read “interconverting a real TEM SADP image.”
- In paragraph [00109], line 2, “synthetic TEM SADP image” should be preceded by “a” to read “a synthetic TEM SADP image.”
- In paragraph [00115], line 4, “based on inputted parameter” should read “based on an inputted parameter.”
- In paragraph [00115], line 4, “and inputted parameter” should read “and an inputted parameter.”
- In paragraph [00116], line 4, “confusion resulted from” should read “confusion resulting from.”
- In paragraph [00117], line 2, “corresponding to real TEM SADP image” should read “corresponding to a real TEM SADP image.”
- In paragraph [00117], line 2, “in which unnecessary part” should read “in which the unnecessary part.”
- In paragraph [00118], lines 1-2, “may select synthetic SADP image” should read “may select a synthetic SADP image.”
- In paragraph [00119], lines 1-2, “may generate synthetic TEM SADP image” should read “may generate a synthetic TEM SADP image.”
- In paragraph [00120], line 6, “reached to atom” should read “reached to an/the atom.”
- In paragraph [00120], lines 6-7, “generating synthetic diffraction pattern image” should read “generating a synthetic diffraction pattern image.”
- In paragraph [00120], line 8, “location of atom” should read “location of an/the atom.”
- In paragraph [00120], line 9, “selecting synthetic diffraction pattern image” should read “selecting a synthetic diffraction pattern image.”
- In paragraph [00124], line 3, “through above process” should read “through the above process.”
- In paragraph [00132], line 2, “performable well its object” is unclear.
- In paragraph [00133], line 2, “a plenty of learning data set” is unclear.
- In paragraph [00133], line 2, “mode” should read “model.”
- In paragraph [00133], line 3, “applied to real application” should read “applied to a real application.”
- In paragraph [00134], line 2, “from collected real TEM SADP image” should read “from the collected real TEM SADP image.”
- In paragraph [00135], line 1, “various errors occurred in real TEM experiment may be simulated” is unclear.
- In paragraph [00136], line 2, “by reducing difference” should read “by reducing the difference.”

Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

In Claim 1, the limitation “a real diffraction pattern image refining unit” has been interpreted under 112(f) as a means plus function limitation because of the combination of a non-structural term “a real diffraction pattern image refining unit” and functional language “configured to” without reciting sufficient structure to achieve this function. (*Note: the specification discloses a real diffraction pattern image refining unit in Paragraphs [00110-00114] and in Fig. 14, reference character 1400.)

In Claim 1, the limitation “a synthetic diffraction pattern generating unit” has been interpreted under 112(f) as a means plus function limitation because of the combination of a non-structural term “a synthetic diffraction pattern generating unit” and functional language “configured to” without reciting sufficient structure to achieve this function. (*Note: the specification discloses a synthetic diffraction pattern generating unit in Paragraph [00115] and in Fig. 14, reference character 1402.)

In Claim 1, the limitation “a real-synthetic interconversion algorithm learning unit” has been interpreted under 112(f) as a means plus function limitation because of the combination of a non-structural term “a real-synthetic interconversion algorithm learning unit” and functional language “configured to” without reciting sufficient structure to achieve this function. (*Note: the specification discloses a real-synthetic interconversion algorithm learning unit in Paragraph [00121] and in Fig. 14, reference character 1404.)
In Claim 7, the limitation “a sample generating unit” has been interpreted under 112(f) as a means plus function limitation because of the combination of a non-structural term “a sample generating unit” and functional language “configured to” without reciting sufficient structure to achieve this function. (*Note: the specification discloses a sample generating unit in Paragraph [0051] and in Fig. 6, reference character 602.)

In Claim 7, the limitation “a vector generating unit” has been interpreted under 112(f) as a means plus function limitation because of the combination of a non-structural term “a vector generating unit” and functional language “configured to” without reciting sufficient structure to achieve this function. (*Note: the specification discloses a HKL vector generating unit in Paragraph [0052] and in Fig. 6, reference character 604.)

In Claim 7, the limitation “a light source generating unit” has been interpreted under 112(f) as a means plus function limitation because of the combination of a non-structural term “a light source generating unit” and functional language “configured to” without reciting sufficient structure to achieve this function. (*Note: the specification discloses a light source generating unit in Paragraph [0053] and in Fig. 6, reference character 606.)

In Claim 7, the limitation “a diffraction pattern generating unit” has been interpreted under 112(f) as a means plus function limitation because of the combination of a non-structural term “a diffraction pattern generating unit” and functional language “configured to” without reciting sufficient structure to achieve this function. (*Note: the specification discloses a diffraction pattern generating unit in Paragraph [0054] and in Fig. 6, reference character 608.)

In Claim 7, the limitation “a selection unit” has been interpreted under 112(f) as a means plus function limitation because of the combination of a non-structural term “a selection unit” and functional language “configured to” without reciting sufficient structure to achieve this function. (*Note: the specification discloses a selection unit in Paragraph [00120].)

In Claim 11, the limitation “a real diffraction pattern discriminating unit” has been interpreted under 112(f) as a means plus function limitation because of the combination of a non-structural term “a real diffraction pattern discriminating unit” and functional language “configured to” without reciting sufficient structure to achieve this function. (*Note: the specification discloses a real diffraction pattern discriminating unit in Paragraph [00128] and in Fig. 20.)

In Claim 11, the limitation “a synthetic diffraction pattern discriminating unit” has been interpreted under 112(f) as a means plus function limitation because of the combination of a non-structural term “a synthetic diffraction pattern discriminating unit” and functional language “configured to” without reciting sufficient structure to achieve this function. (*Note: the specification discloses a synthetic diffraction pattern discriminating unit in Paragraph [00129] and in Fig. 20.)

In Claim 12, the limitation “a real diffraction pattern image refining unit” has been interpreted under 112(f) as a means plus function limitation because of the combination of a non-structural term “a real diffraction pattern image refining unit” and functional language “configured to” without reciting sufficient structure to achieve this function. (*Note: the specification discloses a real diffraction pattern image refining unit in Paragraphs [00111-00114] and in Fig. 14, reference character 1400.)

In Claim 12, the limitation “a synthetic diffraction pattern image refining unit” has been interpreted under 112(f) as a means plus function limitation because of the combination of a non-structural term “a synthetic diffraction pattern image refining unit” and functional language “configured to” without reciting sufficient structure to achieve this function. (*Note: the specification discloses a synthetic diffraction pattern image refining unit in Paragraph [00115] and in Fig. 14, reference character 1402.)

In Claim 12, the limitation “an algorithm learning unit” has been interpreted under 112(f) as a means plus function limitation because of the combination of a non-structural term “an algorithm learning unit” and functional language “configured to” without reciting sufficient structure to achieve this function. (*Note: the specification discloses a real-synthetic algorithm learning unit in Paragraph [00121] and in Fig. 14, reference character 1404.)

In Claim 13, the limitation “a real diffraction pattern image refining unit” has been interpreted under 112(f) as a means plus function limitation because of the combination of a non-structural term “a real diffraction pattern image refining unit” and functional language “configured to” without reciting sufficient structure to achieve this function. (*Note: the specification discloses a real diffraction pattern image refining unit in Paragraphs [00111-00114] and in Fig. 14, reference character 1400.)

In Claim 13, the limitation “a synthetic diffraction pattern image refining unit” has been interpreted under 112(f) as a means plus function limitation because of the combination of a non-structural term “a synthetic diffraction pattern image refining unit” and functional language “configured to” without reciting sufficient structure to achieve this function. (*Note: the specification discloses a synthetic diffraction pattern image refining unit in Paragraph [00115] and in Fig. 14, reference character 1402.)

In Claim 13, the limitation “an algorithm learning unit” has been interpreted under 112(f) as a means plus function limitation because of the combination of a non-structural term “an algorithm learning unit” and functional language “configured to” without reciting sufficient structure to achieve this function. (*Note: the specification discloses a real-synthetic algorithm learning unit in Paragraph [00121] and in Fig. 14, reference character 1404.)

Because this/these claim limitation(s) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 U.S.C. § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 7, 11, 13, and 15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention. The Examiner strongly suggests that appropriate corrections be made to clarify the claim scope.

With respect to Claim 1, the claim recites the following, each of which renders the claim indefinite:
- “an image” on line 9 (unclear antecedent basis); it is unclear whether the image recited on line 9 is the same as or different from the image recited on line 8.
- “an image” on lines 9-10 (unclear antecedent basis); it is unclear whether the image recited on lines 9-10 is the same as or different from the image recited on line 7.

With respect to Claim 7, the claim recites the following, each of which renders the claim indefinite:
- “in unit lattice” on line 4 (unclear antecedent basis); it is unclear whether the unit lattice recited in line 4 of claim 7 is the same as or different from the unit lattice recited in line 2 of claim 6.
- “atom in the sample” on line 11 (unclear antecedent basis); it is unclear whether the “atom” recited in line 11 of claim 7 is the same as or different from the atom recited in line 9 of claim 7.

With respect to Claim 11, the claim recites the following, each of which renders the claim indefinite:
- “a deep learning model” on lines 9-10 (unclear antecedent basis);
- “a deep learning model” on line 13 (unclear antecedent basis);
- “a deep learning model” on line 16 (unclear antecedent basis).
It is unclear whether the deep learning models recited in lines 9-10, 13, and 16 are the same as or different from the deep learning model recited in line 5 of claim 11.
With respect to Claim 13, the claim recites the following, which renders the claim indefinite:
- “specific algorithm” on line 3 (unclear to what this refers); it is unclear whether the “specific algorithm” recited in claim 13 is the same as or different from the “deep learning algorithm” recited in line 9 of claim 12.

With respect to Claim 15, the claim recites the following, which renders the claim indefinite:
- “the unnecessary image” on line 9 (unclear antecedent basis); it is unclear whether the “unnecessary image” is the same as or different from “the unnecessary information” recited in line 11 of claim 1.

35 U.S.C. § 101

The Examiner finds that the invention provides an improvement to the field of transmission electron microscopy (TEM) selected area diffraction pattern (SADP) imaging and is therefore eligible subject matter under 35 U.S.C. 101.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1 and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (Foreign Patent Pub. No. KR 10-2111124 B1, hereafter referred to as Lee) in view of Rao et al. (Foreign Patent Pub. No. WO 2021-096561 A1, hereafter referred to as Rao).

Regarding Claim 1, Lee teaches a system for converting a diffraction pattern image (Paragraphs [0001-2], Lee teaches reconstructing a lens-less tomography diffraction image acquired from CMOS, CCD, etc., to obtain an image close to the original image.) comprising:

a real diffraction pattern image refining unit configured to remove unnecessary information from a real diffraction pattern image (Fig. 2, reference character S21, Fig. 8, reference character 182a, Paragraph [0018], Lee teaches a pre-processing module that removes unnecessary noise present in the diffraction image of the sample.);

[Figures reproduced from Lee omitted.]

a synthetic diffraction pattern generating unit configured to obtain a synthetic diffraction pattern image corresponding to the real diffraction pattern image in which the unnecessary information is removed (Fig. 1, reference character S22, Fig.
8, reference character 182b, Paragraphs [0029], [0042], [0064-68], Lee teaches capturing a background image and a diffraction image from the image sensor, loading the background image and diffraction image, removing background noise from the diffraction image based on optical information, creating holographic images, performing an iterative phase reconstruction step that performs convolution on the frequency and time axes to make the image close to the original diffraction image using the image from which noise has been removed in the pre-processing step and repeats the convolution a set number of times, and performing a post-processing step for improving the image quality of the reconstructed image generated in the iterative phase reconstruction.).

Lee does not explicitly disclose a real-synthetic interconversion algorithm learning unit configured to generate an image belonging to a real diffraction pattern domain from an image belonging to a synthetic diffraction pattern domain, or generate an image belonging to the synthetic diffraction pattern domain from an image belonging to the real diffraction pattern domain, by using at least one of the real diffraction pattern image in which the unnecessary information is removed and the synthetic diffraction pattern image.

Rao is in the same field of art, processing a real image to generate a simulated image that corresponds to the real image and processing a simulated image to generate a predicted image that corresponds to the simulated image. Further, Rao teaches a real-synthetic interconversion algorithm learning unit configured to generate an image belonging to a real diffraction pattern domain from an image belonging to a synthetic diffraction pattern domain (Paragraphs [0059-0061], Fig.
2, reference characters 180, 154, Rao teaches a Sim2Real generator model that a Sim2Real engine (154) can utilize to perform pixel-level adaptation techniques to make a source image (e.g., a simulated image) look like an image from the target domain (e.g., a real-world environment) without any pairs of simulated-realistic images.) or generate an image belonging to the synthetic diffraction pattern domain from an image belonging to the real diffraction pattern domain by using at least one of the real diffraction pattern image in which the unnecessary information is removed and the synthetic diffraction pattern image (Paragraph [0061], Fig. 2, reference characters 180, 156, Rao teaches a Real2Sim generator model that the Real2Sim engine (156) uses to adapt an image from a target domain (e.g., a real-world environment) to a source domain (e.g., a simulated environment). Under the Broadest Reasonable Interpretation, the Examiner interprets the “or” between the two claim limitations to mean that only one of the limitations is required.).

[Figure reproduced from Rao omitted.]

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Lee by using a Real2Sim generator model to convert an image from a target domain, such as the real-world environment, to a source domain, such as a simulated environment, or by using a Sim2Real generator model to perform pixel-level adaptation techniques to make a source image, such as a simulated image, look like an image in the target domain, such as the real-world domain, as taught by Rao, to make an invention that converts a real diffraction pattern image into a synthetic diffraction pattern image or vice versa using a Real2Sim (or Sim2Real) generator model; thus, one of ordinary skill in the art would be motivated to combine the references to mitigate the “reality gap” that exists between real and simulated environments.
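Editor's note for readers less familiar with the cited technique: a paired Sim2Real/Real2Sim generator setup of the kind the Office Action attributes to Rao is commonly trained without paired images by enforcing cycle consistency (an image mapped real-to-sim and back should return to itself). The following toy sketch is an assumption-laden illustration of that objective only; the linear "generators" are placeholders for learned networks and do not reflect Rao's or the applicant's actual models:

```python
# Toy illustration of cycle consistency between a Sim2Real and a Real2Sim
# generator. The affine maps below are hypothetical stand-ins for deep
# networks; only the loss structure is the point.
import numpy as np

def sim2real(img: np.ndarray) -> np.ndarray:
    """Toy generator: map a synthetic-domain image toward the real domain."""
    return img * 0.9 + 0.05  # placeholder for a learned network

def real2sim(img: np.ndarray) -> np.ndarray:
    """Toy inverse generator: map a real-domain image toward the synthetic domain."""
    return (img - 0.05) / 0.9  # placeholder for a learned network

def cycle_consistency_loss(real_batch: np.ndarray) -> float:
    """L1 distance between an image and its real -> sim -> real round trip."""
    reconstructed = sim2real(real2sim(real_batch))
    return float(np.mean(np.abs(real_batch - reconstructed)))

rng = np.random.default_rng(0)
batch = rng.random((4, 64, 64))       # stand-in for real TEM SADP images
print(cycle_consistency_loss(batch))  # near zero: the toy maps are exact inverses
```

In an actual unpaired-translation setup, the two generators are trained jointly with adversarial discriminators plus this cycle term, which is what removes the need for paired real/synthetic SADP images.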
The reality gap can result in the generation of synthetic training data that does not accurately reflect what would occur in a real-world scenario, which can impact the performance of machine learning models trained on such simulated training data and require a significant amount of real-world training data to mitigate the reality gap (Rao, Paragraph [0002]). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

In regards to Claim 12, Lee discloses a system for converting a diffraction pattern image (Paragraphs [0001-2], Lee teaches reconstructing a lens-less tomography diffraction image acquired from CMOS, CCD, etc., to obtain an image close to the original image.) comprising: a real diffraction pattern image refining unit configured to remove unnecessary information from a real diffraction pattern image (Fig. 2, reference character S21, Fig. 8, reference character 182a, Paragraph [0018], Lee teaches a pre-processing module that removes unnecessary noise present in the diffraction image of the sample.); and a synthetic diffraction pattern generating unit configured to obtain a synthetic diffraction pattern image corresponding to the real diffraction pattern image in which the unnecessary information is removed (Fig. 1, reference character S22, Fig.
8, reference character 182b, Paragraphs [0029], [0042], [0064-68], Lee teaches capturing a background image and a diffraction image from the image sensor, loading the background image and diffraction image, removing background noise from the diffraction image based on optical information, creating holographic images, performing an iterative phase reconstruction step that performs convolution on the frequency and time axes to bring the image close to the original diffraction image using the image from which noise was removed in the preprocessing step, repeating the convolution a set number of times, and a post-processing step for improving the image quality of the reconstructed image generated in the iterative phase reconstruction.).

Lee does not explicitly disclose an algorithm learning unit configured to generate a diffraction pattern image belonging to a real diffraction pattern domain from a diffraction pattern image belonging to a synthetic diffraction pattern domain by using a deep learning algorithm learned with at least one of the real diffraction pattern image and the synthetic diffraction pattern image.

Rao is in the same field of art of processing a real image to generate a simulated image that corresponds to the real image and processing a simulated image to generate a predicted image that corresponds to the simulated image. Further, Rao teaches an algorithm learning unit configured to generate a diffraction pattern image belonging to a real diffraction pattern domain from a diffraction pattern image belonging to a synthetic diffraction pattern domain by using a deep learning algorithm learned with at least one of the real diffraction pattern image and the synthetic diffraction pattern image (Paragraph [0005], Rao teaches a simulation-to-real machine learning model "Sim2Real" which generates predicted real images by processing the simulated images.
The Sim2Real model can be trained using a simulated image, a corresponding predicted real image, and a corresponding predicted simulated image. Under Broadest Reasonable Interpretation, the Examiner interprets "at least one of" to mean that only one of the real diffraction pattern image and the synthetic diffraction pattern image is required.).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Lee by using a simulation-to-real machine learning model, trained using a simulated image and a corresponding predicted real image, to generate predicted real images by processing the synthetic images, as taught by Rao, to arrive at an invention that generates real diffraction pattern images from synthetic diffraction pattern images using a simulation-to-real machine learning model. One of ordinary skill in the art would be motivated to combine the references to generate simulated training data that better reflects what would occur in a real-world environment (Rao, Paragraph [0002]). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

In regards to Claim 13, Lee in view of Rao discloses the system of claim 12, wherein the algorithm learning unit generates the diffraction pattern image belonging to the real diffraction pattern domain from the diffraction pattern image belonging to the synthetic diffraction pattern domain by using a specific algorithm when the real diffraction pattern image and the synthetic diffraction pattern image are prepared (Paragraph [0005], Rao teaches generating predicted real images by processing the simulated images using the Sim2Real model. The Sim2Real model is trained on a simulated image, a corresponding predicted real image, and a corresponding predicted simulated image.
The Examiner interprets using a Sim2Real model trained on both real and synthetic images to be preparing the images, since the claim is silent as to the meaning of "preparing" the real and synthetic images.).

In regards to Claim 14, Lee in view of Rao discloses a system for converting a diffraction pattern image (Paragraphs [0001-2], Lee teaches reconstructing a lens-less tomography diffraction image acquired from CMOS, CCD, etc., to obtain an image close to the original image.) comprising: a real diffraction pattern image refining unit configured to remove unnecessary information from a real diffraction pattern image (Fig. 2, reference character S21, Fig. 8, reference character 182a, Paragraph [0018], Lee teaches a pre-processing module that removes unnecessary noise present in the diffraction image of the sample.); and a synthetic diffraction pattern generating unit configured to obtain a synthetic diffraction pattern image corresponding to the real diffraction pattern image in which the unnecessary information is removed (Fig. 1, reference character S22, Fig. 8, reference character 182b, Paragraphs [0029], [0042], [0064-68], Lee teaches capturing a background image and a diffraction image from the image sensor, loading the background image and diffraction image, removing background noise from the diffraction image based on optical information, creating holographic images, performing an iterative phase reconstruction step that performs convolution on the frequency and time axes to bring the image close to the original diffraction image using the image from which noise was removed in the preprocessing step, repeating the convolution a set number of times, and a post-processing step for improving the image quality of the reconstructed image generated in the iterative phase reconstruction.).
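Lee's iterative phase reconstruction, as summarized in the citations above, alternates between the detector and object domains for a set number of iterations. Under the assumption that this resembles a Gerchberg-Saxton-style loop (an illustrative assumption, not Lee's actual algorithm), a minimal 1-D pure-Python sketch:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (object -> detector domain)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    """Inverse DFT (detector -> object domain)."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def reconstruct(measured_magnitude, iterations=20):
    """Recover an object from diffraction magnitudes by repeating, a set
    number of times: transform to the object domain, enforce an
    object-domain constraint (real and non-negative), transform back,
    and restore the measured magnitudes while keeping the phase estimate."""
    X = [complex(m, 0.0) for m in measured_magnitude]  # zero-phase start
    for _ in range(iterations):
        x = idft(X)
        x = [complex(max(v.real, 0.0), 0.0) for v in x]   # object constraint
        X = dft(x)
        X = [m * cmath.exp(1j * cmath.phase(v))           # magnitude constraint
             for m, v in zip(measured_magnitude, X)]
    return [v.real for v in idft(X)]
```

For a uniform magnitude spectrum the loop recovers an impulse, e.g. `reconstruct([4.0, 4.0, 4.0, 4.0])` converges to approximately `[4, 0, 0, 0]`; production reconstructions differ in their transforms and constraints but keep this alternating structure.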
Lee does not explicitly disclose an algorithm learning unit configured to generate a diffraction pattern image belonging to a real diffraction pattern domain from a diffraction pattern image belonging to a synthetic diffraction pattern domain by using a deep learning algorithm learned with at least one of the real diffraction pattern image and the synthetic diffraction pattern image.

Rao is in the same field of art of processing a real image to generate a simulated image that corresponds to the real image and processing a simulated image to generate a predicted image that corresponds to the simulated image. Further, Rao teaches an algorithm learning unit configured to generate a diffraction pattern image belonging to a real diffraction pattern domain from a diffraction pattern image belonging to a synthetic diffraction pattern domain by using a deep learning algorithm learned with at least one of the real diffraction pattern image and the synthetic diffraction pattern image (Paragraphs [0038], [0093], Fig. 3A, reference character 323, Fig. 5, reference character 566, Rao teaches a real-to-simulation "Real2Sim" generator model used for processing real images to generate a predicted simulated image that corresponds to a real image. Fig. 5 shows a flow chart illustrating an example method of training a simulation-to-real generator model, which involves processing the simulated predicted real image with a real-to-simulation generator model to generate a predicted simulated image.).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Lee by using a real-to-simulation generator model that processes real images to generate a simulated image corresponding to a real image, as taught by Rao, to arrive at an invention that uses a Real2Sim generator to convert a real diffraction pattern image into a synthetic diffraction pattern image. One of ordinary skill in the art would be motivated to combine the references since simulated predicted real images provide more accurate representations of real-world environments or scenarios when compared to raw simulated images generated by a simulator (Rao, Paragraph [0073]). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Claims 2 and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (Foreign Patent Pub. No. KR 10-2111124 B1, hereafter referred to as Lee) in view of Rao et al. (Foreign Patent Pub. No. WO 2021-096561 A1, hereafter referred to as Rao) in further view of Henstra et al. (U.S. Patent No. 11,404,241 B2, hereafter referred to as Henstra).

Regarding Claim 2, Lee in view of Rao teaches the system of claim 1. Lee in view of Rao does not explicitly disclose wherein the diffraction pattern image is a TEM (transmission electron microscope) SADP (selected area diffraction pattern) image. Henstra is in the same field of art of generating diffraction pattern images. Further, Henstra teaches wherein the diffraction pattern image is a TEM (transmission electron microscope) SADP (selected area diffraction pattern) image (Fig. 4, reference character 414, Col.
3, lines 30-39, Henstra teaches generating a transmission electron microscope (TEM) image of a sample by performing electron diffraction, such as Selected Area Electron Diffraction (SAED), using TEM techniques.).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Lee in view of Rao by acquiring a TEM SADP diffraction pattern image on which to perform real-to-synthetic diffraction pattern conversion, as taught by Henstra, to arrive at an invention that generates synthetic TEM SADP diffraction pattern images from real TEM SADP diffraction pattern images. One of ordinary skill in the art would be motivated to combine the references since the method of generating a synthetic diffraction image from a real diffraction image as described in Lee can be used in various optical imaging devices (Lee, Paragraph [0031]). In addition, switching between TEM (transmission electron microscopy) and STEM (scanning transmission electron microscopy) imaging modes requires the user to switch out detectors, change the excitation of different lenses, and wait for beam shifts and focus drifts to fade, making it difficult and time consuming to switch such systems from one mode of operation to the other (Henstra, Col. 1, lines 15-27), and this could enable easy replacement of a light source and image sensor and reduce image processing time (Lee, Paragraphs [0024], [0006]). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
In regards to Claim 9, Lee in view of Rao in further view of Henstra discloses the system of claim 2, wherein the real-synthetic interconversion algorithm learning unit generates a diffraction pattern image belonging to the real diffraction pattern domain from a diffraction pattern image belonging to the synthetic diffraction pattern domain by using a deep learning model (Paragraphs [0059-0061], Rao teaches a Sim2Real generator model that a Sim2Real engine (154) can utilize to perform pixel-level adaptation techniques that make a source image (e.g., a simulated image) look like an image from the target domain (e.g., the real-world environment).).

In regards to Claim 10, Lee in view of Rao in further view of Henstra discloses the system of claim 9, wherein the real-synthetic interconversion algorithm learning unit generates the diffraction pattern image belonging to the real diffraction pattern domain from the diffraction pattern image belonging to the synthetic diffraction pattern domain by using a specific algorithm when the real diffraction pattern image and the synthetic diffraction pattern image are prepared (Paragraphs [0059-0061], Rao teaches a Sim2Real generator model that a Sim2Real engine (154) can utilize to perform pixel-level adaptation techniques that make a source image (e.g., a simulated image) look like an image from the target domain (e.g., the real-world environment). The Sim2Real model is trained on a simulated image, a corresponding predicted real image, and a corresponding predicted simulated image. The Examiner interprets that the "corresponding predicted real image" belongs to the real image domain.).

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Rao (Foreign Patent Pub. No. WO 2021-096561 A1, hereafter referred to as Rao) in view of Lee (Foreign Patent Pub. No. KR 10-2111124 B1, hereafter referred to as Lee).
Regarding Claim 15, Rao teaches a non-transitory computer readable medium storing a program code wherein the program code, when executed by a processor, is used for performing a method comprising (Paragraph [0019], Rao teaches a non-transitory computer readable storage medium storing instructions executable by one or more processors to perform a method.).

Rao does not explicitly disclose removing unnecessary information from a real diffraction pattern image; generating a synthetic diffraction pattern image corresponding to the real diffraction pattern image in which the unnecessary information is removed; and generating an image belonging to a real diffraction pattern domain from an image belonging to a synthetic diffraction pattern domain or generating the image belonging to the synthetic diffraction pattern domain from the image belonging to the real diffraction pattern domain by using at least one of the real diffraction pattern image in which the unnecessary image is removed and the synthetic diffraction pattern image.

Lee is in the same field of art of converting real images to synthetic images to reduce computational load. Further, Lee teaches removing unnecessary information from a real diffraction pattern image (Fig. 2, reference character S21, Fig. 8, reference character 182a, Paragraph [0018], Lee teaches a pre-processing module that removes unnecessary noise present in the diffraction image of the sample.); generating a synthetic diffraction pattern image corresponding to the real diffraction pattern image in which the unnecessary information is removed (Fig. 1, reference character S22, Fig. 8, reference character 182b, Paragraph [0042], Lee teaches generating a holographic image according to the information of the incident light, from which the noise is removed.
The holographic image may be generated using a known holographic image generation algorithm.); and generating the image belonging to the synthetic diffraction pattern domain from the image belonging to the real diffraction pattern domain by using at least one of the real diffraction pattern image in which the unnecessary image is removed and the synthetic diffraction pattern image (Paragraphs [0064-68], Fig. 2, reference characters S1, S21, and S23, Lee teaches capturing a background image and a diffraction image from the image sensor, loading the background image and diffraction image, removing background noise from the diffraction image based on optical information, and creating holographic images. The Examiner interprets the diffraction image from the image sensor to be from the real diffraction pattern domain and the holographic image to be from the synthetic diffraction pattern domain, since it is a generated image. Additionally, under Broadest Reasonable Interpretation, the Examiner interprets the "or" separating the two limitations to mean that only one of the limitations is required.).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Rao by performing the method including the steps of removing noise from the real diffraction image, generating a holographic diffraction image corresponding to the real diffraction image with the noise removed, and generating a synthetic diffraction image from the real diffraction image using the real diffraction pattern image in which the noise has been removed, as taught by Lee, to arrive at an invention that generates a synthetic diffraction image from a real diffraction image using a non-transitory computer-readable storage medium. One of ordinary skill in the art would be motivated to combine the references to automate the real-to-synthetic image conversion method and enable rapid calculation processing (Rao, Paragraph [0019]). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Allowable Subject Matter

Claims 3-8 and 11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

Regarding Claim 3 and dependents, no prior art teaches wherein the unnecessary information is information concerning an annotation, scale, or index.

In regards to Claim 5 and dependents, no prior art teaches wherein the synthetic diffraction pattern generating unit uses a TEM SADP simulation program, and wherein the TEM SADP simulation program generates a synthetic TEM SADP image corresponding to a real TEM SADP image based on an inputted lattice constant and an inputted unit lattice.
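The allowed simulation-program limitations describe building a synthetic SADP from a lattice constant, unit-lattice atom positions, and a zone axis. Purely as an illustration of the underlying crystallography, and not the applicant's actual program, the sketch below generates spot positions for a simple cubic lattice viewed down a [001] zone axis, where each spot is a reciprocal lattice vector g_hk0 with |g| = 1/d:

```python
import math

def sadp_spots(a, hk_range=2):
    """Hypothetical synthetic-SADP sketch for a simple cubic lattice with
    lattice constant a (angstrom), [001] zone axis. Returns one
    ((gx, gy), d_spacing) tuple per reflection, where (gx, gy) = (h/a, k/a)
    is the reciprocal lattice vector in 1/angstrom and d = a / sqrt(h^2 + k^2)
    is the interplanar spacing. A real simulator would also compute spot
    brightness from structure factors, plus excitation error and
    camera-length scaling."""
    spots = []
    for h in range(-hk_range, hk_range + 1):
        for k in range(-hk_range, hk_range + 1):
            if h == 0 and k == 0:
                continue  # skip the transmitted (000) beam
            d = a / math.sqrt(h * h + k * k)
            spots.append(((h / a, k / a), d))
    return spots

# e.g., a silicon-like lattice constant of 5.43 angstrom
spots = sadp_spots(5.43)
```

Selecting the synthetic pattern that best matches a real SADP (the claimed selection unit) then reduces to comparing spot geometries such as these d-spacings and inter-spot angles.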
In regards to Claim 7 and dependents, no prior art teaches wherein the synthetic diffraction pattern generating unit includes: a sample generating unit configured to generate a sample by using at least one of a parameter about a lattice constant, a parameter about a relative location of an atom in a unit lattice, and a parameter about a zone axis; a vector generating unit configured to generate a reciprocal lattice vector corresponding to the unit lattice; a light source generating unit configured to calculate the brightness of an electron beam reaching an atom in the generated sample; a diffraction pattern generating unit configured to generate the synthetic diffraction pattern image by using the generated reciprocal lattice vector, the location of the atom in the sample, and the calculated brightness of the electron beam; and a selection unit configured to select the synthetic diffraction pattern image corresponding to the real diffraction pattern image from the generated synthetic diffraction pattern images.

In regards to Claim 11, no prior art teaches wherein the real-synthetic interconversion algorithm learning unit includes: a Real2Sim converting unit; a Sim2Real converting unit; a real diffraction pattern discriminating unit configured to learn a deep learning model for discriminating a synthetic diffraction pattern image converted through the Sim2Real converting unit from a diffraction pattern image photographed through a real TEM, the synthetic diffraction pattern image being similar to the real diffraction pattern image; and a synthetic diffraction pattern discriminating unit configured to learn a deep learning model for discriminating the real diffraction pattern image converted through the Real2Sim converting unit from a synthetic diffraction pattern image generated through a simulation, the real diffraction pattern image being similar to the synthetic diffraction pattern, and wherein the Real2Sim converting unit learns a deep learning model for converting an image belonging to the
real diffraction pattern domain to an image belonging to the synthetic diffraction pattern domain, and the Sim2Real converting unit learns a deep learning model for converting the image belonging to the synthetic diffraction pattern domain to the image belonging to the real diffraction pattern domain.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SYDNEY L BLACKSTEN, whose telephone number is (571) 272-7651. The examiner can normally be reached 8:30am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Oneal Mistry, can be reached at 313-446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SYDNEY L BLACKSTEN/
Examiner, Art Unit 2674

/ONEAL R MISTRY/
Supervisory Patent Examiner, Art Unit 2674

Prosecution Timeline

Apr 08, 2024
Application Filed
Mar 19, 2026
Non-Final Rejection — §103, §112 (current)


Prosecution Projections

1-2
Expected OA Rounds
Favorable
Grant Probability
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
