DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This action is responsive to the request for continued examination (RCE), amendments and remarks received 27 May 2025. Claims 1 - 24 are currently pending.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 27 May 2025 has been entered.
Claim Objections
Claim 4 is objected to because of the following informalities: Lines 2 - 3 of claim 4 recite, in part, “forms a a continuous” which appears to contain a typographical error and/or a minor informality. The Examiner suggests amending the claim to --forms [[a]] a continuous-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 5 is objected to because of the following informalities: Lines 2 - 3 of claim 5 recite, in part, “forms a an image” which appears to contain a typographical error and/or a minor informality. The Examiner suggests amending the claim to --forms [[a]] an image-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim Interpretation
The Examiner asserts that the broadest reasonable interpretation, in view of the instant disclosure, of “a machine deep learning model” appears to encompass any machine learning model that utilizes deep learning at least because the instant specification discloses that supervised machine learning models may be used and that exemplary models of supervised learning include deep learning, see at least page 9 paragraphs 0051 - 0052 and page 10 paragraph 0059 of the instant specification. Therefore, for purposes of examination, claim limitations corresponding to “a machine deep learning model” are being interpreted as encompassing any machine learning model that utilizes deep learning.
The Examiner asserts that the broadest reasonable interpretation, in view of the instant disclosure, of “wherein the output neural network layer… directly outputs an image containing pixel values that collectively define an entire polygonal shape of the entire assist feature” (emphasis added) appears to encompass interpretations wherein an image of an assist feature is obtained from information, i.e., one or more characteristics, of an assist feature output from a machine learning model at least because the instant specification discloses, at best, that “one or more characteristics 580 of one or more assist features for the portion 533 are obtained as output from the machine learning model 560” and that the “one or more characteristics 580 may include an image (pixelated, binary Manhattan, binary curvilinear, or continuous tone) of the assist features”, see at least page 13 paragraphs 0073 and 0075 of the instant specification. Therefore, for purposes of examination, claim limitations corresponding and/or related to “wherein the output neural network layer … directly outputs an image containing pixel values that collectively define an entire polygonal shape of the entire assist feature” are being interpreted as encompassing interpretations wherein an image of the assist feature(s) is obtained from information output from a machine learning model.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
The rejections of claims 4 and 5 under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, are hereby withdrawn in view of the amendments and remarks received 27 May 2025.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 - 6, 8 - 16, 22 and 23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception, an abstract idea, without significantly more. The claims are directed towards obtaining/determining a characteristic of an assist feature for a portion of a design layout.
The claims recite, at a high level of generality, obtaining (determine)… a characteristic of an assist feature for the portion… based on at least a collective input of all the design layout data… and collective processing of all of the design layout data and directly output[ting] a collection of values of parameters of one or more functions representing an entire polygonal shape of the entire assist feature or directly output[ting] an image containing pixel values that collectively define an entire polygonal shape of the entire assist feature.
The limitation of “obtaining (determine)… a characteristic of an assist feature for the portion… based on at least a collective input of all the design layout data… and collective processing of all of the design layout data”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind using observation, evaluation, judgment, and opinion but for the recitation of generic computer components. That is, other than reciting “by a hardware computer system,” (see claim 1), “using a machine deep learning model comprising a plurality of neural network layers including an output neural network layer” and “a non-transitory computer readable medium having instructions therein, the instructions, upon execution by a computer system, configured to cause the computer system to” (see claim 16), nothing in the claim element precludes the step from practically being performed in the mind. The Examiner asserts that the claim(s) do not provide any details nor limit how the machine deep learning model operates or how the characteristic of the assist feature is obtained/determined, and the plain meaning of “obtaining”/”determining” encompasses mental observations, evaluations, judgments and/or opinions, e.g., a user mentally deciding and/or mentally visualizing an assist feature to add to a portion of a design layout. Under its broadest reasonable interpretation when read in light of the specification, the “obtaining”/”determining” encompasses mental observations, evaluations, judgments and/or opinions that are practically performed in the human mind. For example, but for the recitation of the aforementioned generic computer components, the claimed obtaining/determining a characteristic of an assist feature for the portion encompasses a user observing and thinking about an image of design layout data of a portion of a design layout and performing an evaluation, judgment and opinion to mentally decide on (obtain/determine) and/or mentally visualize (obtain/determine) an assist feature to add to the portion of the design layout. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, with or without the use of a physical aid such as pen and paper, then it falls within the “Mental Processes” grouping of abstract ideas. See MPEP § 2106.04(a)(2)(III).
Similarly, the limitation of directly output[ting] a collection of values of parameters of one or more functions representing an entire polygonal shape of the entire assist feature or directly output[ting] an image containing pixel values that collectively define an entire polygonal shape of the entire assist feature, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind using observation, evaluation, judgment, and opinion but for the recitation of generic computer components. That is, other than reciting “by a hardware computer system,” (see claim 1), “the output neural network layer” and “a non-transitory computer readable medium having instructions therein, the instructions, upon execution by a computer system, configured to cause the computer system to” (see claim 16), nothing in the claim element precludes the step from practically being performed in the mind. The Examiner asserts that the claim(s) do not provide any details nor limit how the machine deep learning model including its output neural network layer operates or how the collection of values of parameters of one or more functions representing an entire polygonal shape of the entire assist feature or the image containing pixel values that collectively define an entire polygonal shape of the entire assist feature is obtained/determined and/or output, and the plain meaning of “outputting” encompasses mental evaluations, judgments and/or opinions, e.g., a user mentally deciding on and/or mentally visualizing an assist feature to add to a portion of a design layout. Under its broadest reasonable interpretation when read in light of the specification, the “obtaining”/”determining” and/or “outputting” encompasses mental observations, evaluations, judgments and/or opinions that are practically performed in the human mind. For example, but for the recitation of the aforementioned generic computer components, the claimed directly output[ting] a collection of values of parameters of one or more functions representing an entire polygonal shape of the entire assist feature or directly output[ting] an image containing pixel values that collectively define an entire polygonal shape of the entire assist feature encompasses a user observing and thinking about an image of design layout data of a portion of a design layout and performing an evaluation, judgment and opinion to mentally decide on (obtain/determine/output) and/or mentally visualize (obtain/determine/output) an assist feature to add to the portion of the design layout. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, with or without the use of a physical aid such as pen and paper, then it falls within the “Mental Processes” grouping of abstract ideas. See MPEP § 2106.04(a)(2)(III).
This judicial exception is not integrated into a practical application. In particular, the claims recite additional elements of: “obtaining design layout data of a portion of a design layout, the design layout data comprising values of a plurality of pixels for the portion of the design layout”, “by a hardware computer system,” “using a machine deep learning model comprising a plurality of neural network layers including an output neural network layer” and “a non-transitory computer readable medium having instructions therein, the instructions, upon execution by a computer system, configured to cause the computer system to”.
The limitation of “obtaining design layout data of a portion of a design layout, the design layout data comprising values of a plurality of pixels for the portion of the design layout” is mere pre-solution activity, data gathering, recited at a high level of generality, and thus is insignificant extra-solution activity. See MPEP § 2106.05(g). In addition, all uses of the recited judicial exception require such data gathering, and, as such, the limitation does not impose any meaningful limits on the claims. The limitation amounts to necessary data gathering. See MPEP § 2106.05.
Further, the limitations of “by a hardware computer system,” and “a non-transitory computer readable medium having instructions therein, the instructions, upon execution by a computer system, configured to cause the computer system to” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Furthermore, the claims as a whole merely describe how to generally “apply” the concept of obtaining/determining a characteristic of an assist feature for a portion of a design layout in a computer environment. Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea. See MPEP § 2106.05(f).
Additionally, the limitation of “using a machine deep learning model comprising a plurality of neural network layers including an output neural network layer” provides nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP § 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. Moreover, the machine deep learning model is used to generally apply the abstract idea without placing any limits on how the machine deep learning model functions. See MPEP 2106.05(f). Additionally, the recitation of “using a machine deep learning model comprising a plurality of neural network layers including an output neural network layer” merely indicates a field of use or technological environment in which the judicial exception is performed. Although the additional element of “a machine deep learning model comprising a plurality of neural network layers including an output neural network layer” limits the identified judicial exception “obtaining/determining a characteristic of an assist feature for a portion of a design layout”, this type of limitation merely confines the use of the abstract idea to a particular technological environment (machine learning) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h).
Even when viewed in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Accordingly, the claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of: “obtaining design layout data of a portion of a design layout, the design layout data comprising values of a plurality of pixels for the portion of the design layout”, “by a hardware computer system,” “using a machine deep learning model comprising a plurality of neural network layers including an output neural network layer” and “a non-transitory computer readable medium having instructions therein, the instructions, upon execution by a computer system, configured to cause the computer system to” do not add a meaningful limitation to the abstract idea because they merely perform insignificant pre-solution/extra-solution activity, mere data gathering, and/or amount to no more than mere instructions to apply the abstract idea using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept.
In addition, with regards to dependent claims 2 - 6, 8 - 15, 22 and 23, the Examiner asserts that claims 2 - 6, 8 - 15, 22 and 23 are also directed to the abstract idea of obtaining/determining a characteristic of an assist feature for a portion of a design layout and merely further limit the abstract idea claimed in independent claims 1 and 16, for example by further identifying the design layout data obtained, by further identifying how the obtained/determined characteristic of the assist feature is defined/represented, by identifying further data that is obtained/determined with respect to the characteristic of the assist feature, by identifying further data gathering and/or outputting recited at a high level of generality corresponding to insignificant extra-solution activity, and/or by identifying, at a high level of generality, how the machine deep learning model is obtained. However, the Examiner asserts that a more detailed abstract idea remains an abstract idea and that none of the limitations of dependent claims 2 - 6, 8 - 15, 22 and 23 considered as an ordered combination provide eligibility because taken as a whole the claims merely instruct the practitioner to apply the abstract idea using generic computer components. The claims are not eligible.
Claims 17 - 21 and 24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception, an abstract idea, without significantly more. The claims are directed towards determining a characteristic of assist features based on a portion of a design layout or a characteristic of the portion of the design layout.
The claims recite, at a high level of generality, determining a characteristic of assist features based on the portion (of a design layout) or a characteristic of the portion, training… a machine deep learning model using training data comprising a sample whose feature vector comprises the characteristic of the portion and whose label comprises the characteristic of the assist features, wherein the trained machine deep learning model comprises a plurality of neural network layers including an output neural network layer, and, based on at least a collective input of a plurality of pixels for a design layout part… and collective processing of all of the plurality of pixels…, directly output[ting] a collection of values of parameters of one or more functions representing an entire polygonal shape of an entire assist feature to be determined… or directly output[ting] an image containing pixel values that collectively define an entire polygonal shape of the entire assist feature to be determined.
The limitation of “determining a characteristic of assist features based on the portion (of a design layout) or a characteristic of the portion”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind using observation, evaluation, judgment, and opinion but for the recitation of generic computer components. That is, other than reciting “a non-transitory computer readable medium having instructions therein, the instructions, upon execution by a computer system, configured to cause the computer system to” (see claim 20), nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the recitation of the aforementioned generic computer components, the claimed determining a characteristic of assist features based on the portion (of a design layout) or a characteristic of the portion encompasses a user observing and thinking about an image of a portion of a design layout and performing an evaluation, judgment and opinion to mentally decide on (determine) and/or mentally visualize (determine) assist features to add to the portion of the design layout. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, with or without the use of a physical aid such as pen and paper, then it falls within the “Mental Processes” grouping of abstract ideas. See MPEP § 2106.04(a)(2)(III).
Relatedly, the limitation of “training… a machine deep learning model using training data comprising a sample whose feature vector comprises the characteristic of the portion and whose label comprises the characteristic of the assist features, wherein the trained machine deep learning model comprises a plurality of neural network layers including an output neural network layer”, as drafted, is a process that, under its broadest reasonable interpretation, encompasses mathematical concepts that can be performed mentally and thus falls within the “Mathematical Concepts” grouping of abstract ideas.
Further, the limitation of “based on at least a collective input of a plurality of pixels for a design layout part… and collective processing of all of the plurality of pixels…, directly output[ting] a collection of values of parameters of one or more functions representing an entire polygonal shape of an entire assist feature to be determined… or directly output[ting] an image containing pixel values that collectively define an entire polygonal shape of the entire assist feature to be determined”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind using observation, evaluation, judgment, and opinion but for the recitation of generic computer components. That is, other than reciting “by a hardware computer system,” (see claim 17), “the output neural network layer is configured to” and “a non-transitory computer readable medium having instructions therein, the instructions, upon execution by a computer system, configured to cause the computer system to” (see claim 20), nothing in the claim element precludes the step from practically being performed in the mind. The Examiner asserts that the claim(s) do not provide any details nor limit how the machine deep learning model including its output neural network layer operates or how the collection of values of parameters of one or more functions representing an entire polygonal shape of an entire assist feature to be determined or the image containing pixel values that collectively define an entire polygonal shape of the entire assist feature to be determined is determined and/or output, and the plain meaning of “outputting” encompasses mental evaluations, judgments and/or opinions, e.g., a user mentally deciding on and/or mentally visualizing an assist feature to add to a design layout part. Under its broadest reasonable interpretation when read in light of the specification, the “determining” and/or “outputting” encompasses mental observations, evaluations, judgments and/or opinions that are practically performed in the human mind. For example, but for the recitation of the aforementioned generic computer components, the claimed directly output[ting] a collection of values of parameters of one or more functions representing an entire polygonal shape of an entire assist feature to be determined or directly output[ting] an image containing pixel values that collectively define an entire polygonal shape of the entire assist feature to be determined encompasses a user observing and thinking about an image of a design layout part and performing an evaluation, judgment and opinion to mentally decide on (determine/output) and/or mentally visualize (determine/output) an assist feature to add to the design layout part. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, with or without the use of a physical aid such as pen and paper, then it falls within the “Mental Processes” grouping of abstract ideas. See MPEP § 2106.04(a)(2)(III).
The “determining…” and “… directly output[ting]…” limitations fall within the mental processes grouping of abstract ideas, and the “training…” limitation falls within the mathematical concepts grouping of abstract ideas. However, the Examiner notes that, where a claim recites multiple judicial exceptions, the Supreme Court has treated such claims in the same manner as claims reciting a single judicial exception. See, e.g., Bilski v. Kappos, 561 U.S. 593 (2010). Thus, the “determining…”, “training…” and “… directly output[ting]…” limitations are considered together as a single abstract idea for further analysis.
This judicial exception is not integrated into a practical application. In particular, the claims recite additional elements of: “obtaining a portion of a design layout”, “by a hardware computer system,” “a machine deep learning model comprising a plurality of neural network layers including an output neural network layer”, “collective input of a plurality of pixels for a design layout part” and “a non-transitory computer readable medium having instructions therein, the instructions, upon execution by a computer system, configured to cause the computer system to”.
The limitations of “obtaining a portion of a design layout” and “collective input of a plurality of pixels for a design layout part” are mere pre-solution activity, data gathering, recited at a high level of generality, and thus are insignificant extra-solution activity. See MPEP § 2106.05(g). In addition, all uses of the recited judicial exception require such data gathering, and, as such, the limitations do not impose any meaningful limits on the claims. The limitations amount to necessary data gathering. See MPEP § 2106.05.
Further, the limitations of “by a hardware computer system,” and “a non-transitory computer readable medium having instructions therein, the instructions, upon execution by a computer system, configured to cause the computer system to” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Furthermore, the claims as a whole merely describe how to generally “apply” the concept of determining a characteristic of assist features based on the portion of a design layout or a characteristic of the portion in a computer environment. Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea. See MPEP § 2106.05(f).
Additionally, the limitation of “a machine deep learning model comprising a plurality of neural network layers including an output neural network layer” provides nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP § 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. Moreover, the machine deep learning model is used to generally apply the abstract idea without placing any limits on how the machine deep learning model functions. See MPEP 2106.05(f). Additionally, the recitation of “a machine deep learning model comprising a plurality of neural network layers including an output neural network layer” merely indicates a field of use or technological environment in which the judicial exception is performed. Although the additional element of “a machine deep learning model comprising a plurality of neural network layers including an output neural network layer” limits the identified judicial exception “determining a characteristic of assist features based on the portion (of a design layout) or a characteristic of the portion”, this type of limitation merely confines the use of the abstract idea to a particular technological environment (machine learning) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h).
Even when viewed in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Accordingly, the claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of: “obtaining a portion of a design layout”, “by a hardware computer system,” “a machine deep learning model comprising a plurality of neural network layers including an output neural network layer”, “collective input of a plurality of pixels for a design layout part” and “a non-transitory computer readable medium having instructions therein, the instructions, upon execution by a computer system, configured to cause the computer system to” do not add a meaningful limitation to the abstract idea because they merely perform insignificant pre-solution/extra-solution activity, mere data gathering, and/or amount to no more than mere instructions to apply the abstract idea using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept.
In addition, with regards to dependent claims 18, 19, 21 and 24, the Examiner asserts that claims 18, 19, 21 and 24 are also directed to the abstract idea of determining a characteristic of assist features based on the portion of a design layout or a characteristic of the portion and merely further limit the abstract idea claimed in independent claims 17 and 20, for example by further identifying the design layout obtained, by further identifying how the determined characteristic of the assist features is defined/represented, by identifying further data that is obtained/determined with respect to the characteristic of the assist features, and/or by identifying further data gathering and/or outputting recited at a high level of generality corresponding to insignificant extra-solution activity. However, the Examiner asserts that a more detailed abstract idea remains an abstract idea and that none of the limitations of dependent claims 18, 19, 21 and 24 considered as an ordered combination provide eligibility because taken as a whole the claims merely instruct the practitioner to apply the abstract idea using generic computer components. The claims are not eligible.
Response to Arguments
Applicant’s arguments with respect to claims 1, 2, 4, 6, 7, 9, 11 - 13, 15, 16, 22 and 23 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant's arguments filed 27 May 2025 have been fully considered but they are not persuasive.
On pages 11 - 16 of the remarks the Applicant’s Representative argues that the cited portions of Xu et al., Jeong and Lin et al. do not disclose or teach the claimed features of claims 1, 17 and 20. In particular, the Applicant’s Representative argues that the “cited portions of Jeong do not appear to disclose or teach, for example, a output neural network layer of a machine deep learning model that, based on at least a collective input of all the design layout data (which includes values of a plurality of pixels for a portion of the design layout) to the machine deep learning model and collective processing of all of the design layout data by the machine deep learning model, directly outputs a collection of values of parameters of one or more functions representing an entire polygonal shape of the entire assist feature or directly outputs an image containing pixel values that collectively define an entire polygonal shape of the entire assist feature”. The Applicant’s Representative argues that Jeong does not disclose or teach the aforementioned disputed claim limitation(s) at least because “Jeong’s model takes as input data regarding a single particular pixel of the target mask and then outputs ‘a degree of overlap between the pixel and a mask polygon,’ i.e. a single value representing whether or not a pixel of a mask polygon might be located at that pixel” and because “at no time does Jeong's model take a collective input of values of a plurality of pixels for a portion of the design layout and collectively processes of [sic] all the values of the plurality of pixels for the portion of the design layout.” Furthermore, the Applicant’s Representative argues that a degree of overlap between the pixel and mask polygon “is not reasonably the same or similar as ‘an image containing pixel values that collectively define an entire polygon shape of the entire assist feature,’ as claimed.” Moreover, the Applicant’s Representative argues that “it is not apparent how Jeong's model could directly output an image containing pixel values that collectively define an entire polygonal shape of the entire assist feature when it only receives as input a feature vector of a particular pixel of a target mask” and is thus “missing information from all the other parts of the target mask.” Therefore, the Applicant’s Representative argues that the cited portions of Xu et al., Jeong and Lin et al. do not disclose or teach the aforementioned disputed claim limitation(s).
The Examiner respectfully disagrees.
Initially, in response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Furthermore, with regards to Jeong, the Examiner asserts that Jeong discloses an output neural network layer of a machine learning model that, based on at least input of all the design layout data (which includes values of a plurality of pixels for a portion of the design layout) to the machine learning model and processing of all of the design layout data by the machine learning model, directly outputs a collection of values of parameters of one or more functions representing an entire polygonal shape of the entire assist feature or directly outputs an image containing pixel values that collectively define an entire polygonal shape of the entire assist feature, see at least figures 4 - 13, page 2 paragraph 0031, page 3 paragraph 0039, page 4 paragraph 0049 - page 5 paragraph 0055, page 5 paragraphs 0058 - 0060, page 6 paragraphs 0062 - 0070 and page 7 paragraphs 0073 - 0075 and 0078 of Jeong wherein it is disclosed that “mask 140 may include mask patterns including mask polygons corresponding to a circuit pattern or a device pattern, for example, an interconnection pattern, a contact pattern or a gate pattern to print the circuit pattern or the device pattern on the wafer WF. Mask polygons may be, for example, bitmaps or polygon layers that identify specific areas upon which further processing is to be performed” [0031], that “FIG. 8 is a view illustrating an example of a mask optimization method performed on a target mask in the mask optimization method of FIG. 4 according to example embodiments” [0049], that, with reference “to FIG. 6, a sample mask 500 including a third mask polygon 520 having a shape corresponding to a contact pattern may be optimized into a trainer mask 550 including a fourth mask polygon 570 further including an assist feature or an assist pattern by the assist feature method. The mask optimization of the sample mask 400 and 500 as shown in FIGS. 5 and 6 may be exemplary. In some embodiments, the mask optimization on the sample mask may be performed by various methods” [0051], that the “pixel-based learning using the trainer mask may be performed using an arbitrary machine learning. For example, the pixel-based learning may be performed using a linear learning, a non-linear learning or a neural network learning” [0058], that “referring to FIG. 7, the pixel-based learning may be performed by a supervised learning method of the neural network learning. For example, when the feature vector ‘qk’ of each pixel of the trainer mask is inputted to a mask optimization estimation model 600, the mask optimization estimation model 600 may output an output value (e.g., an estimated grey scale value ‘f(qk,PS)’) corresponding to the feature vector ‘qk’. The output value may be compared to the degree of overlap (e.g., a grey scale value) between each pixel and the mask polygon of the trainer mask, which is a target value ‘grk’ of each pixel” [0059], that the “mask optimization estimation model 600 may output the degree of overlap between each pixel and the mask polygon of the optimized mask when the feature vector ‘qk’ of each pixel of the mask before the optimization thereof is inputted” [0062], that, with reference “to FIG. 8, a feature vector of each pixel of a target mask 700 including a fifth mask polygon 720 may be extracted… when the feature vector of each pixel of the target mask 700 is inputted to the mask optimization estimation model 600 of FIG. 7, the mask optimization estimation model 600 may output a degree of overlap of a sixth mask polygon 770 of the optimized target mask 750 with respect to each of the pixels. A presence or absence of the sixth mask polygon 770 at each of the pixels may be determined based on the degree of overlap outputted from the mask optimization estimation model 600. The optimized target mask 750 including the sixth mask polygon 770 may be generated when the pixels at which the presence or absence of the sixth mask polygon 770 is determined are combined” [0064], that “when the estimated grey scale value that is smaller than mask threshold value is outputted although [sic] it is desired to determine that a mask polygon exists for the pixel, and… when the estimated grey scale value that is greater than mask threshold value is outputted although [sic] it is desired to determine that the mask polygon does not exist for the pixel” [0074] and that in “operation S1070, a mask optimization for a target mask may be performed using the generated mask optimization estimation model and the mask threshold value. For example, a feature vector of each pixel of the target mask may be extracted, and the feature vector may be inputted to the mask optimization estimation model such that the estimated grey scale value may be obtained. The estimated grey scale value may be compared to the mask threshold value such that it is determined whether the mask polygon is present or not at each pixel of the target mask that is optimized. By combining the pixels at which the presence or absence of the mask polygon is determined, the optimized target mask may be generated using the mask optimization estimation model” [0075].
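For illustration only, and not as a representation of Jeong's actual implementation, the per-pixel flow quoted above may be sketched as follows; the names extract_feature_vector, estimation_model and mask_threshold are hypothetical placeholders, and the representation of the mask as a 2D array of pixel values is an assumption made solely for the sketch.

```python
import numpy as np

def optimize_target_mask(target_mask, estimation_model, extract_feature_vector, mask_threshold):
    """Hypothetical sketch of the pixel-based flow described in Jeong (all names are placeholders).

    target_mask: 2D array of pixel values for the mask before optimization.
    estimation_model: callable mapping a per-pixel feature vector to an estimated grey scale
        value (degree of overlap with the mask polygon of the optimized mask).
    """
    height, width = target_mask.shape
    grey_scale_map = np.zeros((height, width))  # estimated grey scale value per pixel
    for y in range(height):
        for x in range(width):
            q_k = extract_feature_vector(target_mask, y, x)   # feature vector of each pixel
            grey_scale_map[y, x] = estimation_model(q_k)      # estimated degree of overlap
    # Presence or absence of a mask polygon at each pixel is decided against the mask threshold
    # value, and the decided pixels are combined into the optimized target mask.
    optimized_mask = (grey_scale_map > mask_threshold).astype(np.uint8)
    return grey_scale_map, optimized_mask
```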
The Examiner asserts that, as shown herein above and in the cited portions, Jeong discloses that their mask optimization estimation model may be generated using machine learning, such as neural network learning, that a feature vector of each pixel of a target mask is input into their mask optimization estimation model in order to output a degree of overlap between each pixel and a mask polygon of an optimized mask as grey scale values, that the presence or absence of a mask polygon at each pixel is determined according to the output degrees of overlap and that the optimized target mask may be generated when the pixels at which the presence or absence of the mask polygon is determined are combined. The Examiner asserts that the estimated grey scale values for the pixels of the target mask output by the mask optimization estimation model of Jeong correspond to an image containing an entire shape of the entire assist feature at least because Jeong discloses that an optimized mask may include assist features, see for example figures 6 & 8, page 4 paragraphs 0049 - 0051 of Jeong, that masks may include mask polygons that may be, for example, bitmaps, see for example page 2 paragraph 0031 of Jeong, and that the pixels output by their mask optimization estimation model may be combined to generate the optimized target mask, see for example figures 6 & 8, page 6 paragraph 0064 and page 7 paragraph 0075 of Jeong, and because a continuous tone (e.g., grayscale) digital image is an image composed of pixels each having a value corresponding to a minimum value, a maximum value or one of a plurality of values between the minimum and maximum values, which one of ordinary skill in the art would easily understand and recognize is represented/generated by the estimated grey scale values of the pixels of the target mask output by the mask optimization estimation model of Jeong. Furthermore, the Examiner asserts that the mask optimization estimation model of Jeong is not missing information from parts of the target mask at least because Jeong discloses that a feature vector of each pixel of the target mask before optimization is input into their mask optimization estimation model. Moreover, the Examiner asserts that the estimated grey scale values, each representing a degree of overlap, for the pixels of a target mask before optimization directly output by the optimization estimation model of Jeong correspond to an image of an entire shape of an entire assist feature at least because figures 6 & 8 of Jeong, along with their corresponding descriptions, illustrate that sample mask 500 and target mask 700 are optimized by including additional mask polygons 570 and 770, assist features, in masks 500 and 700, respectively, because Jeong discloses that masks that are to be optimized may include mask polygons which may be, for example, bitmaps, see at least page 2 paragraph 0031 of Jeong, because Jeong discloses that masks are composed of a plurality of pixels in a two-dimensional array, see at least figures 6 - 12, page 3 paragraph 0039, page 5 paragraphs 0055 - 0057 and 0059 - 0060, page 6 paragraphs 0062 - 0069 and page 7 paragraphs 0073 - 0075 and 0078 of Jeong, and because one of ordinary skill in the art would easily understand and recognize that the estimated grey scale values for the pixels of a target mask output by the optimization estimation model of Jeong are representative of a continuous tone image.
Lastly, the Examiner further asserts that the estimated grey scale values for the pixels of the target mask of Jeong correspond to an image of an entire shape of an entire assist feature at least because, as shown herein above and in the cited portions, Jeong discloses that an optimized version of an input mask may include additional mask polygons, assist features, not present in the input mask before optimization and that the estimated grey scale values represent degrees of overlap, i.e., differences, between pixels of the input mask before optimization and pixels of the optimized version of the input mask, see at least figures 6 - 8, 10 & 12, page 5 paragraphs 0053 - 0055 and 0059 - 0060, page 6 paragraphs 0062 - 0064 and 0067 - 0068 and page 7 paragraphs 0073 - 0075 and 0078 of Jeong. Thus, since the estimated grey scale values of Jeong are representative of differences between the input mask before optimization and the optimized version of the input mask, and since the input mask before optimization and the optimized version of the input mask differ only by the additional mask polygons, assist features, included in the optimized version of the input mask, the Examiner asserts that the estimated grey scale values for the pixels of the target mask in Jeong correspond to an image of an entire shape of an entire assist feature.
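As a minimal illustration only of the difference relationship relied upon above (assuming, purely for the sketch, binary pixel arrays with illustrative values rather than Jeong's grey scale values), the pixels by which an optimized mask differs from the input mask are the added assist features:

```python
import numpy as np

# Hypothetical binary pixel arrays (1 = mask polygon present, 0 = absent); values are illustrative only.
input_mask = np.array([[0, 0, 0, 0],
                       [0, 1, 1, 0],
                       [0, 1, 1, 0],
                       [0, 0, 0, 0]], dtype=np.uint8)
optimized_mask = np.array([[1, 0, 0, 1],
                           [0, 1, 1, 0],
                           [0, 1, 1, 0],
                           [1, 0, 0, 1]], dtype=np.uint8)

# Pixels present in the optimized mask but absent from the input mask are the added
# mask polygons (assist features) under the reading set out above.
assist_feature_image = np.logical_and(optimized_mask == 1, input_mask == 0).astype(np.uint8)
```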
The Examiner notes that Jeong fails to explicitly disclose a machine deep learning model and an output layer of the machine deep learning model that directly outputs an image based on collective input and collective processing of all of the design layout data (which includes values of a plurality of pixels for a portion of the design layout).
However, the Examiner asserts that instant independent claims 1, 16, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. in view of Jeong in view of Lin et al. and that, at least, Xu et al. disclose an output layer of a machine learning model that, based on at least a collective input of all the design layout data (which includes values of a plurality of pixels for a portion of the design layout) to the machine learning model and collective processing of all of the design layout data by the machine learning model, directly outputs a collection of values of parameters of one or more functions representing an entire polygonal shape of the entire assist feature or directly outputs an image containing pixel values that collectively define an entire polygonal shape of the entire assist feature, see at least the abstract, page 162 left-hand column first-full paragraph - sixth-full paragraph, page 163 left-hand column “Definition 3” - page 164 section 4.1.2, page 163 figure 3, page 165 section 5.1 - section 5.2, page 165 figure 5, pages 166 - 167 section 6.2.1 - section 6.2.2, page 166 figure 7 and page 167 figures 8 and 9 of Xu et al. wherein it is disclosed that a “machine learning based framework is proposed for the SRAF generation”, that the “machine learning based SRAF generation framework works on a 2D grid plane with a specific grid size. The training data consist of a set of layout clips, where each layout clip includes a set of target patterns and model-based SRAFs”, that their “classification model is calibrated to predict the SRAF insertion at each grid of testing patterns”, that the “typical prediction with a binary classification model will be a label, i.e. 0 or 1, for each testing data. With the label prediction for each grid, clusters of grids will be labeled as 1, denoted as yellow grids, as shown in Fig. 5(a)”, that when “a classification model is calibrated, the probability of the label to be 1, denoted as p1, can be calculated for LGR and DTree as explained in Section 4.2. Then, a probability map on the 2D grid plane can be attained as shown in Fig. 5(b)” and that, as illustrated in figure 7, they “compare the SRAFs generated using different machine learning (ML) predictions, i.e. label predictions and predictions with probability maxima, followed by the SRAF simplification phase.” The Examiner asserts that the machine learning predictions generated by the machine learning model of Xu et al. correspond to images containing pixel values that collectively define an entire polygonal shape of the entire assist feature. Thus, the Examiner asserts that, at least, Xu et al. disclose collective input and collective processing of all of the design layout data by a machine learning model to output an image containing pixel values that collectively define an entire polygonal shape of the entire assist feature.
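For illustration only (hypothetical names; not code from Xu et al.), the grid-based prediction described above, in which a calibrated classifier yields a binary label or a probability of SRAF insertion at each grid of the 2D grid plane, may be sketched as follows:

```python
import numpy as np

def predict_sraf_grid(grid_features, classifier, probability_threshold=0.5):
    """Hypothetical sketch of grid-based SRAF prediction (names are placeholders).

    grid_features: dict mapping (row, col) grid locations to feature vectors extracted
        from a layout clip.
    classifier: object whose predict_probability(features) method returns p1, the
        probability that the SRAF insertion label is 1 at that grid location.
    """
    rows = 1 + max(r for r, _ in grid_features)
    cols = 1 + max(c for _, c in grid_features)
    probability_map = np.zeros((rows, cols))
    for (r, c), features in grid_features.items():
        probability_map[r, c] = classifier.predict_probability(features)
    # Binary label prediction per grid; clusters of grids labeled 1 mark candidate SRAF
    # insertion sites, which a subsequent SRAF simplification phase would convert into
    # rectangular SRAF shapes.
    label_map = (probability_map >= probability_threshold).astype(np.uint8)
    return probability_map, label_map
```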
In addition, the Examiner asserts that Lin et al. disclose a machine deep learning model and an output neural network layer of the machine deep learning model that, based on at least a collective input of all input data to the machine deep learning model and collective processing of all of the input data by the machine deep learning model, directly outputs an image containing pixel values that collectively define an entire polygonal shape of an entire feature, see at least the abstract, figure 7, page 1 paragraphs 0002 and 0011 - 0012, page 4 paragraph 0044, page 4 paragraph 0049 - page 5 paragraph 0052 and page 8 paragraph 0085 - page 9 paragraph 0090 of Lin et al. wherein they disclose utilizing “deep learning techniques to segment images to select or delineate objects portrayed within digital images” and that a “deconvolution neural network 700 can receive as input a cropped portion of an input image. Upon receiving the digital image 702, the deconvolution neural network 700 processes the digital image 702 through a series of applied layers to generate an output map 704 (e.g., a probability map or a boundary map), as shown in FIG. 7.” Thus, the Examiner asserts that, at least, Lin et al. disclose collective input and collective processing of all image input data by a machine deep learning model to directly output an image containing pixel values that collectively define an entire polygonal shape of an entire feature.
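For illustration only, a convolution/deconvolution network of the kind described above, which takes a whole image as collective input and directly outputs a per-pixel map, may be sketched as follows; the use of PyTorch, the layer sizes and the names are assumptions made solely for the sketch and are not drawn from Lin et al.

```python
import torch
import torch.nn as nn

class MinimalDeconvNet(nn.Module):
    """Illustrative encoder-decoder sketch: whole-image input, per-pixel map output."""
    def __init__(self):
        super().__init__()
        # Convolution (encoder) layers process the entire input image collectively.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Deconvolution (decoder) layers upsample back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, image):
        # The output layer directly yields one value per pixel (e.g., a probability map).
        return torch.sigmoid(self.decoder(self.encoder(image)))

# Example: a 1-channel 64x64 input image yields a 64x64 per-pixel output map.
probability_map = MinimalDeconvNet()(torch.rand(1, 1, 64, 64))
```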
Therefore, the Examiner asserts that Xu et al. in view of Jeong in view of Lin et al. disclose the aforementioned disputed claim limitation(s).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 2, 4 - 12 and 15 - 24 are rejected under 35 U.S.C. 103 as being unpatentable over Xiaoqing Xu, Tetsuaki Matsunawa, Shigeki Nojima, Chikaaki Kodama, Toshiya Kotani, David Z. Pan, “A Machine Learning Based Framework for Sub-Resolution Assist Feature Generation”, ACM, International Symposium on Physical Design, Apr. 2016, pages 161 - 168, herein referred to as “Xu et al.”, in view of Jeong (U.S. Publication No. 2018/0095359 A1) in view of Lin et al. (U.S. Publication No. 2017/0287137 A1).
- With regards to claim 1, Xu et al. disclose a me