Prosecution Insights
Last updated: April 19, 2026
Application No. 18/486,758

METHODS AND SYSTEMS FOR COMPRESSING SHAPE DATA FOR ELECTRONIC DESIGNS

Final Rejection (§103, §DP)
Filed: Oct 13, 2023
Examiner: ROSARIO, DENNIS
Art Unit: 2676
Tech Center: 2600 (Communications)
Assignee: Center For Deep Learning In Electronics Manufacturing Inc.
OA Round: 2 (Final)

Grant Probability: 69% (Favorable)
OA Rounds: 3-4
To Grant: 3y 8m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 69% (above average; 385 granted / 557 resolved; +7.1% vs TC avg)
Interview Lift: +28.6% (strong), comparing resolved cases with an interview vs. without
Typical Timeline: 3y 8m avg prosecution; 34 applications currently pending
Career History: 591 total applications across all art units

Statute-Specific Performance

§101: 16.5% (-23.5% vs TC avg)
§103: 40.3% (+0.3% vs TC avg)
§102: 24.6% (-15.4% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)
Black line = Tech Center average estimate. Based on career data from 557 resolved cases.

Office Action

§103 §DP
DETAILED ACTION

Claims 1, 14, 15, 17, 19, 20, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over CAO et al. (WO 2021/052712 A1, cited in an IDS; priority data: 62/900,887, filed 16 September 2019, US) in view of Yang et al. (DeePattern: Layout Pattern Generation with Transforming Convolutional Auto-Encoder).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over CAO in view of Yang, as applied in the rejection of claims 1, 8, 14, 15, 17, 19, 20, and 22, further in view of Ho et al. (US 2020/0175216 A1).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over CAO in view of Yang further in view of Ho, as applied in the rejection of claim 2, further in view of ORTIZ et al. (US 2020/0036528 A1).

Claims 4, 5, 6, 7, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over CAO in view of Yang, as applied in the rejection of claims 1, 8, 14, 15, 17, 19, 20, and 22, further in view of FUCHS et al. (WO 2019/183584 A1).

Claims 9 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over CAO in view of Yang, as applied above, further in view of Georgescu et al. (US 2022/0076411 A1; related U.S. application data: provisional application No. 62/854,130, filed on May 29, 2019).

Claims 11 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over CAO in view of Yang further in view of Georgescu, as applied in claim 9, further in view of Goshen (EP 3 576 050 A1).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over CAO in view of Yang further in view of Georgescu further in view of Goshen, as applied in claims 11 and 13, further in view of LV et al. (CN 110727247 A, with machine translation).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over CAO in view of Yang, as applied above, further in view of ABDELFATTAH et al. (US 2021/0081763 A1; foreign application priority data: Sep 16, 2019 (GB)…..1913353.7).

Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over CAO in view of Yang, as applied above, further in view of Zhang et al. (US 2018/0218492 A1).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over CAO in view of Yang further in view of Georgescu, as applied in claims 9 and 21, further in view of Abrams et al. (US 2007/0184357 A1).

Response to Amendment

The amendment was received 1/8/2026. Claim 8 is cancelled; claims 1-7 and 9-23 are pending.

Priority

[Priority-claim images omitted.]

Response to Arguments

II. Double Patenting Rejections

Applicant's arguments, see remarks, page 6, filed 1/8/2026, with respect to double patenting have been fully considered and are persuasive. The double patenting rejections of claims 1-22 and 23 have been withdrawn.

IV. Rejections under 35 USC 103

Applicant's arguments filed 1/8/2026 have been fully considered but they are not persuasive. Applicants state on page 8 of the remarks that CAO (WO 2021/052712 A1) does not teach reproducing the received set of shape data. The examiner respectfully disagrees: CAO teaches a set of shape images (CTM, REFM1/EFM) being reproduced [0077][0106][0076]:

[0077] In an embodiment, a machine learning model may be trained using a direct supervised learning.
For example, the direct supervised learning flow uses a single neural network that is trained on a set of CTM images and their corresponding reference characteristic pattern images or EFM images that have been generated using the best existing method, e.g., software implementing design rules. Once trained, a CTM image can be inserted as the input, and the trained machine learning model generates an EFM image.

[0106] The output 606 of the first encoder model 605 is sent to the first decoder model 610 as an input. The first decoder model 610 is configured to generate a characteristic pattern EFM4 as an output. In other words, the first decoder model tries to reconstruct the original reference characteristic pattern (e.g., REFM1). The cost function for the first stage of the training includes a cost function which can be a function of a difference between the inputted reference characteristic pattern (e.g., REFM1) and the reconstructed EFM (e.g., EFM4). During the training process, model parameters of each of the first encoder model 605 and the first decoder model 610 are adjusted such that the cost function (e.g., difference between REFM1 and EFM4) is reduced. In an embodiment, the cost function is minimized. Thereby, the trained decoder model 610' will ensure a close match between the reference characteristic pattern and the generated characteristic pattern (e.g., EFM4). In other words, the trained decoder model 610' ensures that for an input vector (e.g., 1D vector), it generates a characteristic pattern that satisfies the design rules as well as meets the sharpness threshold of the features therein. In an embodiment, the decoder model (or pattern library) can be trained using a variational autoencoder, where the encoder outputs a 1D vector related to the CTM, as well as a statistical vector. In an embodiment, the training involves minimizing a statistical metric of the statistical vector as well. For example, the statistical metric is the Kullback-Leibler (KL) divergence, a measure of how far the distributions are from a unit Gaussian distribution. In an embodiment, minimizing the KL divergence makes the distributions closer to a unit Gaussian distribution.

[0076] The present methods can be implemented in several different computation or training flows. Each of these flows takes, as an input, a continuous transmission mask (CTM) or target mask image (MI). In the case of the CTM as input, the CTM may already have been optimized to print the desired pattern. The output for each method is a characteristic pattern (also referred to as an extraction friendly map (EFM)). In an embodiment, the characteristic pattern or EFM may be an image composed exclusively of rectangles that represents an optimized mask design.

Applicants state that CAO does not teach compressing the set of shape data which comprises mask designs to reproduce the received set of shape data. The examiner respectfully disagrees, since CAO teaches: compressing (via fig. 6B:615: "ENCODER 2") the (corresponding, reproduced) set (CTM6 via fig. 6B and EFM4 or REFM1 via fig. 6A) of shape (pattern) data which comprises mask designs (via said EFM being an "optimized mask design", [0076] last S) to reproduce (via fig. 6B:610': "Decoder2") the received ("EFM4" [0106], 7th S) set (as a "close match", [0106], 7th S, to REFM1: "reference characteristic pattern", [0106], 7th S, of the matching set {EFM4, REFM1}) of shape (i.e., pattern) data.

Applicants state that Yang (DeePattern: Layout Pattern Generation with Transforming Convolutional Auto-Encoder) does not teach reproducing the received set of shape data. The examiner respectfully disagrees, since Yang teaches: reproducing the received set (resulting in "the reconstructed topology set", 3.2.2 TCAE-Combine, last S, or a produced-again topology set: fig. 6(b): a set {T1,T2} of four reconstructed topologies of the original set {T1,T2} of fig. 6(a)) of ("on-track", 2 PRELIMINARIES, 1st para, 2nd S) shape data.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 14, 15, 17, 19, 20, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over CAO et al. (WO 2021/052712 A1, cited in an IDS; priority data: 62/900,887, filed 16 September 2019, US) in view of Yang et al. (DeePattern: Layout Pattern Generation with Transforming Convolutional Auto-Encoder).

Re 1.
(Currently Amended), CAO teaches, via 62/900,887, a system for compression of shape data for a set ("input data" [0168]) of electronic (pattern-"mask" [0070]) designs, the system comprising: a computer processor ("of a computer system" [0169]) configured to receive a set (input data) of shape data ("by a designer" [0007] 2nd S), wherein the (input data) set of (designer) shape data comprises (pattern-)mask (CTM) designs (via "the most accurate mask design method for generating assist features such as sub-resolution assist feature (SRAF) is a continuous transmission map (CTM) method" [0067]); and a computer (system) processor configured to encode the (input data) set of (designer) shape data, wherein the encoding (or "machine learning model" [0078] 1st S: fig. 4:405: "Generator") compresses the set of shape data to produce (or generate) a ("corresponding" [0078] 1st S) set of encoded shape data; wherein the convolutional (machine-learning) autoencoder (405) comprises a set (input data) of adjusted parameters ("of the encoder model" [0011] last S) comprising weights ("and biases" [0084] 5th S), wherein the (input data) set of (encoder) adjusted parameters has been tuned ("of the generator model" [0087] 2nd S) for increased accuracy of the (input data) set of (machine-learning) encoded (designer) shape data using the weights (and biases), wherein the (neural-net) weights (and biases) are used to determine (via a cost/loss function) what (input training data set) information to keep (as having "two parts", [0082] 2nd S, for use) based on design rules ("in order to create functional design layouts/patterning devices" [0043] 3rd S) for the (input training data) set of electronic (pattern-mask) designs (of a designer); and a computer processor ("of a computer system" [0170]: fig. 12:104,105) configured to decode (via "procedures of the methods discussed above can be implemented on one or more processors of a computer system" [0170]: fig. 6A:610: "Decoder1"; fig. 6B:610': "Trained Decoder2") the set (or "a combination of CTM and a reference characteristic pattern as input data set for training a machine learning model (e.g., 405) can be a separate embodiment." [0168] 2nd S: fig. 6A: "REFM1/input" input data set; fig. 6B: "CTM6" input data set) of encoded shape data into decoded shape data (i.e., the output of each decoder 610, 610') using the convolutional ("neural network" [0148]) autoencoder (via "The training in Figures 6A and 6B, changes the first stage of the GAN to an autoencoder." [0104] last S), wherein the decoded ("CTM shapes" [0075] 2nd to last S: fig. 6B: "CTM6" shapes) shape data reproduces ("to reconstruct the original reference characteristic pattern (e.g., REFM1)" [0106] 3rd S) the received set of shape data within a pre-determined ("sharpness" [0104] 1st S) threshold.

CAO does not teach the difference of claim 1 of: (information to keep) based on (design rules). Yang teaches "keep" (i.e., "unchanged", 4th pg, lcol, 2nd, 3rd para, last S) and "based on" (design rules) (i.e., given a set of legal, normal-quality patterns as a basis for calculation via "Given a set of layout design rules", 2nd page, rcol, Problem 1). Since CAO teaches an autoencoder, one of skill in autoencoders can make CAO's be as Yang's, predictably recognizing the change to "effectively generate diverse pattern libraries with DRC-clean patterns compared to a state-of-the-art industrial layout pattern generator.", Yang, abstract, last S.

14. The system of claim 1, wherein the set of shape data further comprises simulated mask designs (via CAO: "mask layout and design" "simulation" [0053]).

15.
The system of claim 1, wherein the design rules comprise a minimum line width ("or hole", CAO: [0043] 6th S) or a minimum line-to-line spacing.

Claim 17 is rejected like claim 1. Re 17. (Original), CAO of the combination of CAO and Yang teaches a system for training a convolutional autoencoder for compression of shape data for a set of electronic designs, the system comprising: a computer processor configured to receive a (data-)set of shape data, wherein the set of shape data comprises mask designs; a computer processor configured to receive a set of (weight) parameters including a set of convolution layers for the convolutional autoencoder, wherein the set of parameters is determined using design rules for the set of electronic designs, and wherein the set of parameters comprises weights; a computer processor configured to encode the set of (designer) shape data to compress the set of shape data to produce a (corresponding vector) set of encoded shape data; and a computer processor configured to adjust the set of (weight) parameters, wherein the adjusted set of (weight) parameters comprises the set of (generator) parameters tuned for increased accuracy of the set of encoded shape data, and wherein the adjusted set of (weight and bias) parameters comprises adjusted weights to retain (via a loss/cost function having two parts) important information (i.e., a prominent "feature" "input" [0126] 2nd S) needed (via adjusting important features to that which is needed "based on these heuristic rules" [0075] 1st S), based on the design rules (i.e., normal design features, squares and blocks, as input to the autoencoder) for the (data-)set of electronic (mask) designs, to reproduce (or generate) the received set of (designer) shape (input) data.

19. The system of claim 17, further comprising a computer processor configured to decode (via said squares-and-blocks autoencoder) the set of encoded shape data into decoded data, using the convolutional autoencoder.

20.
The system of claim 19, further comprising a computer processor configured to calculate a loss (fig. 5B, 6A, 6B: "CF": Cost Function: [0132]: "loss function") by comparing the decoded data with the received set of shape data.

Claim 22 is rejected like claim 15: 22. The system of claim 17, wherein the design rules comprise a minimum line width ("or hole", CAO: [0043] 6th S) or a minimum line-to-line spacing.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over CAO in view of Yang, as applied in the rejection of claims 1, 8, 14, 15, 17, 19, 20, and 22, further in view of Ho et al. (US 2020/0175216 A1).

Re 2., CAO of the combination of CAO and Yang teaches the system of claim 1, wherein the encoding using the convolutional autoencoder comprises a flattening step followed by an (identity-matrix) embedding step, the (identity-matrix) embedding step involving a fully-connected embedding layer which outputs a one-dimensional ("1D", CAO: pg. 3, last S) vector. CAO of the combination of CAO and Yang does not teach: a flattening step followed by… a fully-connected embedding layer. Ho teaches: a flattening ("placed chip data" [0050]) step followed by (processing "the partially placed chip data using a feed-forward neural network, e.g., a multi-layer perceptron (MLP), to generate the partially placed chip embedding" [0051])… a fully-connected embedding layer (comprised by said feed-forward neural network, a "fully connected", [0047] last S, MLP).
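For orientation, the claimed "flattening step followed by a fully-connected embedding layer which outputs a one-dimensional vector" (claim 2), with the 256-element latent size of claim 3, can be sketched as below. This is an illustrative sketch only, not code from CAO, Yang, Ho, or ORTIZ; the feature-map shape and the random weights are assumptions.

```python
# Illustrative sketch (assumptions, not from the cited references):
# flatten a convolutional feature map, then apply a fully-connected
# (dense) layer to produce a 1D embedding vector of 256 elements.
import numpy as np

rng = np.random.default_rng(0)

def embed(feature_map: np.ndarray, latent_dim: int = 256) -> np.ndarray:
    """Flattening step followed by a fully-connected embedding layer."""
    flat = feature_map.reshape(-1)                           # flattening step
    w = rng.standard_normal((latent_dim, flat.size)) * 0.01  # dense weights
    b = np.zeros(latent_dim)                                 # dense biases
    return w @ flat + b                                      # 1D vector

fmap = rng.standard_normal((8, 8, 64))  # assumed final encoder feature map
z = embed(fmap)
print(z.shape)  # (256,)
```

The 256-element size here simply mirrors claim 3; nothing in the sketch depends on it.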
Since CAO teaches a layout, one of skill in layouts can make CAO's be as Ho's, predictably recognizing the change to "exhibit improved performance, e.g., have one or more of lower power consumption, lower latency, or smaller surface area, than one designed using a conventional design process, and/or be producible using fewer resources.", Ho [0040].

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over CAO in view of Yang further in view of Ho, as applied in the rejection of claim 2, further in view of ORTIZ et al. (US 2020/0036528 A1).

CAO of the combination of CAO, Yang, and Ho teaches: 3. The system of claim 2, wherein the one-dimensional (1D) vector comprises 256 elements. CAO of the combination of CAO, Yang, and Ho does not teach "256 elements". ORTIZ teaches "256 elements" (or "256 floating point numbers" [0171] last S). Since CAO of the combination of CAO, Yang, and Ho teaches a vector, one of skill in the art of vectors can make CAO of the combination of CAO, Yang, and Ho be as ORTIZ's, predictably recognizing the change being "meaningful", ORTIZ [0171], last S.

Claims 4, 5, 6, 7, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over CAO in view of Yang, as applied in the rejection of claims 1, 8, 14, 15, 17, 19, 20, and 22, further in view of FUCHS et al.
(WO 2019/183584 A1).

Re 4., CAO of the combination of CAO and Yang teaches the system of claim 1, wherein the convolutional autoencoder (405) comprises a pre-determined set of convolution layers, including a kernel size and a filter size for each convolution layer in the pre-determined set of convolution layers. CAO of the combination of CAO and Yang does not teach "a pre-determined set of convolution layers, including a kernel size and a filter size for each convolution layer in the pre-determined set of convolution layers". FUCHS teaches: a pre-determined set of convolution ("transform" [0006] 2nd S) layers, including a kernel size ("ranging from 1 to 64" [0090] 1st and 2nd Ss) and a filter size ("ranging from 1 to 64" [0090] 1st and 2nd Ss) for each convolution layer in the pre-determined set of convolution layers. Since CAO of the combination of CAO and Yang teaches a convolutional neural network (CNN), one of skill in the art of CNNs can make CAO's of the combination of CAO and Yang be as FUCHS', predictably recognizing the change "to improve performance of the reconstruction", FUCHS [0008].

Re 5. The system of claim 4, wherein the pre-determined set of convolution layers comprises (as shown in FUCHS' fig. 10B): a first convolution layer using a first 5x5 kernel; a second convolution layer following the first convolution layer and using a second 5x5 kernel; a third convolution layer following the second convolution layer and using a first 3x3 kernel; and a fourth convolution layer following the third convolution layer and using a second 3x3 kernel.

Re 6. The system of claim 5, wherein (as shown in FUCHS' fig. 10B): the first, second, third and fourth convolutional layers use filter sizes of 32, 64, 128 and 256, respectively.

Re 7. The system of claim 5, wherein (as shown in FUCHS' fig. 10B): a stride of 2 is used in each of the four convolution layers.
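The four-layer encoder stack recited in claims 5-7 (5x5, 5x5, 3x3, 3x3 kernels; filter sizes 32, 64, 128, 256; stride 2 throughout) can be checked shape-wise with a short sketch. The 256x256 input size and the "same" padding convention are assumptions for illustration, not stated in the claims or references.

```python
# Sketch of the claims 5-7 encoder: four stride-2 convolutions with
# (kernel, filters) = (5,32), (5,64), (3,128), (3,256). Only output
# shapes are computed; 256x256 input and 'same' padding are assumptions.
import math

LAYERS = [(5, 32), (5, 64), (3, 128), (3, 256)]  # (kernel size, filter count)
STRIDE = 2

def encoder_shapes(h: int, w: int):
    """Return the (height, width, channels) output shape of each layer."""
    shapes = []
    for _kernel, filters in LAYERS:
        # with 'same' padding, stride 2 halves each spatial dimension
        h, w = math.ceil(h / STRIDE), math.ceil(w / STRIDE)
        shapes.append((h, w, filters))
    return shapes

for i, shape in enumerate(encoder_shapes(256, 256), 1):
    print(f"conv{i}: {shape}")
# the fourth layer yields a (16, 16, 256) feature map for a 256x256 input
```

Each halving step follows from the stride of 2 in Re 7; the channel counts follow from the filter sizes in Re 6.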
Claim 18 is rejected like claims 4 and 7: 18. The system of claim 17, wherein the set of parameters comprises at least one of: a kernel size, a stride value and a filter size for each convolution layer; see Re 4. (a pre-determined set of convolution layers, including a kernel size and a filter size for each convolution layer) and Re 7. (a stride of 2 is used in each of the four convolution layers), above.

Claims 9 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over CAO in view of Yang, as applied in the rejection of claims 1, 8, 14, 15, 17, 19, 20, and 22, further in view of Georgescu et al. (US 2022/0076411 A1; related U.S. application data: provisional application No. 62/854,130, filed on May 29, 2019).

Re 9., Yang of the combination of CAO and Yang teaches the system of claim 1, wherein: the (corresponding) set of (designer) shape data comprises a grid ("aligned at shape edges, as shown in Figure 3", 2nd page, 3.1 Squish Pattern Extraction, 1st para, 1st S) of tiles (comprising "pixel intensities", 3rd page, 3.2.1 Transforming Convolutional Auto-Encoder (TCAE), 1st para, last S) decomposed (or cut) from a larger ("scan line-based", 2nd page, 3.1 Squish Pattern Extraction, 1st para, 1st S) image; and the encoding comprises encoding the (cut) grid of tiles on a tile-by-tile basis, forming an encoded (cut) grid of (intensity) tiles.
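The tile-by-tile encoding of claims 9/21 (a larger layout image decomposed into a grid of tiles, each tile encoded independently) can be sketched as below. This is an illustrative sketch only; the tile size and the stand-in "encoder" (a per-tile mean) are assumptions, not the claimed autoencoder.

```python
# Sketch (assumptions, not from the cited references): decompose an
# image into a grid of tiles and encode it tile by tile, producing an
# encoded grid. A per-tile mean stands in for the real encoder.
import numpy as np

def encode_tilewise(image: np.ndarray, tile: int) -> np.ndarray:
    """Split image into tile x tile blocks and encode each block."""
    rows, cols = image.shape[0] // tile, image.shape[1] // tile
    grid = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = image[r*tile:(r+1)*tile, c*tile:(c+1)*tile]
            grid[r, c] = block.mean()  # stand-in for the real encoder
    return grid

img = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 "layout image"
enc = encode_tilewise(img, tile=4)
print(enc.shape)  # (2, 2): one encoded value per tile
```

In the claimed system each grid cell would hold the tile's encoded shape data (e.g., a latent vector) rather than a single scalar.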
Yang of the combination of CAO and Yang does not teach "tile-by-tile". Georgescu teaches "tile-by-tile" (i.e., "…broken up into a 2D grid of tiles" [00150] 1st S). Since Yang of the combination of CAO and Yang teaches a grid, one of skill in the art of grids can make Yang's be as Georgescu's, predictably recognizing the change "reduces the memory requirements of the visualization application to manageable levels" (Georgescu [00150] 2nd S).

Claim 21 is rejected like claim 9: 21. The system of claim 19, wherein: the set of shape data comprises a grid of tiles decomposed from a larger image; and the encoding and the decoding comprise encoding and decoding the grid of tiles on a tile-by-tile basis; see Re 9., above.

Claims 11 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over CAO in view of Yang further in view of Georgescu et al. (US 2022/0076411 A1) with related U.S. application data: provisional application No.
62/854,130, filed on May 29, 2019, as applied in claim 9, further in view of Goshen (EP 3 576 050 A1).

Re 11., Georgescu of the combination of CAO, Yang, and Georgescu teaches, via 62/854,130, the system of claim 9, further comprising: a computer processor configured to determine an error value for a (broken) tile in the encoded (cut) grid of tiles; and a computer processor configured to output a (broken) tile in the (cut) grid of (broken) tiles instead of the (broken) tile in the encoded (cut) grid of (broken) tiles when the error value of the (broken) tile in the encoded (cut) grid of tiles is greater than a pre-determined ("adaptive" [00117]) threshold. Georgescu of the combination of CAO, Yang, and Georgescu does not teach: "an error value… instead… the error value… is greater than a pre-determined". Goshen teaches: an error value (i.e., "error threshold" c.24, rcol, ll. 55-57)… instead (via the decision diamonds in fig. 6:S620,S630 that can go this way or instead that way)… the error (threshold) value… ("If") is greater than a pre-determined ("the optimization enters a new iteration cycle, and so on" c.25, 2nd S). Since Georgescu of the combination of CAO, Yang, and Georgescu teaches training, one of skill in the art of training can make Georgescu's of the combination of CAO, Yang, and Georgescu be as Goshen's, predictably recognizing the change "achieving high throughput and quick training of the MLC" (Machine Learning Component), Goshen, c.28 [0160] last S.

Re claim 13., CAO of the combination of CAO, Yang, Georgescu, and Goshen teaches the system of claim 11, wherein the (threshold) error value is based on a difference (of said error) in dose ("control" [0054] 8th S) energy (i.e., gray radiation) to (integrated-circuit) manufacture the (corresponding) set of (designer) shape (input) data on a ("a matrix-addressable" [0047] 1st bullet, 1st S) surface, wherein the difference (of said error) in dose (control) energy is based on the design rules (i.e., normal quality patterns).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over CAO in view of Yang further in view of Georgescu further in view of Goshen, as applied in claims 11 and 13, further in view of LV et al. (CN 110727247 A) with machine translation.

Re claim 12., Goshen of the combination of CAO, Yang, Georgescu, and Goshen teaches the system of claim 11, wherein the (threshold) error value is based on a distance criterion to manufacture the (corresponding) set of (designer) shape data on a surface, wherein the distance criterion is based on the design rules (i.e., normal quality patterns). Goshen does not teach: a distance criterion to manufacture… a surface… the distance criterion. LV teaches a distance criterion to manufacture (pg. 20, last txt blk)… a surface (pg. 31, penult txt blk)… the distance criterion. Since CAO teaches manufacturing integrated circuits, one of skill in the art of manufacturing can make CAO's of the combination of CAO, Yang, Georgescu, and Goshen be as LV's, predictably recognizing the change resulting in "improved" "defect" "factory" "feedback", LV, pg. 35, 2nd txt blk, correcting/adjusting for defects.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over CAO et al.
(WO 2021/052712 A1) in view of Yang et al. (DeePattern: Layout Pattern Generation with Transforming Convolutional Auto-Encoder), as applied in the rejection of claims 1, 8, 14, 15, 17, 19, 20, and 22, further in view of ABDELFATTAH et al. (US 2021/0081763 A1) with foreign application priority data: Sep 16, 2019 (GB)…..1913353.7.

Re 16., CAO of the combination of CAO and Yang teaches the system of claim 1, wherein the (corresponding) set of adjusted (weight and bias) parameters is tuned for increased accuracy in a tradeoff of compression ratio and accuracy gain. CAO of the combination of CAO and Yang does not teach "a tradeoff of compression ratio and accuracy gain". Abdelfattah teaches, via foreign application priority data Sep 16, 2019 (GB)…..1913353.7: a tradeoff of compression ratio ("vs. compression speed", pg. 4 [012], last S) and accuracy gain (or "accuracy" "gains", pg. 26 [126]). Since CAO of the combination of CAO and Yang teaches compression, one of skill in the art of compression can make CAO's of the combination of CAO and Yang be as Abdelfattah's, predictably recognizing the change gaining in accuracy and efficiency.

Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over CAO in view of Yang, as applied in the rejection of claims 1, 8, 14, 15, 17, 19, 20, and 22, further in view of Zhang et al. (US 2018/0218492 A1).

Re 23., CAO of the combination of CAO and Yang teaches the system of claim 17, wherein the (corresponding) set of (designer) shape (input) data further comprises simulated scanning electron microscope (SEM) images (via "simulate or mathematically model any generic imaging system" [0199]).
CAO of the combination of CAO, Yang does not teach “simulated scanning electron microscope (SEM) images”. Zhang teaches “simulated scanning electron microscope (SEM) images” (“represent edges of the patterns on wafer” [0063] penult S). Since CAO of the combination of CAO, Yang teaches simulating, one of skill in the art of simulation can make CAO’s of the combination of CAO, Yang be as Zhang’s, predictably recognizing the change “to further improve the quality in any stage of generation”, Zhang [0024] last S, of integrated circuit reference images.

Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over CAO et al. (WO 2021/052712 A1) with Priority Data: 62/900,887, 16 September 2019 (16.09.2019), US, in view of Yang et al. (DeePattern: Layout Pattern Generation with Transforming Convolutional Auto-Encoder) as applied in the rejection of claims 1, 8, 14, 15 and 17, 19, 20, 22, further in view of Georgescu et al. (US 2022/0076411 A1) with Related U.S. Application Data: Provisional application No. 62/854,130, filed on May 29, 2019, as applied in claims 9, 21, further in view of Abrams et al. (US 2007/0184357 A1): [image: media_image24.png]

Re claim 10, Georgescu of the combination of CAO, Yang, Georgescu teaches The system of claim 9, wherein each (patch-)tile in the (cut) grid of (patch-)tiles comprises a halo to reduce (“edge” [0198]) artifacts at a boundary of the (patch-)tile, the halo being a region of neighboring (“2x2” [0036]) pixels surrounding the (patch-)tile, the halo having a size chosen (or “size” “layer” [0034] 2nd S) based on at least[24] one of a number of convolution layers (“of larger dimensions” [0034] 2nd S) of the convolutional autoencoder (fig. 8: S23: “Reproduce tiles using autoencoder”) and[25] a kernel size of the convolution layers of the convolutional autoencoder. Georgescu of the combination of CAO, Yang, Georgescu does not teach “a halo…the halo…surrounding…the halo having a size chosen”.
Abrams teaches a halo (“region” [0168] 1st S: fig. 12: halo blocks)…the halo (region)…surrounding (via said halo region)…the halo having a size chosen (via “selected…halo…size” [0168] 2nd to last S: fig. 12: “d”). Since Georgescu of the combination of CAO, Yang, Georgescu teaches filtering edge artifacts, one of skill in the art of artifacts can make Georgescu’s of the combination of CAO, Yang, Georgescu be as Abrams’, predictably recognizing the change “used in a photolithography process, produces a wafer pattern more faithful to the corresponding target pattern, the wafer pattern having fewer undesirable distortions and artifacts.”, Abrams [0055] last S.

Conclusion

The prior art “nearest to the subject matter defined in the claims” (MPEP 707.05) made of record and not relied upon is considered pertinent to applicant's disclosure. The following table lists several references that are relevant to the subject matter claimed and disclosed in this Application. The references are not relied on by the Examiner, but are provided to assist the Applicant in responding to this Office action:

Citation / Relevance:

Kim et al. (US 10,986,356 B2): Kim teaches a decoder reproducing data according to an encoder threshold, via c. 23, ll. 5-15: “As another example, the size of the structurally reconstructed image 804 and the size of the downsampled image 808 may be determined according to an encoding quality that has been used more than a predetermined threshold value (e.g., an average quality of encoding qualities that have been used more than the predetermined threshold value may be used), based on the compression history information.” as the closest to the claimed “the decoded shape data reproduces the received set of shape data within a pre-determined threshold” of claim 1.

MAO et al. (Discriminative Autoencoding Framework for Simple and Efficient Anomaly Detection): MAO teaches reconstructing an image of a car as a shape/pattern/distribution of an anomalous horse (fig.
8) with a predefined anomaly threshold via the abstract: “In the testing process, the trained framework can be used as an autoencoder to reconstruct test samples, where the trained discriminative encoder works as an encoder, and samples with reconstruction errors above a predefined threshold are determined as anomalies.” as the closest to the claimed “the decoded shape data reproduces the received set of shape data within a pre-determined threshold” of claim 1. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENNIS ROSARIO whose telephone number is (571)272-7397. The examiner can normally be reached Monday-Friday, 9AM-5PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw can be reached at 571-272-4637. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DENNIS ROSARIO/Examiner, Art Unit 2676 /Henok Shiferaw/Supervisory Patent Examiner, Art Unit 2676 1 compress: to condense, shorten, or abbreviate (Dictionary.com) 2 code: a system used for brevity or secrecy of communication, in which arbitrarily chosen words, letters, or symbols are assigned definite meanings, wherein brevity is defined: shortness of time or duration; briefness, wherein brief is defined: using few words; concise; succinct, wherein succinct is defined: compressed into a small area, scope, or compass. (Dictionary.com) 3 set: a number, group, or combination of things of similar nature, design, or function. (Dictionary.com) 4 reproduce: to make a copy, representation, duplicate, or close imitation of (Dictionary.com) 5 decoder: Computers. a circuit designed to produce a single output when actuated by a certain combination of inputs. (Dictionary.com) 6 set: a number, group, or combination of things of similar nature, design, or function. (Dictionary.com) 7 match: a person or thing that is an exact counterpart of another, wherein counterpart is defined: a copy; duplicate. 8 reproduce: to produce again or anew by natural process. 
(Dictionary.com) 9 reconstruct: to construct again; rebuild; make over, wherein construct is defined: to build or form by putting together parts; frame; devise, wherein form is defined: to make or produce. (Dictionary.com) 10 set: A collection of distinct elements that have something in common. In mathematics, sets are commonly represented by enclosing the members of a set in curly braces, as {1, 2, 3, 4, 5}, the set of all positive integers from 1 to 5. (Dictionary.com) 11 Regarding “comprises” in view of applicant’s disclosure: [0050] While the specification has been described in detail with respect to specific embodiments, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing, may readily conceive of alterations to, variations of, and equivalents to these embodiments. These and other modifications and variations to the present methods may be practiced by those of ordinary skill in the art, without departing from the scope of the present subject matter, which is more particularly set forth in the appended claims. Furthermore, those of ordinary skill in the art will appreciate that the foregoing description is by way of example only, and is not intended to be limiting. Steps can be added to, taken from or modified from the steps in this specification without deviating from the scope of the invention. In general, any flowcharts presented are only intended to indicate one possible sequence of basic operations to achieve a function, and many variations are possible. Thus, it is intended that the present subject matter covers such modifications and variations as come within the scope of the appended claims and their equivalents.
12 comprises: to include or contain, wherein include is defined: to contain as a subordinate element; involve as a factor (Dictionary.com) 13 35 USC 112 CHECK: keep: to hold or retain in one's possession; hold as one's own. (Dictionary.com): checks-out under 35 USC 112 14 has: hold for use, wherein hold is defined: to keep in a specified state, relation, etc.. (Dictionary.com) 15 based: the simple past tense and past participle (participating in the action of the claimed “are used to determine”) of base, where base (USED WITHOUT OBJECT) is defined: to have a basis; be based (usually followed by on or upon), wherein based (USED WITH OBJECT) is defined: to make or form a base or foundation for, wherein base is defined: a fundamental principle or groundwork; foundation; basis, where basis is defined: a basic fact, amount, standard, etc., used in making computations, reaching conclusions, or the like. (Dictionary.com) 16 on: in connection, association, or cooperation with; as a part or element of. (Dictionary.com) 17 I see the phrase “based on” as broad: thus this narrow interpretation of claim 1 under 35 USC 103 is just a sub-set of the broadest reasonable interpretation of claim 1/applicant’s specification. 18 THE CLAIMED INVENTION AS A WHOLE: regarding the claimed (“weight” or “weights”) “based on design rules” being broad in applicant’s specification and thus spread over multiple portions of the specification: The problem in applicant’s disclosure is multi-faceted: insufficient image compression & any photomask defect: [0008] Image compression using standard methods of encoding and decoding the compressed image for integrated circuit data is insufficient for many reasons. The amount of data involved would take too much time and the data loss would be significant. An encoding that can completely replicate the original input exactly is lossless. An encoding that replicates the original input with some data loss is lossy.
A typical JPEG compression algorithm uses a linear function to down sample an image by looking at pixels in the neighborhood and storing the resulting differences. The JPEG compression algorithm also has a quantization phase which uses an encoding tree such as Huffman coding. While JPEG compression can be lossless, it can take a long time to process the data in either direction. However, image compression using machine learning techniques can encode and decode compressed images efficiently enough to be useful, even if the compression is lossy. [0009] In the manufacture of integrated circuits using a photomask, manufacture of the photomask containing the original circuit design is a critical step of the process. The final photomask must be defect-free, within a pre-determined tolerance, since any defect on the photomask will be reproduced on all wafers manufactured using that photomask. Due to limitations of materials and processes, most or all newly-fabricated photomasks will have imperfections. In a process called mask inspection, a newly-fabricated photomask is analyzed to find imperfections. Each of these imperfections, or potential defects, is then further analyzed to determine if the imperfection is a real defect that will cause a defect on wafers manufactured with this photomask. Imperfections that are identified as real defects can be repaired in a subsequent process called mask repair to create a defect-free photomask suitable for manufacturing wafers. The multi-faceted solution has “efficiency”: [0025] The autoencoder 200 generates compressed data 208 through training, by comparing the decoded mask image 212 to the input 202 and calculating a loss value. The loss value is a cost function, which is an average of the losses from multiple data points. For example, a loss may be calculated for each data point, then the average of these losses corresponds to the cost (loss value). 
In some embodiments, batch gradient descent may be used where for one training cycle, "n" losses for "n" training instances is calculated, but only one cost is used in determining the parameter update. In some embodiments, stochastic gradient descent may be used, where the parameter update is calculated after each loss (and thus the loss effectively corresponds to the cost). The encoded compressed data 208 retains only information needed to reproduce the original input, within a pre-determined threshold, using decoder 210. For example, the autoencoder may set parameters to weight more important information, such that training allows the neural network to learn what information to keep based on those weights. Retaining only information that is needed to reproduce the original input can reduce calculation time and therefore improve processing efficiency. I don’t see in claim 1 “The encoded compressed data…retains only information needed to reproduce the original input”. This absence of the multi-faceted solution is an indication of obviousness. Claim 1’s “design rules”-“keep”-“weights” have no clear direction to currently amended claim 1’s last “reproduces” limitation as disclosed in applicant’s [0025] “retains only information needed to reproduce the original”. 19 (italics) represent claim limitations already taught 20 given: assigned as a basis of calculation, reasoning, etc.. Given A and B, C follows. (Dictionary.com) 21 rule: the customary or normal circumstance, occurrence, manner, practice, quality, etc.. (Dictionary.com) 22 rule…regulate: to adjust to some standard or requirement, as amount, degree, etc., wherein requirement is defined: that which is required (Dictionary.com) 23 dose: Physics. A) Also called absorbed dose. the quantity of ionizing radiation absorbed by a unit mass of matter, especially living tissue, measured in grays: although increasingly disfavored, in the U.S. an absorbed dose may still be measured in rads. B) exposure dose. 
wherein radiation is defined: Physics. A) the process in which energy is emitted as particles or waves. B) the complete process in which energy is emitted by one body, transmitted through an intervening medium or space, and absorbed by another body. C) the energy transferred by these processes, wherein grays is defined: the standard unit of absorbed dose of radiation (such as x-rays) in the International System of Units (SI), equal to the amount of ionizing radiation absorbed when the energy imparted to matter is 1 J/kg (one joule per kilogram). Gy (Dictionary.com) 24 Also, at the least. According to the lowest possible assessment (“one”), no less than. For example, At least a dozen more chairs are needed, or The job will take four hours at the least. [c. 1050] (Dictionary.com) 25 and: (used to connect alternatives: “a number of convolution layers of the convolutional autoencoder” or “a kernel size of the convolution layers of the convolutional autoencoder”) (Dictionary.com)
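To make the halo limitation of claim 10 concrete: as described in the action above, each tile is reproduced together with a margin of neighboring pixels so that convolution edge artifacts land in the margin and can be cropped away after reconstruction. The NumPy sketch below is illustrative only; the function names, tile size, and halo width are assumptions, not taken from the application or the cited references. In practice the halo would be sized from the autoencoder's convolution depth and kernel size (roughly one half-kernel of receptive-field growth per convolution layer, per side).

```python
# Illustrative sketch only; names and sizes are assumptions,
# not the applicant's or any cited reference's implementation.
import numpy as np

def tile_with_halo(img, tile=64, halo=8):
    """Split a 2-D layout image into overlapping tiles.

    Each tile carries a `halo` of neighboring pixels on every side,
    so convolution edge artifacts fall in the halo, not the tile.
    """
    padded = np.pad(img, halo, mode="edge")  # replicate border pixels
    tiles = []
    for y in range(0, img.shape[0], tile):
        for x in range(0, img.shape[1], tile):
            tiles.append(padded[y:y + tile + 2 * halo,
                                x:x + tile + 2 * halo])
    return tiles

def crop_halo(tile_out, halo=8):
    """Discard the halo after the autoencoder reproduces the tile."""
    return tile_out[halo:-halo, halo:-halo]
```

With a 128x128 clip, tile=64 and halo=8, four 80x80 tiles would go through the autoencoder, and cropping the halo returns four clean 64x64 tiles that abut without seams.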

Prosecution Timeline

Oct 13, 2023
Application Filed
Oct 25, 2025
Non-Final Rejection — §103, §DP
Jan 08, 2026
Response Filed
Feb 20, 2026
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586184
METHODS AND APPARATUS FOR ANALYZING PATHOLOGY PATTERNS OF WHOLE-SLIDE IMAGES BASED ON GRAPH DEEP LEARNING
2y 5m to grant Granted Mar 24, 2026
Patent 12585733
SYSTEMS AND METHODS OF SENSOR DATA FUSION
2y 5m to grant Granted Mar 24, 2026
Patent 12536786
IMAGE LOCALIZATION USING A DIGITAL TWIN REPRESENTATION OF AN ENVIRONMENT
2y 5m to grant Granted Jan 27, 2026
Patent 12518519
PREDICTOR CREATION DEVICE AND PREDICTOR CREATION METHOD
2y 5m to grant Granted Jan 06, 2026
Patent 12518404
SYSTEMS AND METHODS FOR MACHINE LEARNING BASED PHYSIOLOGICAL MOTION MEASUREMENT
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
98%
With Interview (+28.6%)
3y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 557 resolved cases by this examiner. Grant probability derived from career allow rate.
