Prosecution Insights
Last updated: April 19, 2026
Application No. 18/043,552

IMAGE SEGMENTATION SYSTEM AND METHOD

Status: Final Rejection (§103)
Filed: Feb 28, 2023
Examiner: ROBERTS, RACHEL L
Art Unit: 2674
Tech Center: 2600 (Communications)
Assignee: Nanyang Technological University
OA Round: 2 (Final)

Grant Probability: 90% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% (above average; 17 granted / 19 resolved; +27.5% vs TC avg)
Interview Lift: +14.3% (moderate, among resolved cases with an interview)
Average Prosecution Time: 2y 10m
Currently Pending: 35 applications
Total Applications: 54 (across all art units)
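As a quick sanity check, the headline examiner figures above follow from the raw counts shown. Here is a short Python sketch; the Tech Center average is not stated directly, so inferring it from the +27.5% delta is an assumption about how that delta is computed.

```python
# Reproduce the examiner statistics from the raw counts in the panel above.
granted = 17      # applications granted
resolved = 19     # applications resolved (granted + abandoned)

allow_rate = 100 * granted / resolved      # career allowance rate, percent
# The panel displays this rounded as 90%.
print(f"Career allow rate: {allow_rate:.1f}%")     # -> 89.5%

# The TC average is not shown directly; it is implied here by the stated
# +27.5% delta (an assumption, since the panel does not define the baseline).
implied_tc_avg = allow_rate - 27.5
print(f"Implied TC average: {implied_tc_avg:.1f}%")  # -> 62.0%
```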

Statute-Specific Performance

§101: 12.1% (-27.9% vs TC avg)
§103: 65.1% (+25.1% vs TC avg)
§102: 7.9% (-32.1% vs TC avg)
§112: 12.1% (-27.9% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 19 resolved cases.

Office Action

§103
DETAILED ACTION

The United States Patent & Trademark Office acknowledges the response for the current application filed on 09/03/2025. The Office has reviewed the submitted documents and provides the following comments.

Amendment

Applicant submitted amendments on 09/03/2025. The Examiner acknowledges the amendment and has reviewed the claims accordingly.

Priority

Receipt is acknowledged of the application's claim of priority to foreign application No. SG10202008522X, dated 09/02/2020.

Information Disclosure Statement

The IDS dated 02/28/2023, which has been previously considered, remains placed in the application file.

Overview

Claims 18-34 are pending in this application and have been considered below. Claims 1-17 are canceled by the applicant. Claims 18-34 are rejected.

Applicant's Arguments

Argument 1: Applicant states that claim 22 has been amended to address the objection made by the Examiner.

Argument 2: Applicant states that "Arienzo's invention is significantly and fundamentally different from the method of segmenting a volumetric image recited in claim 18," and therefore the rejection under 35 U.S.C. 103 should be withdrawn (see Remarks, page 6, paragraph 1).

Argument 3: Applicant states that claim 18 "is not obvious over Arienzo in view of Masood for at least the following reasons," and therefore the rejection under 35 U.S.C. 103 should be withdrawn (see Remarks, page 6, paragraph 1).

Argument 4: Applicant states that "it can be understood by a person skilled in the art that Arienzo's invention is significantly and fundamentally different from the method of segmenting a volumetric image recited in claim 18," and therefore the rejection under 35 U.S.C. 103 should be withdrawn (see Remarks, page 8, paragraph 1).
Argument 5: Applicant states that "Arienzo fails to disclose any method of segmenting a volumetric image using a DNN having a multi-task learning architecture," and therefore the rejection under 35 U.S.C. 103 should be withdrawn (see Remarks, page 8, paragraph 2).

Argument 6: Applicant states that "Arienzo fundamentally fails to disclose or teach any multi-task learning architecture specifically comprising: (1) a segmentation DNN that is configured to output a segmentation of the target slice; and (2) a reconstruction DNN," and therefore the rejection under 35 U.S.C. 103 should be withdrawn (see Remarks, page 8, paragraph 3).

Argument 7: Applicants "disagree that Arienzo discloses the claimed 'reconstruction DNN' recited in claim 18. In particular, Applicants submit that such a paragraph of Arienzo fails to disclose any reconstruction DNN," and therefore the rejection under 35 U.S.C. 103 should be withdrawn (see Remarks, page 8, paragraph 4).

Argument 8: Applicants "submit that the mere input of a post-image reconstruction retina data (e.g., 3D volumetric OCT image(s)) to the TSC shown in FIG. 5A has nothing to do with the claimed 'reconstruction DNN' configured to receive a plurality of adjacent slices to the target slice and output a reconstruction of the target slice based on the plurality of adjacent slices' recited in claim 18," and therefore the rejection under 35 U.S.C. 103 should be withdrawn (see Remarks, page 9, paragraph 1).

Argument 9: Applicant states that "it also follows that Arienzo fails to disclose the limitation 'wherein the reconstruction DNN is further configured to share spatial information with the segmentation DNN' recited in claim 18," and therefore the rejection under 35 U.S.C. 103 should be withdrawn (see Remarks, page 9, paragraph 2).
Argument 10: Applicant states that "Masood fails to cure the above-mentioned deficiencies in Arienzo to arrive at claim 18," and therefore the rejection under 35 U.S.C. 103 should be withdrawn (see Remarks, page 9, paragraph 4).

Argument 11: Applicant states that "neither Arienzo nor Masood or a combination thereof, discloses a method of segmenting a volumetric image using a DNN having a multi-task learning architecture," and therefore the rejection under 35 U.S.C. 103 should be withdrawn (see Remarks, page 10, paragraph 2).

Argument 12: Applicant "respectfully requests the withdrawal of all claim rejections and prompts allowance of the claims" (see Remarks, page 10, paragraph 2).

Examiner's Responses

In response to Argument 1 (see Remarks filed 09/03/25 with respect to the objection): Applicant appears not to have amended the claims, and therefore the Examiner is not persuaded. The objection is maintained below.

In response to Argument 2, the Examiner respectfully disagrees. Applicant states that Arienzo's invention is significantly and fundamentally different from the method of segmenting a volumetric image recited in claim 18. Although the cited reference differs from the invention disclosed by Applicant, the language of Applicant's claims is sufficiently broad to reasonably read on the cited reference. Arienzo teaches that the method of extracting the features includes segmentation of the image into a full spatial image (¶0090), the image being segmented as part of preprocessing (¶0098). Additionally, Arienzo teaches that the segmenting of the image is of the retina fundus and blood vessels (¶0118); Arienzo further qualifies the images as 3D volumetric images (¶0130) and specifies one or more 3D volumetric OCT images (¶0106).
Additionally, the rejection was made in combination with Masood; in response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

In response to Argument 3, the Examiner respectfully disagrees with Applicant's argument that the combination is not obvious over Arienzo in view of Masood. The Examiner made a proper determination of obviousness under 35 U.S.C. §103 and provided an appropriate supporting rationale in view of the decision by the Supreme Court in KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 82 USPQ2d 1385 (2007). The Examiner's rationale is based on the Office's current understanding of the law and is believed to be fully consistent with the binding precedent of the Supreme Court. Furthermore, the Examiner supported the rejection under 35 U.S.C. §103 by clearly articulating the reason(s) why the claimed invention would have been obvious, citing the specific areas in the prior art references. Further, by clearly stating the modification of the inventions, the Examiner made the analysis explicit. Last, the Examiner did not make conclusory statements. The Court, quoting In re Kahn, 441 F.3d 977, 988, 78 USPQ2d 1329, 1336 (Fed. Cir. 2006), stated that "'[R]ejections on obviousness cannot be sustained by mere conclusory statements; instead, there must be some articulated reasoning with some rational underpinning to support the legal conclusion of obviousness.'" KSR, 550 U.S. at ___, 82 USPQ2d at 1396. Therefore, the Examiner has established a proper 35 U.S.C. §103 rejection over Arienzo in view of Masood, as detailed below.

In response to Argument 4, the Examiner respectfully disagrees.
Applicant states that Arienzo's invention is significantly and fundamentally different from the method of segmenting a volumetric image recited in claim 18. Although the cited reference differs from the invention disclosed by Applicant, the language of Applicant's claims is sufficiently broad to reasonably read on the cited reference. Arienzo teaches that the method of extracting the features includes segmentation of the image into a full spatial image (¶0090), the image being segmented as part of preprocessing (¶0098). Additionally, Arienzo teaches that the segmenting of the image is of the retina fundus and blood vessels (¶0118); Arienzo further qualifies the images as 3D volumetric images (¶0130) and specifies one or more 3D volumetric OCT images (¶0106). Additionally, the rejection was made in combination with Masood; in response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

In response to Argument 5, the Examiner respectfully disagrees. Arienzo teaches that the method of extracting the features includes segmentation of the image into a full spatial image (¶0090), the image being segmented as part of preprocessing (¶0098). Additionally, Arienzo teaches that the segmenting of the image is of the retina fundus and blood vessels (¶0118); Arienzo further qualifies the images as 3D volumetric images (¶0130) and specifies one or more 3D volumetric OCT images (¶0106). The Office Action stated that Arienzo does not explicitly teach a segmentation DNN; however, Masood does teach a segmentation DNN (Pg 8 ¶04, Masood: "After training the model it was used for the segmentation of the choroid layer.").
In response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

In response to Argument 6, the Examiner respectfully disagrees. Arienzo teaches a multi-task model (¶0118, Arienzo: "The inventors have recognized that by configuring a TSC as a multi-task model, the output of the TSC may be used to identify one or more locations of features of a person's retina fundus,") and post-image reconstruction data (¶0106, Arienzo: "In some embodiments, image 530 may include post-image reconstruction retina data such as one or more 3D volumetric OCT images."). The Office Action stated that Arienzo does not explicitly teach a segmentation DNN; however, Masood does teach a segmentation DNN (Pg 8 ¶04, Masood: "After training the model it was used for the segmentation of the choroid layer.") that is configured to output a segmentation (Pg 7 ¶04, Masood: "The output of this step is the final segmented BM boundary." See also Pg 3 ¶05-Pg 7 ¶06, Masood) of the target slice (Pg 8 ¶05, Masood: "our model processed each 2D OCT image sequentially in slices"). In response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

In response to Argument 7, the Examiner respectfully disagrees.
Arienzo teaches post-image reconstruction data (¶0106, Arienzo: "In some embodiments, image 530 may include post-image reconstruction retina data such as one or more 3D volumetric OCT images."). Further, in ¶0096 Arienzo teaches using a CNN to extract the data used in the reconstruction of the image. Our reviewing Court has made clear that examined claims are interpreted as broadly as is reasonable using ordinary and accustomed term meanings so as to be consistent with the Specification. In re Thrift, 298 F.3d 1357, 1364 (Fed. Cir. 2002). The Court further has explained that the interpretations are to be made while "taking into account whatever enlightenment by way of definitions or otherwise that may be afforded by the written description contained in the applicant's specification," In re Morris, 127 F.3d 1048, 1054 (Fed. Cir. 1997), but without reading limitations from examples given in the Specification into the claims, In re Zletz, 893 F.2d 319, 321-22 (Fed. Cir. 1989). Moreover, there is no ipsissimis verbis test for determining whether a reference discloses a claim element, i.e., identity of terminology is not required. In re Bond, 910 F.2d 831, 832 (Fed. Cir. 1990). Since a CNN is a specific type of DNN, the Examiner finds that, under the broadest reasonable interpretation, Arienzo does disclose a reconstruction DNN, and the rejection is therefore maintained below.

In response to Argument 8, the Examiner respectfully disagrees. Arienzo teaches post-image reconstruction data (¶0106, Arienzo: "In some embodiments, image 530 may include post-image reconstruction retina data such as one or more 3D volumetric OCT images."). Further, in ¶0096 Arienzo teaches using a CNN to extract the data used in the reconstruction of the image. Our reviewing Court has made clear that examined claims are interpreted as broadly as is reasonable using ordinary and accustomed term meanings so as to be consistent with the Specification. In re Thrift, 298 F.3d 1357, 1364 (Fed. Cir. 2002).
The Court further has explained that the interpretations are to be made while "taking into account whatever enlightenment by way of definitions or otherwise that may be afforded by the written description contained in the applicant's specification," In re Morris, 127 F.3d 1048, 1054 (Fed. Cir. 1997), but without reading limitations from examples given in the Specification into the claims, In re Zletz, 893 F.2d 319, 321-22 (Fed. Cir. 1989). Moreover, there is no ipsissimis verbis test for determining whether a reference discloses a claim element, i.e., identity of terminology is not required. In re Bond, 910 F.2d 831, 832 (Fed. Cir. 1990). Since a CNN is a specific type of DNN, the Examiner finds that, under the broadest reasonable interpretation, Arienzo does disclose a reconstruction DNN. Arienzo discloses a plurality of adjacent slices, as under the broadest reasonable interpretation "neighboring" is equivalent to "adjacent" (¶0106, Arienzo: "In some embodiments, the slices may be neighboring."); in ¶0106 Arienzo also discloses four or five respective adjacent slices, which satisfies "plurality." Arienzo discloses outputting a reconstruction of the target slice (¶0106, Arienzo: "In some embodiments, image 530 may include post-image reconstruction retina data such as one or more 3D volumetric OCT images.") based on the plurality of adjacent slices (¶0106, Arienzo: "multiple 2D images corresponding to slices of the person's retina fundus."). The opinion in In re Hiniker Co., 47 USPQ2d 1523 (Fed. Cir. 1998) stated: "...the name of the game is the claim. See Giles Sutherland Rich, Extent of Protection and Interpretation of Claims--American Perspectives, 21 Int'l Rev. Indus. Prop. & Copyright L. 497, 499 (1990) ('The U.S.
is strictly an examination country and the main purpose of the examination, to which every application is subjected, is to try to make sure that what each claim defines is patentable. To coin a phrase, the name of the game is the claim.')."

In response to Argument 9, the Examiner respectfully disagrees. Arienzo discloses a reconstruction DNN in ¶0096, where Arienzo teaches using a CNN to extract the data used in the reconstruction of the image. Since a CNN is a specific type of DNN, the Examiner finds that, under the broadest reasonable interpretation, Arienzo does disclose a reconstruction DNN. Arienzo also discloses that during segmentation of the feature, a full spatial map and the relative position and orientation of the features are included (¶0090). The Examiner agrees that Arienzo does not teach the segmentation CNN. However, Masood does disclose a segmentation model (Pg 8 ¶04, Masood: "After training the model it was used for the segmentation of the choroid layer."). In response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

In response to Argument 10, the Examiner respectfully disagrees. Masood discloses a segmentation DNN (Pg 8 ¶04, Masood: "After training the model it was used for the segmentation of the choroid layer."); the Examiner interprets, under the broadest reasonable interpretation, that the model is equivalent to a DNN because Pg 8 ¶04-05 of Masood discloses that the model is a trained CNN, and a CNN is a specific type of DNN. Masood also discloses that the DNN is configured to output a segmentation (Pg 7 ¶04, Masood: "The output of this step is the final segmented BM boundary."
See also Pg 3-Pg 7, Masood) of the target slice (Pg 8 ¶05, Masood: "our model processed each 2D OCT image sequentially in slices") and the spatial information being indicative of correlations between the adjacent slices and the target slice (Pg 4 ¶05, Masood: "For example, the spatial map may include a binary mask indicative of whether features such as branch endings 410 or bifurcations 420 are present at any particular location in the map. In some embodiments, a relative angle indicating locations of the features may be calculated based on the spatial map." Pg 2 ¶02, Masood: "In OCT imaging, some approaches find the correlation among the measurable and morphological topographies of retinal thickness maps. Such normal standards for the thickness maps can help physicians in comparing different patients' choroidal thickness maps with normal sets. Consequently, the automatic segmentation of the choroidal boundary has garnered the attention of many researchers worldwide."). Additionally, Figure 5 of Masood discloses the segmentation steps of the target slice. Therefore, the Examiner finds that Masood does cure the deficiencies in Arienzo, and the rejection is maintained below.

In response to Argument 11, the Examiner respectfully disagrees. Arienzo and Masood in combination do teach a method of segmenting a volumetric image using a DNN having a multi-task learning architecture. Arienzo teaches a method of segmenting (¶0090, Arienzo: "solving segmentation of the image(s)"; Fig. 3 discloses a method performed by a device; ¶0088 discloses a method with imaging steps) a volumetric image (¶0106, Arienzo: "multiple 2D images corresponding to slices of the person's retina fundus."; ¶0098 and ¶0118 disclose the image being segmented) using a DNN (Pg 8 ¶04, Masood: "After training the model it was used for the segmentation of the choroid layer.";
the Examiner interprets, under the broadest reasonable interpretation, that the model is equivalent to a DNN because Pg 8 ¶04-05 of Masood discloses that the model is a trained CNN, and a CNN is a specific type of DNN) having a multi-task learning architecture (¶0118, Arienzo: "The inventors have recognized that by configuring a TSC as a multi-task model, the output of the TSC may be used to identify one or more locations of features of a person's retina fundus,"; the Examiner interprets that "multi-task model," under the broadest reasonable interpretation, is equivalent to "multi-task learning architecture"). Additionally, the limitations of the claims were identified and correlated with the references as indicated above and in the first Office Action on the merits. Applicant has merely alleged that the limitations are not met, and thus has not provided any evidence or argument directed to how the identified elements in the first action fail to meet the claimed limitations, or to how the identified elements are otherwise distinguishable from the claimed limitations, as is required by 37 CFR §1.111(b). Therefore, the rejection is maintained below.

In response to Argument 12, the Examiner respectfully disagrees. As stated above, the references in combination teach claim 18 and all of its dependent claims. Therefore, the rejection is maintained below.

Claim Objections

Claim 22 is objected to because of the following informalities: "the output" in Claim 22, Lines 5-6, is not mentioned in the parent claim, which could lead to a lack of antecedent basis; it is suggested that "the output" in Claim 22, Lines 5-6, be changed to "an output". Appropriate correction is required.

Claim Interpretation

Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970).
As a general matter, the grammar and ordinary meaning of terms, as understood by one having ordinary skill in the art, used in a claim will dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional but does not require that feature or step does not limit the scope of a claim under the broadest reasonable claim interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int'l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).

Claim 21 recites "and/or" in listing "applying a dimension reduction mechanism to the first feature data and/or to the first reduced-dimension feature data". Since "and/or" is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. Because, on balance, the disjunctive interpretation appears to enjoy the most specification support, the disjunctive interpretation (one of A, B, or C) is being adopted for the purposes of this Office Action. Applicant's comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.

Claim 22 recites "and/or" in listing "the first reduced-dimension feature data to a 3D convolution layer; applying an aggregation of features between adjacent slices; applying batch normalization to the output of the 3D convolution layer; and applying a ReLU activation function to the output of the batch normalization". Since "and/or" is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required.
Because, on balance, the disjunctive interpretation appears to enjoy the most specification support, the disjunctive interpretation (one of A, B, or C) is being adopted for the purposes of this Office Action. Applicant's comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 18 and 29-34 are rejected under 35 U.S.C.
103 as being unpatentable over Arienzo et al. (US Patent Publication 20200193592 A1, hereafter referred to as Arienzo) in view of Masood et al. (Masood, S., Fang, R., Li, P. et al. Automatic Choroid Layer Segmentation from Optical Coherence Tomography Images Using Deep Learning. Sci Rep 9, 3058 (2019), hereafter referred to as Masood).

Regarding Claim 18, Arienzo teaches a method of segmenting (¶0090, Arienzo: "solving segmentation of the image(s)") a volumetric image (¶0106, Arienzo: "one or more 3D volumetric OCT images.") comprising a plurality of slices (¶0106, Arienzo: "multiple 2D images corresponding to slices of the person's retina fundus."), the method comprising: inputting a target slice of the volumetric image to a deep neural network (DNN) (¶0106, Arienzo: "The CNN-LSTM neural network of FIGS. 5A and 5C may receive the series of images as inputs") having a multi-task learning architecture (¶0118, Arienzo: "The inventors have recognized that by configuring a TSC as a multi-task model, the output of the TSC may be used to identify one or more locations of features of a person's retina fundus,"), the multi-task learning architecture comprising: a reconstruction DNN (¶0106, Arienzo: "In some embodiments, image 530 may include post-image reconstruction retina data such as one or more 3D volumetric OCT images.") that is configured to: receive a plurality of adjacent slices to the target slice (¶0106, Arienzo: "In some embodiments, the slices may be neighboring."); and output a reconstruction of the target slice (¶0106, Arienzo: "In some embodiments, image 530 may include post-image reconstruction retina data such as one or more 3D volumetric OCT images.") based on the plurality of adjacent slices (¶0106, Arienzo: "multiple 2D images corresponding to slices of the person's retina fundus."); wherein the reconstruction DNN is further configured to share spatial information with the segmentation DNN (¶0090, Arienzo: "a full spatial map including relative positions
and orientations of the individual features."). Arienzo does not explicitly teach a segmentation DNN that is configured to output a segmentation of the target slice, and the spatial information being indicative of correlations between the adjacent slices and the target slice. Masood is in the same field of image segmentation of medical images. Further, Masood teaches a segmentation DNN (Pg 8 ¶04, Masood: "After training the model it was used for the segmentation of the choroid layer.") that is configured to output a segmentation (Pg 7 ¶04, Masood: "The output of this step is the final segmented BM boundary." See also Pg 3 ¶05-¶06, Masood) of the target slice (Pg 8 ¶05, Masood: "our model processed each 2D OCT image sequentially in slices") and the spatial information being indicative of correlations between the adjacent slices and the target slice (Pg 4 ¶05, Masood: "For example, the spatial map may include a binary mask indicative of whether features such as branch endings 410 or bifurcations 420 are present at any particular location in the map. In some embodiments, a relative angle indicating locations of the features may be calculated based on the spatial map." Pg 2 ¶02, Masood: "In OCT imaging, some approaches find the correlation among the measurable and morphological topographies of retinal thickness maps. Such normal standards for the thickness maps can help physicians in comparing different patients' choroidal thickness maps with normal sets. Consequently, the automatic segmentation of the choroidal boundary has garnered the attention of many researchers worldwide." Also see Figure 5 from Masood below).
[Image: Figure 5 of Masood]

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Arienzo by incorporating a segmentation DNN that outputs a segmentation of the target slice, as taught by Masood, to make an invention that can reconstruct a 3D image of the slice based on different features identified by the segmentation; thus, one of ordinary skill in the art would be motivated to combine the references since there is a need for increased accuracy in identifying changes in the shape and anatomical structure of the choroid, as acknowledged in primary macular degeneration and in other advanced diseases, and quantitative and qualitative analysis of BM and the choroid layer can help in understanding the relationship between these various retinal diseases (Masood, Pg 1 ¶2). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding Claim 29, Arienzo in view of Masood discloses a method according to claim 18, wherein the volumetric image is a 3D medical image (Pg 3 ¶03, Masood: "have been proposed and implemented to detect the intra-retinal layers from 2D or 3D OCT images."). See rationale for Claim 18 (its parent claim).

Regarding Claim 30, Arienzo in view of Masood discloses a method according to claim 29, wherein the 3D medical image is a 3D optical coherence tomography (OCT) image (Pg 4 ¶02, Masood: "The method made use of CNN to perform the retinal layer segmentation in OCT images."). See rationale for Claim 18 (its parent claim).
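The multi-task learning architecture that the rejection maps onto claim 18 (a reconstruction branch fed the adjacent slices and a segmentation branch for the target slice, with spatial information shared between the two) can be illustrated with a minimal NumPy forward pass. This is a hypothetical toy sketch of the claimed arrangement, not code from the application, Arienzo, or Masood; the averaging "encoder" and thresholding "head" merely stand in for trained DNNs.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8                               # toy slice resolution
target = rng.random((H, W))             # target slice of the volumetric image
adjacent = rng.random((4, H, W))        # plurality of adjacent slices

# Reconstruction branch: encode the adjacent slices into a shared spatial
# feature map, then "reconstruct" the target slice from it.
shared_spatial = adjacent.mean(axis=0)  # stand-in for a learned encoder
reconstruction = shared_spatial         # stand-in for a learned decoder

# Segmentation branch: consumes the target slice together with the spatial
# information shared by the reconstruction branch, then emits a binary mask.
fused = 0.5 * target + 0.5 * shared_spatial
segmentation = (fused > fused.mean()).astype(np.uint8)

print(reconstruction.shape, segmentation.shape)   # (8, 8) (8, 8)
```

The point of the sketch is the wiring, not the layers: both outputs derive from one forward pass, and the segmentation branch consumes the feature map produced by the reconstruction branch, which is the "share spatial information" limitation the parties dispute.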
Regarding Claim 31, Arienzo in view of Masood discloses a method according to claim 30, wherein the 3D OCT image is a retinal image, and wherein the target slice corresponds to a layer of the choroid (Figure 2(b), Masood: "Contains OCT image manually segmented by the experts, the image contains segmented BM and Choroid layer where BM is marked in green and choroid layer is marked in red."). See rationale for Claim 18 (its parent claim).

[Image: Figure 2 of Masood]

Regarding Claim 32, Arienzo in view of Masood discloses a method according to claim 31, wherein the method is repeated (Pg 3 ¶03, Masood: "continuous max-flow method to segment out different retinal layers.") for a plurality of target slices (¶0106, Arienzo: "multiple 2D images corresponding to slices of the person's retina fundus."); and wherein the method further comprises generating a choroidal thickness map from segmentation (Pg 4 ¶03, Masood: "The desired layers included BM and the choroid layer. BM was segmented out using a series of morphological operations followed by the use of CNNs for choroid segmentation. Then, a thickness map was generated based on the extracted layers.") of the plurality of target slices (Pg 12 ¶05, Masood: "batches of a size of 50 slices"). See rationale for Claim 18 (its parent claim).
Regarding Claim 33, Arienzo in view of Masood discloses a system (Pg 12 ¶05, Masood, "Our algorithm was implemented in MATLAB R2016b") for segmentation of a volumetric image (Pg 2 ¶08, Masood, "A two-stage segmentation approach with emphasis on segmentation accuracy and consistency") comprising a plurality of slices (Pg 12 ¶05, Masood, "batches of a size of 50 slices"), comprising: at least one processor (Pg 12 ¶05, Masood, "with Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20 GHz (10 cores) processor,"); and computer-readable storage (Pg 12 ¶05, Masood, "64 GB of RAM space and an operating system of 64 bits.") having stored thereon instructions (Pg 12 ¶05, Masood, "run on a server with a GPU of 12 with 2 GB memory,") for causing the at least one processor (Pg 12 ¶05, Masood, "with Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20 GHz (10 cores) processor,") to carry out a method (Pg 12 ¶05, Masood, "Our algorithm was implemented in MATLAB R2016b") according to claim 18. See rationale for Claim 18 (its parent claim).

Regarding Claim 34, Arienzo in view of Masood discloses non-transitory computer-readable storage having instructions stored thereon for causing at least one processor to carry out a method according to claim 18 (Pg 12 ¶05, Masood, "Our algorithm was implemented in MATLAB R2016b and run on a server with a GPU of 12 with 2 GB memory, with Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20 GHz (10 cores) processor, 64 GB of RAM space and an operating system of 64 bits."). See rationale for Claim 18 (its parent claim).

Claims 19-28 are rejected under 35 U.S.C. 103 as being unpatentable over Arienzo in view of Masood, further in view of Huang et al. (US Patent Publication 20200065969 A1, hereafter referred to as Huang).
Regarding Claim 19, Arienzo in view of Masood teaches a method according to claim 18, wherein the reconstruction DNN (Pg 6 ¶04, Masood, "a reconstruction approach based on the morphological opening was applied to remove the noise and preserve the shape of BM layer") comprises a convolutional feature extractor (¶0095, Arienzo, "convolutional neural network (FCNN). FIG. 7 illustrates an FCNN configured to identify boundaries of features in an image of a person's retina fundus.") for generating first feature data from the adjacent slices (¶0092, Arienzo, "The inventors have recognized that implementing TSCs for extracting feature data from captured images facilitates identification using multiple captured images."), and for generating first reduced-dimension feature data from the first feature data at one or more scales (¶0090, Arienzo, "To conserve storage space and/or simplify computing of the spatial map, thickness of some features such as veins may be reduced to a single pixel width."). Arienzo in view of Masood does not explicitly disclose a reconstruction downsampler. Huang is in the same field of image segmentation and processing of medical images. Further, Huang teaches a reconstruction downsampler (¶0048, Huang, "convolutional layers are arranged with downsampling").

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Arienzo in view of Masood by incorporating the reconstruction downsampler and upsampler taught by Huang, to make an invention that can reconstruct a 3D image of the slice using identified features that have been scaled to a desired size. One of ordinary skill in the art would be motivated to combine the references since there is a need for improved segmentation from raw data (Huang, ¶0004).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding Claim 20, Arienzo in view of Masood in view of Huang teaches a method according to claim 19, wherein the reconstruction DNN (Pg 6 ¶04, Masood, "a reconstruction approach based on the morphological opening was applied to remove the noise and preserve the shape of BM layer") comprises a reconstruction upsampler (¶0053, Huang, "upsampling layers with convolution at each,") for transforming the first reduced-dimension feature data to first upsampled data having the same dimensions as the first feature data (¶0091, Arienzo, "In some embodiments, the feature data may include relative positions and orientations of translationally and rotationally invariant features to facilitate a Scale Invariant Feature Transform (SIFT)"). See motivation for claim 19 (its parent claim).

Regarding Claim 21, Arienzo in view of Masood in view of Huang teaches a method according to claim 19, wherein the reconstruction DNN (Pg 6 ¶04, Masood, "a reconstruction approach based on the morphological opening was applied to remove the noise and preserve the shape of BM layer") comprises one or more dimension reduction layers (¶0090, Arienzo, "To conserve storage space and/or simplify computing of the spatial map, thickness of some features such as veins may be reduced to a single pixel width.") for applying a dimension reduction mechanism to the first feature data (¶0090, Arienzo, "image(s) into a full spatial map including relative positions and orientations of the individual features.") and/or to the first reduced-dimension feature data (¶0090, Arienzo, "Rescale the images to have the same radius (e.g., 300 pixels)").
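The reconstruction downsampler and upsampler mapped to Huang in claims 19 and 20 follow the familiar encoder-decoder pattern: feature data is reduced in spatial dimension, then transformed back to the original dimensions. A minimal NumPy sketch; the pooling and nearest-neighbour choices, channel count, and shapes are illustrative assumptions rather than Huang's actual layers:

```python
import numpy as np

def downsample(x):
    """2x2 max-pooling with stride 2, standing in for 'convolutional
    layers ... arranged with downsampling':
    (channels, h, w) -> (channels, h/2, w/2)."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def upsample(x):
    """Nearest-neighbour 2x upsampling: restores the reduced-dimension
    feature data to the same spatial dimensions as the input features."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

feat = np.random.rand(8, 32, 32)   # first feature data (assumed shape)
reduced = downsample(feat)         # first reduced-dimension feature data
restored = upsample(reduced)      # first upsampled data, same dims as feat
```

Applying `downsample` at successive scales yields the "first reduced-dimension feature data ... at one or more scales" of claim 19, and `upsample` plays the role of the claim 20 reconstruction upsampler.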
Regarding Claim 22, Arienzo in view of Masood in view of Huang teaches a method according to claim 21, wherein the dimension reduction mechanism (¶0091, Arienzo, "In some embodiments, the feature data may include relative positions and orientations of translationally and rotationally invariant features to facilitate a Scale Invariant Feature Transform (SIFT)") comprises: inputting the first feature data and/or the first reduced-dimension feature data (¶0090, Arienzo, "To conserve storage space and/or simplify computing of the spatial map, thickness of some features such as veins may be reduced to a single pixel width.") to a 3D convolution layer (¶0180, Arienzo, "CNN 500a may include a fully-3D processing pipeline"); applying an aggregation of features between adjacent slices (¶0146, Arienzo, "particular retina fundus feature or group of features, for example including data associated with multiple images indicating the particular feature(s) in the multiple images."); applying batch normalization to the output of the 3D convolution layer (Pg 8 ¶04, Masood, "There was also a linear classifier for contrast normalization on top of all these layers."); and applying a ReLU activation function (Pg 8 ¶05, Masood, "The convolutional layers were followed by max pooling and Rectified Linear Units (ReLU).") to the output of the batch normalization (Pg 8 ¶04, Masood, "There was also a linear classifier for contrast normalization on top of all these layers."). See rationale for claim 19 (its parent claim).

Regarding Claim 23, Arienzo in view of Masood in view of Huang teaches a method according to claim 21, wherein layers of the reconstruction downsampler are connected to layers of the reconstruction upsampler (¶0048, Huang, "FIG.
3B shows an example where convolutional layers are arranged with downsampling and upsampling into an encoder-decoder pair connected at a bottleneck") via respective ones of the dimension reduction layers by concatenation (¶0048, Huang, "Skip connections at the same resolution are shown."). See rationale for claim 19 (its parent claim).

[Image: media_image3.png]

Regarding Claim 24, Arienzo in view of Masood teaches a method according to claim 18, wherein the segmentation DNN (Pg 8 ¶04, Masood, "After training the model it was used for the segmentation of the choroid layer.") comprises generation of second reduced-dimension feature data (¶0090, Arienzo, "To conserve storage space and/or simplify computing of the spatial map, thickness of some features such as veins may be reduced to a single pixel width.") from the second feature data at one or more scales (¶0090, Arienzo, "Rescale the images to have the same radius (e.g., 300 pixels)"). Arienzo in view of Masood does not explicitly teach a convolutional feature extractor for generating second feature data from the target slice and a segmentation downsampler for generating second feature data. Huang is in the same field of image segmentation and processing of medical images. Further, Huang teaches a convolutional feature extractor (¶0024, Huang, "utilizes intermediate segmentation estimation to facilitate image-domain feature extraction from the raw data,") for generating second feature data from the target slice (¶0062, Huang, "Each of the four segmentation probability maps 28, 29 are element-wise (i.e., location-by-location) multiplied with the input image features x_(i-1) ∈ ℝ^(2×w×h) to generate new features"), and a segmentation downsampler (¶0048, Huang, "convolutional layers are arranged with downsampling") for generating second feature data (¶0062, Huang, "to generate new features").
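The claim 22 pipeline (a 3D convolution aggregating features between adjacent slices, followed by batch normalization and a ReLU activation) can be sketched in NumPy. The 3-tap depth averaging standing in for the 3D convolution, the normalization axes, and the tensor shapes are all illustrative assumptions, not the claimed network's actual parameters:

```python
import numpy as np

def aggregate_adjacent_slices(x):
    """Stand-in for a 3D convolution with a depth-3 kernel: each slice
    is averaged with its zero-padded neighbours along the slice axis.
    x has shape (channels, slices, height, width)."""
    p = np.pad(x, ((0, 0), (1, 1), (0, 0), (0, 0)))
    return (p[:, :-2] + p[:, 1:-1] + p[:, 2:]) / 3.0

def batch_norm(x, eps=1e-5):
    """Per-channel normalization over the slice/spatial axes."""
    mean = x.mean(axis=(1, 2, 3), keepdims=True)
    var = x.var(axis=(1, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def relu(x):
    return np.maximum(x, 0.0)

feats = np.random.randn(4, 5, 8, 8)   # (C, slices, H, W), assumed shape
out = relu(batch_norm(aggregate_adjacent_slices(feats)))
```

The composition order mirrors the claim language: convolution/aggregation first, batch normalization applied to its output, and ReLU applied to the normalized output.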
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Arienzo in view of Masood by incorporating the reconstruction downsampler and upsampler and a feature extractor that generates second feature data as taught by Huang, to make an invention that can reconstruct a 3D image of the slice using multiple identified features that have been scaled to a desired size. One of ordinary skill in the art would be motivated to combine the references since there is a need for improved segmentation from raw data (Huang, ¶0004). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding Claim 25, Arienzo in view of Masood in view of Huang teaches a method according to claim 24, wherein the segmentation DNN (Pg 8 ¶04, Masood, "After training the model it was used for the segmentation of the choroid layer.") comprises a segmentation upsampler (¶0053, Huang, "upsampling layers with convolution at each,") for transforming (¶0091, Arienzo, "feature data may include relative positions and orientations of translationally and rotationally invariant features to facilitate a Scale Invariant Feature Transform (SIFT)") the second reduced-dimension feature data (¶0062, Huang, "to generate new features") to second upsampled data having the same dimensions as the second feature data (¶0090, Arienzo, "Rescale the images to have the same radius (e.g., 300 pixels)"). See rationale for claim 24 (its parent claim).

Regarding Claim 26, Arienzo in view of Masood in view of Huang teaches a method according to claim 24, wherein layers of the segmentation downsampler are connected to layers of the segmentation upsampler (¶0048, Huang, "FIG. 3B shows an example where convolutional layers are arranged with downsampling and upsampling into an encoder-decoder pair connected at a bottleneck").
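The downsampler-to-upsampler connections discussed for claims 23 and 26 are commonly realized as skip connections that concatenate encoder features onto same-resolution decoder features along the channel axis, U-Net style, which matches Huang's "skip connections at the same resolution". A hedged sketch, with channel counts and shapes chosen only for illustration:

```python
import numpy as np

def skip_concat(encoder_feat, decoder_feat):
    """Skip connection by concatenation: encoder (downsampler) features
    at a given resolution are stacked channel-wise onto the decoder
    (upsampler) features of the same spatial size."""
    assert encoder_feat.shape[1:] == decoder_feat.shape[1:], "same resolution required"
    return np.concatenate([encoder_feat, decoder_feat], axis=0)

enc = np.random.rand(16, 32, 32)   # encoder output at this resolution
dec = np.random.rand(16, 32, 32)   # decoder output at the same resolution
merged = skip_concat(enc, dec)     # 32 channels, spatial size unchanged
```

Concatenation (rather than addition) preserves both feature streams intact and leaves it to the following convolution to learn how to mix them, which is why the claim recites connection "by concatenation".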
See rationale for claim 24 (its parent claim).

Regarding Claim 27, Arienzo in view of Masood in view of Huang teaches a method according to claim 24, wherein the reconstruction DNN (Pg 6 ¶04, Masood, "a reconstruction approach based on the morphological opening was applied to remove the noise and preserve the shape of BM layer") is configured to share spatial information (Pg 4 ¶05, "For example, the spatial map may include a binary mask indicative of whether features such as branch endings 410 or bifurcations 420 are present at any particular location in the map. In some embodiments, a relative angle indicating locations of the features may be calculated based on the spatial map.") with the segmentation DNN by element-wise addition (¶0062, Huang, "Each of the four segmentation probability maps 28, 29 are element-wise (i.e., location-by-location) multiplied with the input image features x_(i-1) ∈ ℝ^(2×w×h) to generate new features") of output of layers of the reconstruction upsampler to output of layers of the segmentation upsampler (¶0048, Huang, "Skip connections at the same resolution are shown. In alternative embodiments, skip connections are not used, other numbers of layers are provided, and/or other numbers of kernels are used."). See rationale for claim 24 (its parent claim).

Regarding Claim 28, Arienzo in view of Masood teaches a method according to claim 18. Arienzo in view of Masood does not explicitly teach wherein the loss function of the segmentation DNN is the 2D Intersection over Union (IoU) loss function. Huang is in the same field of image segmentation and processing of medical images. Further, Huang teaches wherein the loss function of the segmentation DNN (¶0013, Huang, "The training is with a segmentation loss and not a reconstruction loss") is the 2D Intersection over Union (IoU) loss function (¶0084, Huang, "the machine-learned network was trained with a segmentation loss").
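The 2D Intersection-over-Union loss recited in claim 28 (which the Office maps to Huang's more general "segmentation loss") has a standard soft form, 1 - |A∩B| / |A∪B|. A minimal sketch, assuming probability-valued predictions and binary target masks:

```python
import numpy as np

def iou_loss(pred, target, eps=1e-6):
    """Soft 2D IoU loss: 1 - intersection / union. pred holds per-pixel
    probabilities in [0, 1]; target is a binary mask of the same shape.
    eps guards against division by zero for an empty union."""
    intersection = (pred * target).sum()
    union = pred.sum() + target.sum() - intersection
    return 1.0 - intersection / (union + eps)

mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
perfect = iou_loss(mask, mask)           # near 0 for an exact match
disjoint = iou_loss(1.0 - mask, mask)    # near 1 for no overlap
```

Unlike pixel-wise cross-entropy, the IoU loss directly optimizes region overlap, which is why it is often preferred for thin-layer segmentation where foreground pixels are a small fraction of the image.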
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Arienzo in view of Masood by incorporating the loss function taught by Huang, to make an invention that can reconstruct a 3D image of the slice using a network that is trained based on a loss function. One of ordinary skill in the art would be motivated to combine the references since there is a need for improved segmentation from raw data (Huang, ¶0004). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Conclusion

36. THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

37. A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

38. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL LYNN ROBERTS whose telephone number is (571) 272-6413. The examiner can normally be reached Monday through Friday, 7:30am-5:00pm.

39. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

40. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Oneal Mistry, can be reached at 313-446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

41. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RACHEL L ROBERTS/
Examiner, Art Unit 2674

/ONEAL R MISTRY/
Supervisory Patent Examiner, Art Unit 2674

Prosecution Timeline

- Feb 28, 2023: Application Filed
- May 30, 2025: Non-Final Rejection — §103
- Sep 03, 2025: Response Filed
- Oct 30, 2025: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology (5 most recent grants):

- Patent 12581132: LARGE-SCALE POINT CLOUD-ORIENTED TWO-DIMENSIONAL REGULARIZED PLANAR PROJECTION AND ENCODING AND DECODING METHOD (granted Mar 17, 2026; 2y 5m to grant)
- Patent 12569208: PET APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM (granted Mar 10, 2026; 2y 5m to grant)
- Patent 12564324: IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING SYSTEM FOR ABNORMALITY DETECTION (granted Mar 03, 2026; 2y 5m to grant)
- Patent 12561773: METHOD AND APPARATUS FOR PROCESSING IMAGE, ELECTRONIC DEVICE, CHIP AND MEDIUM (granted Feb 24, 2026; 2y 5m to grant)
- Patent 12525028: CONTACT OBJECT DETECTION APPARATUS AND NON-TRANSITORY RECORDING MEDIUM (granted Jan 13, 2026; 2y 5m to grant)


Prosecution Projections

- Expected OA Rounds: 3-4
- Grant Probability: 90%
- Grant Probability With Interview: 99% (+14.3%)
- Median Time to Grant: 2y 10m
- PTA Risk: Moderate

Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
