Prosecution Insights
Last updated: April 19, 2026
Application No. 18/546,811

METHOD AND APPARATUS OF BOUNDARY REFINEMENT FOR INSTANCE SEGMENTATION

Final Rejection — §102, §103
Filed
Aug 17, 2023
Examiner
RHIM, WOO CHUL
Art Unit
2676
Tech Center
2600 — Communications
Assignee
Tsinghua University
OA Round
2 (Final)
Grant Probability
80% — Favorable
OA Rounds
3-4
To Grant
2y 11m
With Interview
99%

Examiner Intelligence

Career Allow Rate
80% — above average (112 granted / 140 resolved; +18.0% vs TC avg)
Interview Lift
+21.4% — strong lift among resolved cases with interview
Avg Prosecution
2y 11m typical timeline; 28 currently pending
Total Applications
168 across all art units (career history)
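The headline numbers in this panel are simple ratios. A short sketch of the arithmetic; the per-group interview counts are not shown on the panel, so the interview-rate inputs below are placeholders chosen only to illustrate the calculation:

```python
# Career allow rate: granted / resolved, from the panel above.
granted, resolved = 112, 140
allow_rate = granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point difference in allowance rate with vs. without an interview."""
    return (rate_with - rate_without) * 100

print(f"allow rate: {allow_rate:.0%}")  # allow rate: 80%
```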

Statute-Specific Performance (allowance rate after each rejection type)

§101: 7.4% (-32.6% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§102: 23.2% (-16.8% vs TC avg)
§112: 19.0% (-21.0% vs TC avg)
Deltas measured against the Tech Center average estimate • Based on career data from 140 resolved cases
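The deltas above are all consistent with a single Tech Center average estimate of 40% per statute (e.g., 7.4% - (-32.6%) = 40.0%). A quick check of that back-computation:

```python
# Examiner per-statute allowance rates are from the panel above; the TC
# averages are back-computed from the displayed deltas (all work out to 40.0%).
examiner = {"101": 7.4, "103": 47.1, "102": 23.2, "112": 19.0}
tc_avg   = {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}

deltas = {s: round(examiner[s] - tc_avg[s], 1) for s in examiner}
print(deltas)  # {'101': -32.6, '103': 7.1, '102': -16.8, '112': -21.0}
```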

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 02/09/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendments

The submission dated 01/23/2026 amends claims 15, 19, 21, and 24-27 and adds claims 28-29. Claims 1-14 were previously cancelled. Claims 15-29 are pending.

Response to Arguments

Applicant's arguments in the submission dated 01/23/2026 have been fully considered, but they are not persuasive. On page 6 of the submission, the applicant argues that Price as applied does not teach generating a respective refined mask patch because 1) Price's image strips are sampled from the digital image and belong to the image rather than a mask, and 2) Price's boundary data consists of coordinates/locations of the boundary and is not a refined patch-to-patch mask. The examiner disagrees. With respect to argument 1), the examiner did not rely on the image strips as the claimed refined mask. As presented in the previous Office action and in the updated §102 rejection below, the image strips are relied on to disclose the claimed image patches. With respect to argument 2), the examiner directs the applicant's attention to the boundary data, e.g., 316 in FIG. 3B, which is a collection of refined masks predicted from the corresponding image strips, not merely coordinates/locations of a boundary. For the aforementioned reasons, the examiner finds arguments 1) and 2) not persuasive. On pages 6 and 7 of the submission, the applicant argues generally that Price's teaching results in information loss and that the claimed invention provides a robust boundary refinement that achieves better overall accuracy than Price.
Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. As such, the examiner finds the arguments not persuasive.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 15, 16, 19-21, 23, 24, 26, and 27 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by U.S. Patent Application Publication No. 2021/0295525 to Price et al. (hereinafter Price).

For claim 15, Price discloses a method for instance segmentation, comprising the following steps: receiving an image and an instance mask identifying an instance in the image (see, e.g., pars. 44-45 and FIG. 1, which teach receiving a digital image depicting objects and a mask describing the objects depicted in the image); extracting a set of image patches from the image based on a boundary of the instance mask (see, e.g., pars. 52-55 and FIGS. 2 and 3A, which teach sampling the image along the curve that is mapped to the contour of the object in the mask, wherein the sampling includes extracting strips of pixels that are within a defined threshold of the points along the curve); generating a respective refined mask patch for each of the set of image patches based on at least a part of the instance mask corresponding to each of the set of image patches (see, e.g., pars. 58-60 and FIGS. 2 and 3B, which teach generating the boundary data from the strip image, i.e., the collection of extracted pixel strips, wherein the boundary data includes the boundary coordinates/locations and representation; the examiner interprets the boundary data as the claimed refined mask patches because it represents a collection of respective mask strips for the collection of pixel strips); and refining the boundary of the instance mask based on the respective refined mask patch for each of the set of image patches (see, e.g., pars. 60-61 and FIGS. 2 and 3C, which teach generating, from the boundary data, a strip recovery that represents a refined boundary of the mask).

For claim 26, Price discloses an apparatus for instance segmentation, comprising: a memory (see, e.g., pars. 102-105 and FIG. 8); and at least one processor coupled to the memory and configured for instance segmentation (see, e.g., pars. 102-104 and FIG. 8), the at least one processor configured to receive an image and an instance mask, extract a set of image patches, generate a respective refined mask patch, and refine the boundary of the instance mask, on the same citations and for the same reasons set forth above for claim 15.

For claim 27, Price discloses a non-transitory computer readable medium (see, e.g., pars. 100-109 and FIG. 8) on which is stored computer code for instance segmentation, the computer code, when executed by a processor (see, e.g., pars. 99-102 and FIG. 8), causing the processor to perform the steps recited in claim 15, which Price discloses on the same citations and for the same reasons set forth above for claim 15.

For claim 16, Price discloses that a center of an image patch in the set of image patches covers the boundary of the instance mask (see, e.g., pars. 52-54 and FIGS. 2 and 3A, which teach sampling the image along the curve that is mapped to the contour of the object in the mask, wherein the sampling includes extracting strips of pixels that are within a defined threshold of the points along the curve; the examiner interprets the strips to be centered on the points along the curve).

For claim 19, Price teaches: extracting a set of mask patches from the instance mask based on the boundary of the instance mask (see, e.g., pars. 51-52, which teach receiving curve data describing the upsampled curve; the examiner interprets the received curve data as the claimed set of mask patches), each of the set of mask patches covering a corresponding image patch of the set of image patches (see, e.g., pars. 51-52; the examiner interprets each point in the received curve data as the claimed mask patch because each point centers a corresponding strip of pixels); wherein the generating of the respective refined mask patch for each of the set of image patches is based on a corresponding mask patch of the set of mask patches (see, e.g., pars. 58-60 and FIGS. 2 and 3B, which teach generating boundary data for the strip image, i.e., the collection of extracted pixel strips, wherein the boundary data is based on the curve data).

For claim 20, Price discloses that each of the set of mask patches provides context information for a corresponding image patch, the context information indicating location and semantic information of the instance in the corresponding image patch (see, e.g., pars. 28, 30, 34, and 60, which teach that the curve data provides the pixels to be extracted along the curve, wherein the extracted pixels have corresponding locations and semantic contents of the image).

For claim 21, Price discloses: performing binary segmentation on each of the set of image patches through a semantic segmentation network (see, e.g., par. 59 and FIGS. 2 and 3B, which teach generating the boundary data, which is in black and white, and segmenting the object from the background).

For claim 23, Price discloses that each of the set of image patches is resized to match an input size of the semantic segmentation network (see, e.g., pars. 57-58 and FIGS. 2 and 3B, which teach resizing the sampled pixels to generate a strip image, which is input into the segmentation network).
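As mapped above, claim 15 recites extracting image patches along the mask boundary, generating a refined mask patch per image patch, and refining the boundary from those patches. A minimal NumPy sketch of that flow, with hypothetical helper names and a stubbed-out refiner; this illustrates the claim language only, not the applicant's or Price's actual implementation:

```python
import numpy as np

def boundary_pixels(mask: np.ndarray) -> np.ndarray:
    """Coordinates of mask pixels with at least one background 4-neighbor."""
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return np.argwhere(mask & ~interior.astype(bool))

def extract_patches(image, mask, size=3):
    """Yield (slice, image patch, mask patch) triples centered on boundary points."""
    r = size // 2
    h, w = mask.shape
    for y, x in boundary_pixels(mask):
        if r <= y < h - r and r <= x < w - r:  # skip patches falling off the image
            sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
            yield sl, image[sl], mask[sl]

def refine_boundary(image, mask, refiner, size=3):
    """Claim-15-style loop: refine each boundary patch and write it back."""
    out = mask.copy()
    for sl, img_patch, mask_patch in extract_patches(image, mask, size):
        out[sl] = refiner(img_patch, mask_patch)  # respective refined mask patch
    return out

image = np.arange(36).reshape(6, 6)
mask = np.zeros((6, 6), dtype=np.uint8)
mask[2:5, 2:5] = 1  # a 3x3 instance: 8 boundary pixels around 1 interior pixel

patches = list(extract_patches(image, mask))
refined = refine_boundary(image, mask, refiner=lambda ip, mp: mp)  # identity stub
```

In practice the refiner would be a small segmentation network, and overlapping patches could be blended (e.g., averaged, as in claim 25) rather than overwritten.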
For claim 24, Price discloses that the generating of the refined mask patch for each of the set of image patches is further based on at least a part of a second instance mask identifying a second instance adjacent to the instance in the image (see, e.g., par. 62, which teaches predicting boundaries for objects with inner and outer contours, such as a donut-shaped object).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Price in view of U.S. Patent Application Publication No. 2013/0121577 to Wang et al. (hereinafter Wang).

For claim 17, while Price teaches obtaining image patches along the contour of the mask, it does not explicitly teach obtaining a plurality of image patches from the image by sliding a window along the boundary of the instance mask, and filtering out the set of image patches from the plurality of image patches based on an overlapping threshold. Wang, in the analogous art, teaches obtaining a plurality of image patches from the image by sliding a window along the boundary (see, e.g., pars. 79-86 and FIG. 12 of Wang, which teach obtaining image patches by sliding windows along the contour); and filtering out the set of image patches from the plurality of image patches based on an overlapping threshold (see, e.g., par. 86 and FIG. 12 of Wang, which teach centering each window at equally spaced sample points along the contour and setting a number of windows such that each point on the contour is covered by at least two windows). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Price to use sliding windows as taught by Wang, because doing so would yield the predictable result of consistent sampling along the contour (see MPEP 2143(I)(D)).

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Price in view of Wang, and further in view of U.S. Patent Application Publication No. 2021/0319327 to Poirier et al. (hereinafter Poirier).

For claim 18, while Price in view of Wang does not explicitly teach it, Poirier, in the analogous art, teaches that the filtering out of the set of image patches is based on a non-maximum suppression (NMS) algorithm, and that the overlapping threshold is an NMS eliminating threshold (see, e.g., par. 35 of Poirier, which teaches filtering out windows by applying an NMS algorithm). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Price in view of Wang to use NMS for filtering out the windows as taught by Poirier, because doing so would yield the predictable result of filtering out multiple windows covering the same image pixels (see par. 35 of Poirier and MPEP 2143(I)(D)).

Claims 28-29 are rejected under 35 U.S.C. 103 as being unpatentable over Price in view of U.S. Patent Application Publication No. 2017/0287137 to Lin et al. (hereinafter Lin).

For claim 28, while Price discloses refining the boundary of the instance mask (see, e.g., pars. 60-61 and FIG. 3C, which teach generating the strip recovery), it does not explicitly teach reassembling the respective refined mask patches into the instance mask. Lin, in the analogous art, teaches reassembling pixels in the segmentation mask by correcting/replacing false pixel identifications of the segmentation mask around an edge of the object over multiple iterations (see, e.g., pars. 61-70 and FIG. 3 of Lin). It would have been obvious to one of ordinary skill in the art to modify Price to reassemble/reintegrate the refined patches into the instance mask, as Lin reassembles/reincorporates the correctly identified pixels into its segmentation mask, because doing so would allow the instance mask to fit precisely to the edges of the object (see, e.g., pars. 61 and 63 of Lin).

For claim 29, while Price does not explicitly teach it, Lin, in the analogous art, teaches reassembling the respective refined mask patches into the instance mask by replacing a previous prediction for each pixel in the patches while pixels without refinement remain unchanged (see, e.g., pars. 61-70 and FIG. 3 of Lin, which teach reassembling pixels in the segmentation mask by correcting/replacing false pixel identifications around an edge of the object over multiple iterations; the examiner interprets the correctly identified pixels as the pixels without refinement that remain unchanged). It would have been obvious to one of ordinary skill in the art to modify Price accordingly for the same reasons set forth for claim 28 (see, e.g., pars. 61 and 63 of Lin).

Allowable Subject Matter

Claims 22 and 25 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
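The sliding-window sampling and overlap-threshold filtering that the examiner maps to claims 17 and 18 above can be sketched as a greedy 1-D NMS over windows along a contour. This is illustrative only; the window layout and threshold are assumptions, not Wang's or Poirier's actual procedure:

```python
def overlap_1d(a, b):
    """Fractional overlap of two 1-D intervals (start, end) along the contour."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return max(0, hi - lo) / min(a[1] - a[0], b[1] - b[0])

def nms_filter(windows, threshold=0.5):
    """Greedily keep windows, dropping any that overlap a kept one too much."""
    kept = []
    for w in windows:
        if all(overlap_1d(w, k) <= threshold for k in kept):
            kept.append(w)
    return kept

# Windows of length 4 placed every 2 contour points (50% pairwise overlap),
# so each contour point is covered by at least two windows, per the mapping.
windows = [(s, s + 4) for s in range(0, 12, 2)]
print(nms_filter(windows, threshold=0.5))  # all six windows survive at 50%
```

Lowering the threshold below 0.5 eliminates every other window, which is the "filtering out multiple windows covering the same image pixels" effect cited for claim 18.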
In regard to claim 22, when considered as a whole, the prior art of record fails to disclose or render obvious, alone or in combination: "the semantic segmentation network has one or more channels for an image patch, one channel for a mask patch, and 2 classes of output."

In regard to claim 25, when considered as a whole, the prior art of record fails to disclose or render obvious, alone or in combination: "the refining of the boundary of the instance mask includes: averaging values of overlapping pixels in the refined mask patches for adjacent image patches in the set of image patches; and determining whether a corresponding pixel in the instance mask identifies the instance based on a comparison between the averaged values and a threshold."

Additional Citations

The following table lists references that are relevant to the subject matter claimed and disclosed in this application. The references are not relied on by the examiner, but are provided to assist the applicant in responding to this Office action.

Citation: Hou et al. (2021/0158043)
Relevance: Describes systems and methods for panoptic image segmentation. One embodiment performs semantic segmentation and object detection on an input image, wherein the object detection generates a plurality of bounding boxes associated with an object in the input image; selects a query bounding box from among the plurality of bounding boxes; maps at least one of the bounding boxes other than the query bounding box to the query bounding box, based on similarity between the boxes, to generate a mask assignment for the object, the mask assignment defining a contour of the object; compares the mask assignment with results of the semantic segmentation to produce a refined mask assignment for the object; and outputs a panoptic segmentation of the input image that includes the refined mask assignment for the object.
Table 1

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See Table 1 and form 892.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WOO RHIM, whose telephone number is (571) 272-6560. The examiner can normally be reached Mon-Fri, 9:30 am - 6:00 pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henok Shiferaw, can be reached at (571) 272-4637. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/WOO C RHIM/
Examiner, Art Unit 2676

/Henok Shiferaw/
Supervisory Patent Examiner, Art Unit 2676

Prosecution Timeline

Aug 17, 2023 — Application Filed
Oct 22, 2025 — Non-Final Rejection (§102, §103)
Jan 23, 2026 — Response after Non-Final Action Filed
Feb 13, 2026 — Final Rejection (§102, §103) — current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601667
AUTOMATED TURF TESTING APPARATUS AND SYSTEM FOR USING SAME
Granted Apr 14, 2026 — 2y 5m to grant
Patent 12596134
DEVICE, MOVEMENT SPEED ESTIMATION SYSTEM, FEEDING CONTROL SYSTEM, MOVEMENT SPEED ESTIMATION METHOD, AND RECORDING MEDIUM IN WHICH MOVEMENT SPEED ESTIMATION PROGRAM IS STORED
Granted Apr 07, 2026 — 2y 5m to grant
Patent 12591997
ARRANGEMENT DEVICE AND METHOD
Granted Mar 31, 2026 — 2y 5m to grant
Patent 12586169
Mass Image Processing Apparatus and Method
Granted Mar 24, 2026 — 2y 5m to grant
Patent 12579607
DEMOSAICING METHOD AND APPARATUS FOR MOIRE REDUCTION
Granted Mar 17, 2026 — 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds
3-4
Grant Probability
80%
With Interview (+21.4%)
99%
Median Time to Grant
2y 11m
PTA Risk
Moderate
Based on 140 resolved cases by this examiner. Grant probability derived from career allow rate.
