Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 5-8, 10-12, 15, 19 and 30 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ronneberger (“U-Net: Convolutional Networks for Biomedical Image Segmentation”, University of Freiburg, 2015).
As for claim 1, Ronneberger teaches
A processor-implemented method comprising:
generating a first mask output from a first mask generation branch of an instance segmentation neural network, based on an input to the instance segmentation neural network (Fig 1, p4 ch 2, 1st level of contracting path, i.e. “first mask generation branch”, generating a 568x568 tile);
generating a second mask output from a second mask generation branch of the instance segmentation neural network, based on the generated first mask output from the first mask generation branch, the second mask generation branch having a lower resolution than the first mask generation branch (Fig 1, p4 ch 2, 2nd level of contracting path, i.e. “second mask generation branch”, generating the 280x280 size output, smaller than the prior 568x568 size tile of the 1st level output);
generating a combined mask output based on the generated first mask output from the first mask generation branch and the generated second mask output from the second mask generation branch (Fig 1 p4 ch 2, 1st level of expanding path, combining output of 1st level of contracting path and 2nd level of expanding path which in turn incorporates output from the 2nd level of contracting path; NOTE – for sake of clarity, both the contracting and expanding path levels in Fig 1 are counted from the top of the Figure throughout this Office Action);
generating an output of the instance segmentation neural network, based on the generated combined mask output (Fig 1, “output segmentation map”); and
taking one or more actions based on the generated output of the instance segmentation neural network (the nature of action is not specified by the claim, and understood broadly; for example Ronneberger ch 4 describes evaluating various scores of their segmentation method and compares with other methods, Table 1)
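For expository purposes only (no part of the rejection relies on it), the combining step mapped above to Ronneberger’s “up-conv” followed by “copy and crop” concatenation (Fig 1) can be sketched in Python. The helper names are hypothetical, and plain nested lists stand in for real feature tensors:

```python
def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling of a 2-D feature map (list of lists),
    standing in for the learned 'up-conv 2x2' of Fig 1."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in range(2)]
        out.append(wide)
        out.append(list(wide))
    return out

def center_crop(fmap, size):
    """Crop a square feature map to size x size about its centre,
    mirroring the 'crop' part of 'copy and crop'."""
    off = (len(fmap) - size) // 2
    return [row[off:off + size] for row in fmap[off:off + size]]

def combine(high_res, low_res):
    """Concatenate the high-resolution map with the upsampled low-resolution
    map channel-wise: each spatial position ends up carrying both values,
    analogous to U-Net's skip-connection concatenation."""
    up = upsample2x(low_res)
    cropped = center_crop(high_res, len(up))
    return [[(a, b) for a, b in zip(r1, r2)] for r1, r2 in zip(cropped, up)]
```

In the actual network the concatenated maps are then passed through further convolutions; the sketch only shows the spatial alignment and channel merge.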
As for independent claim 30, please see discussion of analogous claim 1 above.
As for claim 2, Ronneberger teaches
at least part of the instance segmentation neural network is a convolutional neural network (p4 ch 2 par 1 ln 3, “a convolutional network”)
As for claim 3, Ronneberger teaches
the generated first mask output from the first mask generation branch has a higher resolution than the generated second mask output from the second mask generation branch (Fig 1, p4 ch 2, 1st level of contracting path output has resolution 568x568 and 2nd level output 280x280)
As for claim 5, Ronneberger teaches
the generated second mask output from the second mask generation branch is based on at least one mask coefficient determined for the first mask generation branch of the instance segmentation neural network (Fig 1, pixels, i.e. “coefficients”, of the contracting path 2nd level output are calculated from the contracting path 1st level output)
As for claim 6, Ronneberger teaches
generating a third mask output from a third mask generation branch, wherein the third mask generation branch has a lower resolution than the second mask generation branch (Fig 1 contracting path 3rd level), and wherein the combined mask output is generated further based on the generated third mask output (Fig 1, expanding path 3rd level incorporates output from contracting path 3rd level, and propagates to 2nd and 1st levels which combine the corresponding outputs)
As for claim 7, Ronneberger teaches
generating a predicted bounding box around a target object in an image, prior to generating the output of the instance segmentation neural network, wherein the output includes the predicted bounding box around the target object and wherein the taking the one or more actions is further based on the predicted bounding box (Fig 1 cropping operators at contracting path level 1 and others produce a cropped image; the cropped image, or the crop area determined prior to cropping, can be called “a bounding box”; alternatively, Fig 2 left side, the tile over an image illustrates a different bounding box around a part of the input image)
As for claim 8, Ronneberger teaches
generating an output from a semantic segmentation branch of the instance segmentation neural network, prior to generating the output of the instance segmentation neural network, wherein the output from the semantic segmentation branch identifies portions of the input for which the instance segmentation neural network is to generate the combined mask output (Fig 1 contracting path level 1 and subsequent levels – cropping layer propagates to subsequent outputs)
As for claim 10, Ronneberger teaches
the generating the combined mask output comprises concatenating the generated first mask output from the first mask generation branch with the generated second mask output from the second mask generation branch (Fig 1 “copy and crop” concatenates with “up-conv” layers, p4 ch 2 par 1 ln 9-10 “concatenation with the correspondingly cropped feature map from the contracting path”)
As for claim 11, Ronneberger teaches
the generated combined mask output comprises a mask for each instance of a target object identified in an input image by the instance segmentation neural network (Fig 3 producing different instances of segmented cell objects in the image)
As for claim 12, Ronneberger teaches
the input to the instance segmentation neural network comprises an input image (Fig 1 input image tile) and wherein the generated output of the instance segmentation neural network comprises a boundary of at least one instance of a target object in the input image (Fig 3 illustrates segmented output images)
As for claim 15, Ronneberger teaches
the generating the output of the instance segmentation neural network is based on the instance segmentation neural network minimizing a classification loss, the classification loss determined according to Lcls = CE(Ypred, Ygt) (pg 4 ch 3 Training, eq 1, par 2 ln 2 “cross entropy loss function”), where:
Lcls is the classification loss (“E”),
CE() is a cross entropy function (the summing operator over w(x)*log(..)),
Ypred is a predicted classification (log(pℓ(x)(x)), derived at least in part from the softmax function above, where pk(x) is the approximated maximum [of possible classifications]), and
Ygt is a ground truth classification (w(x), see equation (2), par above, “weight map for each ground truth segmentation”)
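For illustration only, the pixel-wise softmax and weighted cross entropy of Ronneberger eq (1) can be sketched as follows. The function names are hypothetical, and the sign convention is chosen so that the returned value is a positive loss to be minimised:

```python
import math

def softmax(activations):
    """pk(x) = exp(ak(x)) / sum_k' exp(ak'(x)): the pixel-wise softmax
    preceding eq (1) in Ronneberger ch 3."""
    exps = [math.exp(a) for a in activations]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(activations, true_labels, weights):
    """Weighted cross entropy in the spirit of eq (1):
    E = sum_x w(x) * log(p_l(x)(x)), where l(x) is the true label of
    pixel x and w(x) its weight-map value. Each entry of `activations`
    is a list of per-class scores for one pixel."""
    e = 0.0
    for a, label, w in zip(activations, true_labels, weights):
        p = softmax(a)
        e += w * math.log(p[label])
    return -e  # negated so the training objective is minimised
```

A confident, correct prediction (high score on the true class) drives the loss toward zero; the weight map w(x) emphasises pixels such as cell borders.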
As for claim 19, Ronneberger teaches
the input to the instance segmentation neural network comprises an image for object detection (Fig 1 input image, Fig 3 example image with detectable objects)
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
A. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Ronneberger in view of Chandok (“Segmentation Models in PyTorch”, PyImageSearch 2021).
As for claim 4, Ronneberger teaches
upsampling the generated second mask output from the second mask generation branch [..], prior to generating the combined mask output (Fig 1, expanding path 2nd level incorporates output from contracting path 2nd level, i.e. “generated second mask output”; the data is upsampled in the “up-conv 2x2” operation indicated by the up-arrow between the expanding path 2nd level and 1st level)
Ronneberger does not teach, Chandok however, teaches
to a same resolution as the generated first mask output from the first mask generation branch (Chandok Fig 1, the 1st level of expanding path has resolution of 256x256, the same as the resolution at the 1st level of contracting path)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, in view of the teachings and suggestions of Ronneberger and the inferences a skilled artisan would reasonably draw therefrom, to provide an improved U-Net segmentation method in view of Chandok’s disclosure of a similar U-Net segmentation in which the output resolution is the same as the input resolution. See MPEP 2143; KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 417 (2007).
One would have been motivated to combine said teachings in order to produce an output resolution equivalent to the input resolution, as taught by Chandok. Considering the rationales provided in MPEP 2143 (A-F), the modification could be accomplished by replacing the upscaled output resolution of Ronneberger’s method with one matching the input resolution, according to the teaching of Chandok, to obtain the invention as specified in the claim. See MPEP 2143 (Rationale D). Further, a person of ordinary skill and creativity in the art would have recognized the interchangeability of output resolutions in U-Net segmentation methods. See MPEP 2143 (Rationale B). Additionally, the combination has a reasonable expectation of success, in that the modifications could be made by one of ordinary skill in the art using conventional and well-known (electrical & computer) engineering and/or programming techniques to yield predictable results. See MPEP 2143 (Rationale A).
B. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Ronneberger in view of Sinha (“YOLACT : Real Time Instance segmentation”, medium.com)
As for claim 9, Ronneberger doesn’t teach, Sinha however teaches
the instance segmentation neural network is based on a YOLACT (You Only Look At CoefficienTs) neural network (Sinha par 3 “What is different with YOLACT?”)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ronneberger and Sinha, as they both pertain to the art of image segmentation by neural networks. One of ordinary skill in the art would have been motivated to combine said teachings in order to increase the efficiency of the method, as taught by Sinha (“..YOLACT delivers decent results with a fast, one-stage instance segmentation model”).
C. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Ronneberger in view of Gomez (“Understanding Categorical Cross-Entropy Loss, Binary Cross-Entropy Loss, Softmax Loss, Logistic Loss, Focal Loss and all those confusing names”, Gombru.github.io, 2018)
As for claim 13, Ronneberger doesn’t teach, Gomez however teaches
the generating the output of the instance segmentation neural network is based on the instance segmentation neural network minimizing a mask loss, the mask loss determined according to: Lₘₐₛₖ = BCE(M, Mgt) (pg 8, ch Binary Cross-Entropy Loss, equation 1, “CE = …”), where:
Lₘₐₛₖ is the mask loss, (equation 1 “CE”)
BCE() represents a binary cross entropy function, (chapter title)
M represents a mask (s1: from p 3, par 3: “sj are the scores inferred by the net for each class”; when applied to image segmentation, s would be the labels indicating the inferred image segment labels, i.e. “masks”), and
Mgt represents a ground truth mask (t1: ground truth label; when applied to image segmentation, t would be the labels indicating the ground-truth image segment labels)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ronneberger and Gomez, as they both pertain to the art of training neural networks using loss functions. One of ordinary skill in the art would have been motivated to combine said teachings in order to improve the effectiveness of the model, as taught by Gomez (p 8, “it is used for multi-label classification, were [sic] the insight of an element belonging to a certain class should not influence the decision for another class”).
D. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Ronneberger in view of He (“Bounding Box Regression with Uncertainty for Accurate Object Detection”, CVPR 2019)
As for claim 14, Ronneberger doesn’t teach, He however teaches
the generating the output of the instance segmentation neural network is based on the instance segmentation neural network minimizing a box regression loss, the box regression loss determined according to Lbox = F(Bpred, Bgt) (ch 3.2 eq (5), DKL(..)), where:
Lbox is the box regression loss (eq (5) “Lreg”),
F() represents a loss function of the second mask generation branch (“DKL(..)”),
Bpred represents a predicted box regression (PΘ(x), see eq (2), probability distribution of the estimated localization), and
Bgt represents a ground truth box regression (“PD(x)”, see eq (3), ground truth bounding box).
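For illustration only, the KL-divergence form of box regression loss mapped above to He eq (5) can be sketched for one-dimensional Gaussian distributions over a box coordinate; the closed-form expression below is the standard Gaussian-to-Gaussian KL divergence, and the function name is hypothetical:

```python
import math

def kl_gaussian(mu_d, sigma_d, mu_t, sigma_t):
    """D_KL(P_D || P_Theta) for 1-D Gaussians, with P_D = N(mu_d, sigma_d^2)
    the ground-truth distribution and P_Theta = N(mu_t, sigma_t^2) the
    predicted distribution over a bounding-box coordinate:
    log(sigma_t/sigma_d) + (sigma_d^2 + (mu_d - mu_t)^2)/(2*sigma_t^2) - 1/2."""
    return (math.log(sigma_t / sigma_d)
            + (sigma_d ** 2 + (mu_d - mu_t) ** 2) / (2 * sigma_t ** 2)
            - 0.5)
```

The loss is zero when prediction and ground truth coincide and grows with the localization error, which is the behaviour a box regression loss requires.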
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ronneberger and He, as they both pertain to the art of training neural networks using loss functions. One of ordinary skill in the art would have been motivated to combine said teachings in order to reduce the amount of segmentation processing to an area of the input image corresponding to an object of interest.
E. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Ronneberger in view of Wang (“A novel deep learning-based 3D cell segmentation framework for future image-based disease detection”, Scientific Reports 2022).
As for claim 17, Ronneberger doesn’t teach, Wang however teaches
generating a depth prediction prior to generating the output of the instance segmentation neural network (Wang pg 3 Fig 1, Background, Membrane and Cell Foreground can be understood as different “depth”)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ronneberger and Wang, as they both pertain to the art of biomedical image segmentation by neural networks. One of ordinary skill in the art would have been motivated to combine said teachings in order to take advantage of 3-D captured biomedical images to produce more accurate segmentation results.
Allowable Subject Matter
Claims 16 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is the Examiner’s statement of reasons for indicating allowable subject matter:
The features recited in claims 16 and 18, in conjunction with the entire scope of the claims, are not found in the prior art. Specifically,
claim 16 recites an additional parameter W, where W is a weight on each pixel of the input image; and
claim 18 recites the deltaBCE function that receives a mask for selected predicted pixels and a mask for selected ground-truth pixels.
The previously cited prior art, particularly Gomez, teaches a Binary Cross Entropy function but does not teach the additional features discussed above.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK ROZ, whose telephone number is [not populated in this copy]. The examiner’s normal reachable hours are [not populated in this copy].
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Moyer, can be reached on (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARK ROZ/
Primary Examiner, Art Unit 2669