DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on April 3, 2024 and May 21, 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Stephens, Greg J., et al. "Dimensionality and dynamics in the behavior of C. elegans." PLoS computational biology 4.4 (2008): e1000028. (hereinafter Stephens), and further in view of L. Ke, Y. -W. Tai and C. -K. Tang, "Deep Occlusion-Aware Instance Segmentation with Overlapping BiLayers," 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 2021, pp. 4018-4027, doi: 10.1109/CVPR46437.2021.00401. (including supplemental material file) (hereinafter Ke).
Regarding independent claim 1, Stephens discloses A microscopic image processing method performed by a computer device (page 1, right column, “We use tracking microscopy with high spatial and temporal resolution to extract the two-dimensional shape of individual C. elegans from images of freely moving worms over long periods of time (Figure 1A; see Materials and Methods);” see also “Materials and Methods” section on page 8; page 8, “Images of worms captured by the worm tracker were processed using MATLAB (Mathworks, Natick, MA).”), the method comprising:
obtaining skeleton form information of the target object (Figure 1B, “(B) The curve through the center of the body. The black circle marks the head;” the skeleton form is read as the center line of the body);
performing motion analysis on the target object based on the skeleton form information to obtain a plurality of eigenvalues (page 1, right column, “We use tracking microscopy with high spatial and temporal resolution to extract the two-dimensional shape of individual C. elegans from images of freely moving worms over long periods of time (Figure 1A; see Materials and Methods);” page 2, left column, “Here, we address this imbalance in a domain rich enough to allow complex, natural behavior yet simple enough so that movements can be explored exhaustively: the motions of Caenorhabditis elegans freely crawling on an agar plate. From measurements of the worm’s curvature, we show that the space of natural worm postures is low dimensional and can be almost completely described by their projections along four principal ‘‘eigenworms.’’ The dynamics along these eigenworms offer both a quantitative characterization of classical worm movement such as forward crawling, reversals, and Omega-turns, and evidence of more subtle behaviors such as pause states at particular postures. We can partially construct equations of motion for this shape space, and within these dynamics we find a set of attractors that can be used as a rigorous definition of behavioral state. Our observations of C. elegans reveal a precise and complete language of motion and new aspects of worm behavior;” page 2, right column, “Figure 2A shows the covariance matrix, and its smooth structure is a strong hint that there will be only a small number of significant eigenvalues; this is shown explicitly in Figure 2B. Quantitatively, over 95% of the total variance in angle along the body is accounted for by just four eigenvalues. Note that the contribution of the variance is inhomogeneous along the body curve. For example the fourth eigenworm makes a small contribution to the variance overall, but captures a large percentage of the variance within 5% of the head and tail region (Figure 2D). 
Associated with each of the eigenvalues λm is an eigenvector um(s), sometimes referred to as a ‘principal component’ of the function θ(s). If only K = 4 eigenvalues are significant, then we can write the shape of the worm as a superposition of ‘eigenworm’ shapes” equation 3); and
determining an eigenvalue sequence comprising the plurality of eigenvalues as motion component information of the target object (page 2, right column, “We conclude that our four eigenworms provide an effective, low dimensional coordinate system within which to describe C. elegans motor behavior.”).
Stephens fails to explicitly disclose the remaining limitations as recited. However, Ke discloses extracting an instance image of a target object from a microscopic image (NOTE: Applicant’s specification defines the instance image in paragraph 0058: “501: The terminal performs instance segmentation on a microscopic image to obtain an instance image, the instance image including a target object in the microscopic image.” Figures 2 and 3 of Ke show the object images obtained from the original images (i.e., segmented)).
Stephens is directed toward tracking worm movement through image analysis of worms on an agar plate (abstract; page 1, left column; Figure 1). Ke is directed toward “Segmenting highly-overlapping objects is challenging, because typically no distinction is made between real object contours and occlusion boundaries. Unlike previous two-stage instance segmentation methods, we model image formation as composition of two overlapping layers, and propose Bilayer Convolutional Network (BCNet), where the top GCN layer detects the occluding objects (occluder) and the bottom GCN layer infers partially occluded instance (occludee)” (abstract). As can be easily seen by one of ordinary skill in the art before the effective filing date of the claimed invention, both Stephens and Ke are directed toward similar fields of endeavor of image analysis. Further, it is well known by one of ordinary skill in the art before the effective filing date of the invention that images often contain overlapping entities (or, in this case, worms); two worms could travel across each other and overlap tails, in which case, when determining movement features, the system would need to segment the worms for accurate processing. Overlapping of data can prevent accurate image analysis; thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of Ke in order to ensure accurate segmentation of the objects in the image, leading to an accurate output from the model.
Regarding dependent claim 2, the rejection of claim 1 is incorporated herein. Additionally, Stephens in the combination further discloses wherein the plurality of eigenvalues represent weighting coefficients for synthesizing the skeleton form of the target object in a plurality of preset motion states (page 2, right column, “We conclude that our four eigenworms provide an effective, low dimensional coordinate system within which to describe C. elegans motor behavior;” page 2, right column, “Figure 2A shows the covariance matrix, and its smooth structure is a strong hint that there will be only a small number of significant eigenvalues; this is shown explicitly in Figure 2B. Quantitatively, over 95% of the total variance in angle along the body is accounted for by just four eigenvalues. Note that the contribution of the variance is inhomogeneous along the body curve. For example the fourth eigenworm makes a small contribution to the variance overall, but captures a large percentage of the variance within 5% of the head and tail region (Figure 2D). Associated with each of the eigenvalues λm is an eigenvector um(s), sometimes referred to as a ‘principal component’ of the function θ(s). If only K = 4 eigenvalues are significant, then we can write the shape of the worm as a superposition of ‘eigenworm’ shapes” equation 3; page 2, left column, “Here, we address this imbalance in a domain rich enough to allow complex, natural behavior yet simple enough so that movements can be explored exhaustively: the motions of Caenorhabditis elegans freely crawling on an agar plate. 
From measurements of the worm’s curvature, we show that the space of natural worm postures is low dimensional and can be almost completely described by their projections along four principal ‘‘eigenworms.’’ The dynamics along these eigenworms offer both a quantitative characterization of classical worm movement such as forward crawling, reversals, and Omega-turns, and evidence of more subtle behaviors such as pause states at particular postures. We can partially construct equations of motion for this shape space, and within these dynamics we find a set of attractors that can be used as a rigorous definition of behavioral state. Our observations of C. elegans reveal a precise and complete language of motion and new aspects of worm behavior”).
Regarding dependent claim 3, the rejection of claim 1 is incorporated herein. Additionally, Ke in the combination further discloses wherein the instance image comprises a contour image and a mask image of the target object (Figure 4, “The bottom row visualizes squared heatmap of contour and mask predictions by the two GCN layers for the occluder and occludee in the same ROI region specified by the red bounding box, which also makes the final segmentation result of BCNet more explainable than previous methods”); and
the extracting an instance image of a target object from a microscopic image comprises:
determining a region of interest ROI comprising the target object from the microscopic image (Figure 4, “in the same ROI region specified by the red bounding box”); and
performing instance segmentation on the ROI to obtain the contour image and the mask image of the target object (Figure 4, “The bottom row visualizes squared heatmap of contour and mask predictions by the two GCN layers for the occluder and occludee in the same ROI region specified by the red bounding box, which also makes the final segmentation result of BCNet more explainable than previous methods”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of Ke to ensure both contours and masks of the images are generated, allowing a user to confirm the two outputs agree and making error detection simpler, as well as to set the ROI in the image so only that region is processed into contours and masks, limiting the processing needs and cost.
Regarding dependent claim 4, the rejection of claim 3 is incorporated herein. Additionally, Ke in the combination further discloses wherein when there are a plurality of target objects in the microscopic image, and the ROI comprises the plurality of target objects that overlap each other (page 2, “Our proposed method is robust enough to deal with various occlusion cases, such as highly overlapping zebras and human hands;” see also Figure 4 and 5 showing overlapping objects), the performing instance segmentation on the ROI to obtain the contour image and the mask image of the target object comprises:
determining a ROI candidate frame based on position information of the ROI, a region selected by the ROI candidate frame comprising the ROI (Figure 2, element j determining bounding boxes for the ROIs; position is read as bounding box location);
determining a local image feature of the ROI from a global image feature of the microscopic image, the local image feature representing a feature of the region selected by the ROI candidate frame in the global image feature (Figure 4, feature map and ROI features used for further processing); and
inputting the local image feature into a bilayer instance segmentation model, to process the local image feature through the bilayer instance segmentation model (abstract “Bilayer Convolutional Network (BCNet), where the top GCN layer detects the occluding objects (occluder) and the bottom GCN layer infers partially occluded instance (occludee).”), and output respective contour images and mask images of the plurality of target objects in the ROI (Figure 7, “The bottom row visualizes squared heatmap of contour and mask predictions by the two GCN layers for the occluder and occludee in the same ROI region specified by the red bounding box, which also makes the final segmentation result of BCNet more explainable than previous methods. More qualitative results are available in the supplementary file.”), the bilayer instance segmentation model being used for respectively establishing layers for different objects to obtain an instance segmentation result of each object (Figure 1).
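As a purely illustrative aside (not part of the claim mapping or the record), the bilayer representation Ke describes can be contrasted with a single-layer label map using toy arrays: in a single label map, overlap pixels can belong to only one instance, whereas keeping each object's full mask on its own layer preserves the occluded region of the occludee. All names and shapes below are hypothetical.

```python
import numpy as np

# Two hypothetical overlapping instance masks on a small grid.
H, W = 6, 8
occluder = np.zeros((H, W), dtype=bool)
occludee = np.zeros((H, W), dtype=bool)
occluder[1:4, 2:6] = True   # top-layer object (fully visible)
occludee[2:6, 4:8] = True   # bottom-layer object (partially hidden)

# Single-layer labeling: each overlap pixel is assigned to one instance
# only, so the occludee's hidden pixels are lost.
single = np.where(occluder, 1, np.where(occludee, 2, 0))

# Bilayer output: both full (amodal) masks survive on separate layers,
# so the overlap region remains recoverable.
overlap = occluder & occludee
print(overlap.sum(), (single == 2).sum(), occludee.sum())
```

The printout shows that the occludee's labeled area in the single-layer map is smaller than its full mask by exactly the overlap area, which is the information the bilayer formulation retains.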
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of Ke to ensure accurate segmentation is performed for objects that overlap within an image. Without bilayer segmentation, areas of objects could be missing or inaccurate.
Regarding dependent claim 5, the rejection of claim 1 is incorporated herein. Additionally, Stephens in the combination further discloses wherein the obtaining skeleton form information of the target object from the instance image comprises:
to obtain a skeleton form image of the target object, the skeleton extraction model being used for predicting a skeleton form of a target object based on an instance image of the target object (Figure 1B, “(B) The curve through the center of the body. The black circle marks the head”);
recognizing a head endpoint (Figure 1B, “(B) The curve through the center of the body. The black circle marks the head”) and a tail endpoint in a skeleton form of the target object in the skeleton form image (Figure 1B, “(B) The curve through the center of the body;” the tail is read as the end of the center line; page 1, right column, “Variations in the thickness of the worm are small, so we describe the shape by a curve that passes through the center of the body (Figure 1B). We measure position along this curve (arc length) by the variable s, normalized so that s = 0 is the head and s = 1 is the tail.”); and
determining the skeleton form image, the head endpoint, and the tail endpoint as the skeleton form information (Figure 1B, “(B) The curve through the center of the body. The black circle marks the head;” the tail is read as the end of the center line).
As noted above, Stephens fails to explicitly disclose the operations as related to the instance image (such as, “inputting the instance image into a skeleton extraction model for any target object in a ROI”). However, Stephens does disclose determining a skeleton model based on an image, and Ke allows for determining the image as an instance image (NOTE: Applicant’s specification defines the instance image in paragraph 0058: “501: The terminal performs instance segmentation on a microscopic image to obtain an instance image, the instance image including a target object in the microscopic image.” Figures 2 and 3 of Ke show the object images obtained from the original images (i.e., segmented)). Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of Ke in order to be able to generate skeleton images based on other input image types, making the system more widely applicable.
Regarding dependent claim 6, the rejection of claim 1 is incorporated herein. Additionally, Stephens in the combination further discloses wherein the performing motion analysis on the target object based on the skeleton form information to obtain a plurality of eigenvalues comprises:
sampling the skeleton form of the target object based on the skeleton form information to obtain an eigenvector formed by respective skeleton tangential angles of a plurality of sampling points (Figure 1C, “(C) Distances along the curve (arclength s) are measured in normalized units, and we define the tangent t̂(s) and normal n̂(s) to the curve at each point. The tangent points in a direction θ(s), and variations in this angle correspond to the curvature κ(s) = dθ(s)/ds”), the skeleton tangential angle representing an angle between a tangent line corresponding to the sampling point as a tangent point and the horizontal line on the directed skeleton form from a head endpoint to a tail endpoint (Figure 1C, “(C) Distances along the curve (arclength s) are measured in normalized units, and we define the tangent t̂(s) and normal n̂(s) to the curve at each point;” page 1, right column, “We analyze the worm’s shapes in a way intrinsic to its own behavior, not to our arbitrary choice of coordinates (Figure 1). The intrinsic geometry of a curve in the plane is defined by the Frenet equations [14,15], dx(s)/ds = t̂(s) (1) and dt̂(s)/ds = κ(s)n̂(s) (2), where t̂(s) is the unit tangent vector to the curve, n̂(s) is the unit normal to the curve, and κ(s) is the scalar curvature.”);
separately sampling preset skeleton forms (read as the forms of the multiple worms) indicated by the plurality of preset motion states, to obtain respective preset eigenvectors of the plurality of preset motion states (page 2, left column, “From measurements of the worm’s curvature, we show that the space of natural worm postures is low dimensional and can be almost completely described by their projections along four principal ‘‘eigenworms.’’ The dynamics along these eigenworms offer both a quantitative characterization of classical worm movement such as forward crawling, reversals, and Omega-turns, and evidence of more subtle behaviors such as pause states at particular postures. We can partially construct equations of motion for this shape space, and within these dynamics we find a set of attractors that can be used as a rigorous definition of behavioral state.”); and
decomposing the eigenvector into a sum of products of the plurality of preset eigenvectors and the plurality of eigenvalues to obtain the plurality of eigenvalues (page 2, left column, “From measurements of the worm’s curvature, we show that the space of natural worm postures is low dimensional and can be almost completely described by their projections along four principal ‘‘eigenworms.’’ The dynamics along these eigenworms offer both a quantitative characterization of classical worm movement such as forward crawling, reversals, and Omega-turns, and evidence of more subtle behaviors such as pause states at particular postures. We can partially construct equations of motion for this shape space, and within these dynamics we find a set of attractors that can be used as a rigorous definition of behavioral state;” decomposing of eigenvectors into a sum of products is well known in the art to represent the fundamental components by simplifying the matrix).
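As a purely illustrative aside (not part of the claim mapping or the record), the decomposition described above — writing a sampled tangent-angle vector as a sum of products of preset eigenvectors and scalar eigenvalues — can be sketched as follows. All array sizes, the random construction, and the variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical preset eigenvectors ("eigenworms"): K orthonormal shape
# modes sampled at S points along the body centerline (QR gives
# orthonormal columns).
S, K = 100, 4
basis, _ = np.linalg.qr(rng.standard_normal((S, K)))

# A sampled tangent-angle vector theta(s) built from the preset modes
# plus a small residual, mimicking ">95% of variance in four modes".
true_coeffs = np.array([2.0, -1.0, 0.5, 0.1])
theta = basis @ true_coeffs + 0.01 * rng.standard_normal(S)

# Decompose: each eigenvalue (weighting coefficient) is the projection
# of theta onto the corresponding preset eigenvector, so that
# theta(s) ≈ sum_m a_m * u_m(s).
coeffs = basis.T @ theta
reconstruction = basis @ coeffs
```

Because the preset eigenvectors are orthonormal, the projection recovers the weighting coefficients up to the small residual, and the superposition of the four modes reconstructs the sampled shape.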
Regarding dependent claim 7, the rejection of claim 6 is incorporated herein. Additionally, Stephens in the combination further discloses wherein the method further comprises:
determining a preset motion state corresponding to an eigenvalue in a top target position in the descending order as a motion principal component (page 2, right column, “Associated with each of the eigenvalues λm is an eigenvector um(s), sometimes referred to as a ‘principal component’ of the function θ(s).”); and
analyzing motion of the target object in an observation period based on the motion principal component (page 2, right column, “Associated with each of the eigenvalues λm is an eigenvector um(s), sometimes referred to as a ‘principal component’ of the function θ(s). If only K = 4 eigenvalues are significant, then we can write the shape of the worm as a superposition of ‘eigenworm’ shapes,”), to obtain a kinematic feature of the target object in the observation period (page 2, right column, “If only K = 4 eigenvalues are significant, then we can write the shape of the worm as a superposition of ‘eigenworm’ shapes,”).
Both Stephens and Ke in the combination fail to explicitly disclose sorting the plurality of eigenvalues in the eigenvalue sequence in a descending order. However, Stephens does disclose determining the principal components associated with the values. Determining a principal component is well known to be a measure of importance; thus, sorting the eigenvalues in a descending order is read as sorting them based on importance. Further, those that are most important are read as the principal components, of which Stephens determined there were four. Thus, though the sorting is not explicitly stated, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to sort the data prior to determining the principal components, making the comparisons easier.
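As a purely illustrative aside (not part of the claim mapping or the record), the sorting rationale above can be sketched numerically: eigenvalues of a covariance matrix measure the variance carried by each mode, and sorting them in descending order puts the principal components first. The data, dimensions, and variable names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observations: tangent angles at 10 body points over 500
# frames, with four directions of dominant variance (standing in for
# the four eigenworms).
X = rng.standard_normal((500, 10))
X[:, :4] *= np.array([10.0, 8.0, 6.0, 4.0])

cov = np.cov(X, rowvar=False)
evals, evecs = np.linalg.eigh(cov)   # eigh returns eigenvalues ascending

# Sort in descending order so the most important modes come first.
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

# The top-K sorted eigenvalues identify the principal components.
K = 4
explained = evals[:K].sum() / evals.sum()
```

With the sorted sequence, selecting the principal components reduces to taking the first K entries, which is the ease-of-comparison point made above.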
Regarding independent claim 8, the rejection of claim 1 applies directly. Additionally, Stephens discloses A computer device (page 1, right column, “We use tracking microscopy with high spatial and temporal resolution to extract the two-dimensional shape of individual C. elegans from images of freely moving worms over long periods of time (Figure 1A; see Materials and Methods);” see also “Materials and Methods” section on page 8; page 8, “Images of worms captured by the worm tracker were processed using MATLAB (Mathworks, Natick, MA).”), comprising one or more processors and one or more memories, the one or more memories storing at least one computer program, and the at least one computer program being loaded and executed by the one or more processors and causing the computer device to implement a microscopic image processing method (page 8, “Images of worms captured by the worm tracker were processed using MATLAB (Mathworks, Natick, MA);” running MATLAB requires a computer, and executing a program requires both memory storing the program and one or more processors) including:
obtaining skeleton form information of the target object (Figure 1B, “(B) The curve through the center of the body. The black circle marks the head;” the skeleton form is read as the center line of the body);
performing motion analysis on the target object based on the skeleton form information to obtain a plurality of eigenvalues (page 1, right column, “We use tracking microscopy with high spatial and temporal resolution to extract the two-dimensional shape of individual C. elegans from images of freely moving worms over long periods of time (Figure 1A; see Materials and Methods);” page 2, left column, “Here, we address this imbalance in a domain rich enough to allow complex, natural behavior yet simple enough so that movements can be explored exhaustively: the motions of Caenorhabditis elegans freely crawling on an agar plate. From measurements of the worm’s curvature, we show that the space of natural worm postures is low dimensional and can be almost completely described by their projections along four principal ‘‘eigenworms.’’ The dynamics along these eigenworms offer both a quantitative characterization of classical worm movement such as forward crawling, reversals, and Omega-turns, and evidence of more subtle behaviors such as pause states at particular postures. We can partially construct equations of motion for this shape space, and within these dynamics we find a set of attractors that can be used as a rigorous definition of behavioral state. Our observations of C. elegans reveal a precise and complete language of motion and new aspects of worm behavior;” page 2, right column, “Figure 2A shows the covariance matrix, and its smooth structure is a strong hint that there will be only a small number of significant eigenvalues; this is shown explicitly in Figure 2B. Quantitatively, over 95% of the total variance in angle along the body is accounted for by just four eigenvalues. Note that the contribution of the variance is inhomogeneous along the body curve. For example the fourth eigenworm makes a small contribution to the variance overall, but captures a large percentage of the variance within 5% of the head and tail region (Figure 2D). 
Associated with each of the eigenvalues λm is an eigenvector um(s), sometimes referred to as a ‘principal component’ of the function θ(s). If only K = 4 eigenvalues are significant, then we can write the shape of the worm as a superposition of ‘eigenworm’ shapes” equation 3); and
determining an eigenvalue sequence comprising the plurality of eigenvalues as motion component information of the target object (page 2, right column, “We conclude that our four eigenworms provide an effective, low dimensional coordinate system within which to describe C. elegans motor behavior.”).
Stephens fails to explicitly disclose the remaining limitations as recited. However, Ke discloses extracting an instance image of a target object from a microscopic image (NOTE: Applicant’s specification defines the instance image in paragraph 0058: “501: The terminal performs instance segmentation on a microscopic image to obtain an instance image, the instance image including a target object in the microscopic image.” Figures 2 and 3 of Ke show the object images obtained from the original images (i.e., segmented)).
Stephens is directed toward tracking worm movement through image analysis of worms on an agar plate (abstract; page 1, left column; Figure 1). Ke is directed toward “Segmenting highly-overlapping objects is challenging, because typically no distinction is made between real object contours and occlusion boundaries. Unlike previous two-stage instance segmentation methods, we model image formation as composition of two overlapping layers, and propose Bilayer Convolutional Network (BCNet), where the top GCN layer detects the occluding objects (occluder) and the bottom GCN layer infers partially occluded instance (occludee)” (abstract). As can be easily seen by one of ordinary skill in the art before the effective filing date of the claimed invention, both Stephens and Ke are directed toward similar fields of endeavor of image analysis. Further, it is well known by one of ordinary skill in the art before the effective filing date of the invention that images often contain overlapping entities (or, in this case, worms); two worms could travel across each other and overlap tails, in which case, when determining movement features, the system would need to segment the worms for accurate processing. Overlapping of data can prevent accurate image analysis; thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of Ke in order to ensure accurate segmentation of the objects in the image, leading to an accurate output from the model.
Regarding dependent claim 9, the rejection of claim 8 is incorporated herein. Additionally, Stephens in the combination further discloses wherein the plurality of eigenvalues represent weighting coefficients for synthesizing the skeleton form of the target object in a plurality of preset motion states (page 2, right column, “We conclude that our four eigenworms provide an effective, low dimensional coordinate system within which to describe C. elegans motor behavior;” page 2, right column, “Figure 2A shows the covariance matrix, and its smooth structure is a strong hint that there will be only a small number of significant eigenvalues; this is shown explicitly in Figure 2B. Quantitatively, over 95% of the total variance in angle along the body is accounted for by just four eigenvalues. Note that the contribution of the variance is inhomogeneous along the body curve. For example the fourth eigenworm makes a small contribution to the variance overall, but captures a large percentage of the variance within 5% of the head and tail region (Figure 2D). Associated with each of the eigenvalues λm is an eigenvector um(s), sometimes referred to as a ‘principal component’ of the function θ(s). If only K = 4 eigenvalues are significant, then we can write the shape of the worm as a superposition of ‘eigenworm’ shapes” equation 3; page 2, left column, “Here, we address this imbalance in a domain rich enough to allow complex, natural behavior yet simple enough so that movements can be explored exhaustively: the motions of Caenorhabditis elegans freely crawling on an agar plate. 
From measurements of the worm’s curvature, we show that the space of natural worm postures is low dimensional and can be almost completely described by their projections along four principal ‘‘eigenworms.’’ The dynamics along these eigenworms offer both a quantitative characterization of classical worm movement such as forward crawling, reversals, and Omega-turns, and evidence of more subtle behaviors such as pause states at particular postures. We can partially construct equations of motion for this shape space, and within these dynamics we find a set of attractors that can be used as a rigorous definition of behavioral state. Our observations of C. elegans reveal a precise and complete language of motion and new aspects of worm behavior”).
Regarding dependent claim 10, the rejection of claim 8 is incorporated herein. Additionally, Ke in the combination further discloses wherein the instance image comprises a contour image and a mask image of the target object (Figure 4, “The bottom row visualizes squared heatmap of contour and mask predictions by the two GCN layers for the occluder and occludee in the same ROI region specified by the red bounding box, which also makes the final segmentation result of BCNet more explainable than previous methods”); and
the extracting an instance image of a target object from a microscopic image comprises:
determining a region of interest ROI comprising the target object from the microscopic image (Figure 4, “in the same ROI region specified by the red bounding box”); and
performing instance segmentation on the ROI to obtain the contour image and the mask image of the target object (Figure 4, “The bottom row visualizes squared heatmap of contour and mask predictions by the two GCN layers for the occluder and occludee in the same ROI region specified by the red bounding box, which also makes the final segmentation result of BCNet more explainable than previous methods”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of Ke to ensure both contours and masks of the images are generated, allowing a user to confirm the two outputs agree and making error detection simpler, as well as to set the ROI in the image so only that region is processed into contours and masks, limiting the processing needs and cost.
Regarding dependent claim 11, the rejection of claim 10 is incorporated herein. Additionally, Ke in the combination further discloses wherein when there are a plurality of target objects in the microscopic image, and the ROI comprises the plurality of target objects that overlap each other (page 2, “Our proposed method is robust enough to deal with various occlusion cases, such as highly overlapping zebras and human hands;” see also Figure 4 and 5 showing overlapping objects), the performing instance segmentation on the ROI to obtain the contour image and the mask image of the target object comprises:
determining a ROI candidate frame based on position information of the ROI, a region selected by the ROI candidate frame comprising the ROI (Figure 2, element j determining bounding boxes for the ROIs; position is read as bounding box location);
determining a local image feature of the ROI from a global image feature of the microscopic image, the local image feature representing a feature of the region selected by the ROI candidate frame in the global image feature (Figure 4, feature map and ROI features used for further processing); and
inputting the local image feature into a bilayer instance segmentation model, to process the local image feature through the bilayer instance segmentation model (abstract “Bilayer Convolutional Network (BCNet), where the top GCN layer detects the occluding objects (occluder) and the bottom GCN layer infers partially occluded instance (occludee).”), and output respective contour images and mask images of the plurality of target objects in the ROI (Figure 7, “The bottom row visualizes squared heatmap of contour and mask predictions by the two GCN layers for the occluder and occludee in the same ROI region specified by the red bounding box, which also makes the final segmentation result of BCNet more explainable than previous methods. More qualitative results are available in the supplementary file.”), the bilayer instance segmentation model being used for respectively establishing layers for different objects to obtain an instance segmentation result of each object (Figure 1).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of Ke to ensure accurate segmentation is performed for objects that overlap within an image. Without bilayer segmentation, areas of objects could be missing or inaccurate.
Regarding dependent claim 12, the rejection of claim 8 is incorporated herein. Additionally, Stephens in the combination further discloses wherein the obtaining skeleton form information of the target object from the instance image comprises:
obtaining a skeleton form image of the target object, the skeleton extraction model being used for predicting a skeleton form of a target object based on an instance image of the target object (Figure 1B, “(B) The curve through the center of the body. The black circle marks the head”);
recognizing a head endpoint (Figure 1B, “(B) The curve through the center of the body. The black circle marks the head”) and a tail endpoint in a skeleton form of the target object in the skeleton form image (Figure 1B, “(B) The curve through the center of the body;” the tail is read as the end of the center line; page 1, right column, “Variations in the thickness of the worm are small, so we describe the shape by a curve that passes through the center of the body (Figure 1B). We measure position along this curve (arc length) by the variable s, normalized so that s = 0 is the head and s = 1 is the tail.”); and
determining the skeleton form image, the head endpoint, and the tail endpoint as the skeleton form information (Figure 1B, “(B) The curve through the center of the body. The black circle marks the head;” the tail is read as the end of the center line).
As noted above, Stephens fails to explicitly disclose operations as related to the instance image (such as, “inputting the instance image into a skeleton extraction model for any target object in a ROI”). However, Stephens does disclose determining a skeleton model based on an image, and Ke allows for determining the image as an instance image (NOTE: applicant's specification defines the instance image as “501: The terminal performs instance segmentation on a microscopic image to obtain an instance image, the instance image including a target object in the microscopic image (paragraph 0058).” Figures 2 and 3 show the object images obtained from the original images (i.e., segmented)). Thus, it would have been obvious to a person having ordinary skill in the art to incorporate the teaching of Ke in order to be able to generate skeleton images based on other input image types, making the system more widely applicable.
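For illustration only (not part of the claim mapping), the endpoint recognition discussed above, performed on a one-pixel-wide skeleton of the kind Stephens describes, can be sketched as follows. All names and the toy skeleton are hypothetical; distinguishing the head endpoint from the tail endpoint would require additional cues (Stephens marks the head in Figure 1B), so this sketch only finds the two endpoint candidates.

```python
import numpy as np

def skeleton_endpoints(skel: np.ndarray) -> list[tuple[int, int]]:
    """Endpoints of a one-pixel-wide skeleton: pixels with exactly
    one 8-connected skeleton neighbor (head/tail candidates)."""
    padded = np.pad(skel.astype(bool), 1, constant_values=False)
    endpoints = []
    for r, c in zip(*np.nonzero(skel)):
        # 3x3 neighborhood in the padded array, centered on (r, c)
        window = padded[r:r + 3, c:c + 3]
        if window.sum() == 2:  # the pixel itself plus exactly one neighbor
            endpoints.append((int(r), int(c)))
    return endpoints

# Toy horizontal centerline from (2, 1) to (2, 5): two endpoints.
skel = np.zeros((5, 7), dtype=bool)
skel[2, 1:6] = True
ends = skeleton_endpoints(skel)
```

A head/tail assignment could then be made from an external cue such as the marked head pixel, leaving the other endpoint as the tail.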
Regarding dependent claim 13, the rejection of claim 8 is incorporated herein. Additionally, Stephens in the combination further discloses wherein the performing motion analysis on the target object based on the skeleton form information to obtain a plurality of eigenvalues comprises:
sampling the skeleton form of the target object based on the skeleton form information to obtain an eigenvector formed by respective skeleton tangential angles of a plurality of sampling points (Figure 1C, “(C) Distances along the curve (arclength s) are measured in normalized units, and we define the tangent ^t(s) and normal ^n(s) to the curve at each point. The tangent points in a direction h(s), and variations in this angle correspond to the curvature k(s) = dh(s)/ ds”), the skeleton tangential angle representing an angle between a tangent line corresponding to the sampling point as a tangent point and the horizontal line on the directed skeleton form from a head endpoint to a tail endpoint (Figure 1C, “(C) Distances along the curve (arclength s) are measured in normalized units, and we define the tangent ^t(s) and normal ^n(s) to the curve at each point;” page 1, right column, “We analyze the worm’s shapes in a way intrinsic to its own behavior, not to our arbitrary choice of coordinates (Figure 1). The intrinsic geometry of a curve in the plane is defined by the Frenet equations [14,15], dx(s) ds ~^t(s) ð1Þ dx(s) ds ~k(s)^n(s) ð2Þ where ^t(s) is the unit tangent vector to the curve, ^n(s) is the unit normal to the curve, and k(s) is the scalar curvature.”);
separately sampling preset skeleton forms (read as the forms of the multiple worms) indicated by the plurality of preset motion states, to obtain respective preset eigenvectors of the plurality of preset motion states (page 2, left column, “From measurements of the worm’s curvature, we show that the space of natural worm postures is low dimensional and can be almost completely described by their projections along four principal ‘‘eigenworms.’’ The dynamics along these eigenworms offer both a quantitative characterization of classical worm movement such as forward crawling, reversals, and Omegaturns, and evidence of more subtle behaviors such as pause states at particular postures. We can partially construct equations of motion for this shape space, and within these dynamics we find a set of attractors that can be used as a rigorous definition of behavioral state.”); and
decomposing the eigenvector into a sum of products of the plurality of preset eigenvectors and the plurality of eigenvalues to obtain the plurality of eigenvalues (page 2, left column, “From measurements of the worm’s curvature, we show that the space of natural worm postures is low dimensional and can be almost completely described by their projections along four principal ‘‘eigenworms.’’ The dynamics along these eigenworms offer both a quantitative characterization of classical worm movement such as forward crawling, reversals, and Omegaturns, and evidence of more subtle behaviors such as pause states at particular postures. We can partially construct equations of motion for this shape space, and within these dynamics we find a set of attractors that can be used as a rigorous definition of behavioral state;” decomposing of eigenvectors into a sum of products is well known in the art to represent the fundamental components by simplifying the matrix).
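For illustration only (not part of the claim mapping), the decomposition recited above, an eigenvector expressed as a sum of products of preset eigenvectors and eigenvalues, corresponds to projecting a tangent-angle vector onto an orthonormal basis, as in Stephens' eigenworm analysis. The sketch below is illustrative; the basis, dimensions, and weights are hypothetical, and the claim's "eigenvalues" are treated as projection coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical orthonormal "eigenworm" basis: 4 preset eigenvectors
# over 100 sampling points along the skeleton (names are illustrative).
basis, _ = np.linalg.qr(rng.standard_normal((100, 4)))  # orthonormal columns

# A tangent-angle vector lying in the span of the preset eigenvectors.
true_weights = np.array([2.0, -1.0, 0.5, 0.1])
theta = basis @ true_weights

# Projection coefficients (the claim's "eigenvalues"): decompose theta
# into a sum of products of the preset eigenvectors and coefficients.
coeffs = basis.T @ theta
# coeffs recovers true_weights because the basis columns are orthonormal
reconstruction = basis @ coeffs
```

Because the basis is orthonormal, the reconstruction equals the original tangent-angle vector up to rounding, mirroring the superposition of eigenworm shapes in Stephens' equation 3.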
Regarding dependent claim 14, the rejection of claim 13 is incorporated herein. Additionally, Stephens in the combination further discloses wherein the method further comprises:
determining a preset motion state corresponding to an eigenvalue in a top target position in the descending order as a motion principal component (page 2, right column “Associated with each of the eigenvalues lm is an eigenvector um(s), sometimes referred to as a ‘principal component’ of the function h(s).”); and
analyzing motion of the target object in an observation period based on the motion principal component (page 2, right column “Associated with each of the eigenvalues lm is an eigenvector um(s), sometimes referred to as a ‘principal component’ of the function h(s). If only K = 4 eigenvalues are significant, then we can write the shape of the worm as a superposition of ‘eigenworm’ shapes,”), to obtain a kinematic feature of the target object in the observation period (page 2, right column, “if only K = 4 eigenvalues are significant, then we can write the shape of the worm as a superposition of ‘eigenworm’ shapes,”).
Both Stephens and Ke in the combination fail to explicitly disclose sorting the plurality of eigenvalues in the eigenvalue sequence in a descending order. However, Stephens does disclose determining the principal component associated with the values. Determining this principal component is well known to be a measure of importance; thus, sorting them in a descending order is read as sorting them based on importance. Further, those that are most important are read as the principal components, of which Stephens determined there were four. Thus, though the sorting is not explicitly stated, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to sort the data prior to determining the principal components, making the comparisons easier.
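For illustration only (not part of the claim mapping), the sorting rationale above can be sketched as follows: eigenvalues of a covariance matrix sorted in descending order, with the top K retained as principal components (K = 4, as in Stephens). The covariance matrix and dimensions are hypothetical; numpy's `eigh` returns eigenvalues in ascending order, so the descending sort is explicit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical covariance of tangent angles at 20 sampling points,
# standing in for a covariance matrix like Stephens' Figure 2A.
samples = rng.standard_normal((500, 20))
cov = np.cov(samples, rowvar=False)

# eigh returns eigenvalues in ascending order; sort descending so the
# top entries correspond to the most significant components.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Retain the K most significant components (K = 4, as in Stephens).
K = 4
principal = eigvecs[:, :K]
```

The preset motion states in the top target positions of the descending order would then correspond to the columns of `principal`.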
Regarding independent claim 15, the rejection of claim 1 applies directly. Additionally, Stephens discloses A non-transitory computer-readable storage medium, storing at least one computer program, the at least one computer program being loaded and executed by a processor of a computer device and causing the computer device to implement a microscopic image processing method (page 1, right column, “We use tracking microscopy with high spatial and temporal resolution to extract the two-dimensional shape of individual C. elegans from images of freely moving worms over long periods of time (Figure 1A; see Materials and Methods);” see also “Materials and Methods” section on page 8; page 8, “Images of worms captured by the worm tracker were processed using MATLAB (Mathworks, Natick, MA);” in order to run MATLAB a computer is needed; further, executing a program requires both memory and the program itself) including:
obtaining skeleton form information of the target object (Figure 1B, “(B) The curve through the center of the body. The black circle marks the head;” the skeleton form is read as the center line of the body);
performing motion analysis on the target object based on the skeleton form information to obtain a plurality of eigenvalues (page 1, right column, “We use tracking microscopy with high spatial and temporal resolution to extract the two-dimensional shape of individual C. elegans from images of freely moving worms over long periods of time (Figure 1A; see Materials and Methods);” page 2, left column, “Here, we address this imbalance in a domain rich enough to allow complex, natural behavior yet simple enough so that movements can be explored exhaustively: the motions of Caenorhabditis elegans freely crawling on an agar plate. From measurements of the worm’s curvature, we show that the space of natural worm postures is low dimensional and can be almost completely described by their projections along four principal ‘‘eigenworms.’’ The dynamics along these eigenworms offer both a quantitative characterization of classical worm movement such as forward crawling, reversals, and Omegaturns, and evidence of more subtle behaviors such as pause states at particular postures. We can partially construct equations of motion for this shape space, and within these dynamics we find a set of attractors that can be used as a rigorous definition of behavioral state. Our observations of C. elegans reveal a precise and complete language of motion and new aspects of worm behavior;” page 2, right column, “Figure 2A shows the covariance matrix, and its smooth structure is a strong hint that there will be only a small number of significant eigenvalues; this is shown explicitly in Figure 2B. Quantitatively, over 95% of the total variance in angle along the body is accounted for by just four eigenvalues. Note that the contribution of the variance is inhomogeneous along the body curve. For example the fourth eigenworm makes a small contribution to the variance overall, but captures a large percentage of the variance within 5% of the head and tail region (Figure 2D). 
Associated with each of the eigenvalues lm is an eigenvector um(s), sometimes referred to as a ‘principal component’ of the function h(s). If only K = 4 eigenvalues are significant, then we can write the shape of the worm as a superposition of ‘eigenworm’ shapes” equation 3); and
determining an eigenvalue sequence comprising the plurality of eigenvalues as motion component information of the target object (page 2, right column, “We conclude that our four eigenworms provide an effective, low dimensional coordinate system within which to describe C. elegans motor behavior.”).
Stephens fails to explicitly disclose as further recited. However, Ke discloses extracting an instance image of a target object from a microscopic image (NOTE: applicant's specification defines the instance image as “501: The terminal performs instance segmentation on a microscopic image to obtain an instance image, the instance image including a target object in the microscopic image (paragraph 0058).” Figures 2 and 3 show the object images obtained from the original images (i.e., segmented)).
Stephens is directed toward tracking worm movement through image analysis of worms on an agar plate (abstract; page 1, left column; Figure 1). Ke is directed toward “Segmenting highly-overlapping objects is challenging, because typically no distinction is made between real object contours and occlusion boundaries. Unlike previous two-stage instance segmentation methods, we model image formation as composition of two overlapping layers, and propose Bilayer Convolutional Network (BCNet), where the top GCN layer detects the occluding objects (occluder) and the bottom GCN layer infers partially occluded instance (occludee)” (abstract). As can be easily seen by one of ordinary skill in the art before the effective filing date of the claimed invention, both Stephens and Ke are directed toward the similar endeavor of image analysis. Further, it is well known by one of ordinary skill in the art before the effective filing date of the invention that images often contain overlapping entities (or in this case worms); two worms could travel across each other and overlap tails, and when determining movement features, the system would need to segment the worms for accurate processing. Overlapping of data can prevent accurate image analysis; thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of Ke in order to ensure accurate segmentation of the objects in the image leading to an accurate output from the model.
Regarding dependent claim 16, the rejection of claim 15 is incorporated herein. Additionally, Stephens in the combination further discloses wherein the plurality of eigenvalues represent weighting coefficients for synthesizing the skeleton form of the target object in a plurality of preset motion states (page 2, right column, “We conclude that our four eigenworms provide an effective, low dimensional coordinate system within which to describe C. elegans motor behavior;” page 2, right column, “Figure 2A shows the covariance matrix, and its smooth structure is a strong hint that there will be only a small number of significant eigenvalues; this is shown explicitly in Figure 2B. Quantitatively, over 95% of the total variance in angle along the body is accounted for by just four eigenvalues. Note that the contribution of the variance is inhomogeneous along the body curve. For example the fourth eigenworm makes a small contribution to the variance overall, but captures a large percentage of the variance within 5% of the head and tail region (Figure 2D). Associated with each of the eigenvalues lm is an eigenvector um(s), sometimes referred to as a ‘principal component’ of the function h(s). If only K = 4 eigenvalues are significant, then we can write the shape of the worm as a superposition of ‘eigenworm’ shapes” equation 3; page 2, left column, “Here, we address this imbalance in a domain rich enough to allow complex, natural behavior yet simple enough so that movements can be explored exhaustively: the motions of Caenorhabditis elegans freely crawling on an agar plate. 
From measurements of the worm’s curvature, we show that the space of natural worm postures is low dimensional and can be almost completely described by their projections along four principal ‘‘eigenworms.’’ The dynamics along these eigenworms offer both a quantitative characterization of classical worm movement such as forward crawling, reversals, and Omegaturns, and evidence of more subtle behaviors such as pause states at particular postures. We can partially construct equations of motion for this shape space, and within these dynamics we find a set of attractors that can be used as a rigorous definition of behavioral state. Our observations of C. elegans reveal a precise and complete language of motion and new aspects of worm behavior”).
Regarding dependent claim 17, the rejection of claim 15 is incorporated herein. Additionally, Ke in the combination further discloses wherein the instance image comprises a contour image and a mask image of the target object (Figure 4, “The bottom row visualizes squared heatmap of contour and mask predictions by the two GCN layers for the occluder and occludee in the same ROI region specified by the red bounding box, which also makes the final segmentation result of BCNet more explainable than previous methods”); and
the extracting an instance image of a target object from a microscopic image comprises:
determining a region of interest ROI comprising the target object from the microscopic image (Figure 4, “in the same ROI region specified by the red bounding box”); and
performing instance segmentation on the ROI to obtain the contour image and the mask image of the target object (Figure 4, “The bottom row visualizes squared heatmap of contour and mask predictions by the two GCN layers for the occluder and occludee in the same ROI region specified by the red bounding box, which also makes the final segmentation result of BCNet more explainable than previous methods”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to ensure both contours and masks of the images are generated allowing a user to confirm the two outputs agree, making error detection simpler as well as setting the ROI in the image so only that region is processed into contours and masks, limiting the processing needs and cost.
Regarding dependent claim 18, the rejection of claim 17 is incorporated herein. Additionally, Ke in the combination further discloses wherein when there are a plurality of target objects in the microscopic image, and the ROI comprises the plurality of target objects that overlap each other (page 2, “Our proposed method is robust enough to deal with various occlusion cases, such as highly overlapping zebras and human hands;” see also Figure 4 and 5 showing overlapping objects), the performing instance segmentation on the ROI to obtain the contour image and the mask image of the target object comprises:
determining a ROI candidate frame based on position information of the ROI, a region selected by the ROI candidate frame comprising the ROI (Figure 2, element j determining bounding boxes for the ROIs; position is read as bounding box location);
determining a local image feature of the ROI from a global image feature of the microscopic image, the local image feature representing a feature of the region selected by the ROI candidate frame in the global image feature (Figure 4, feature map and ROI features used for further processing); and
inputting the local image feature into a bilayer instance segmentation model, to process the local image feature through the bilayer instance segmentation model (abstract “Bilayer Convolutional Network (BCNet), where the top GCN layer detects the occluding objects (occluder) and the bottom GCN layer infers partially occluded instance (occludee).”), and output respective contour images and mask images of the plurality of target objects in the ROI (Figure 7, “The bottom row visualizes squared heatmap of contour and mask predictions by the two GCN layers for the occluder and occludee in the same ROI region specified by the red bounding box, which also makes the final segmentation result of BCNet more explainable than previous methods. More qualitative results are available in the supplementary file.”), the bilayer instance segmentation model being used for respectively establishing layers for different objects to obtain an instance segmentation result of each object (Figure 1).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of Ke to ensure accurate segmentation is performed for objects that overlap within an image. Without bilayer segmentation, areas of objects could be missing or inaccurate.
Regarding dependent claim 19, the rejection of claim 15 is incorporated herein. Additionally, Stephens in the combination further discloses wherein the performing motion analysis on the target object based on the skeleton form information to obtain a plurality of eigenvalues comprises:
sampling the skeleton form of the target object based on the skeleton form information to obtain an eigenvector formed by respective skeleton tangential angles of a plurality of sampling points (Figure 1C, “(C) Distances along the curve (arclength s) are measured in normalized units, and we define the tangent ^t(s) and normal ^n(s) to the curve at each point. The tangent points in a direction h(s), and variations in this angle correspond to the curvature k(s) = dh(s)/ ds”), the skeleton tangential angle representing an angle between a tangent line corresponding to the sampling point as a tangent point and the horizontal line on the directed skeleton form from a head endpoint to a tail endpoint (Figure 1C, “(C) Distances along the curve (arclength s) are measured in normalized units, and we define the tangent ^t(s) and normal ^n(s) to the curve at each point;” page 1, right column, “We analyze the worm’s shapes in a way intrinsic to its own behavior, not to our arbitrary choice of coordinates (Figure 1). The intrinsic geometry of a curve in the plane is defined by the Frenet equations [14,15], dx(s) ds ~^t(s) ð1Þ dx(s) ds ~k(s)^n(s) ð2Þ where ^t(s) is the unit tangent vector to the curve, ^n(s) is the unit normal to the curve, and k(s) is the scalar curvature.”);
separately sampling preset skeleton forms (read as the forms of the multiple worms) indicated by the plurality of preset motion states, to obtain respective preset eigenvectors of the plurality of preset motion states (page 2, left column, “From measurements of the worm’s curvature, we show that the space of natural worm postures is low dimensional and can be almost completely described by their projections along four principal ‘‘eigenworms.’’ The dynamics along these eigenworms offer both a quantitative characterization of classical worm movement such as forward crawling, reversals, and Omegaturns, and evidence of more subtle behaviors such as pause states at particular postures. We can partially construct equations of motion for this shape space, and within these dynamics we find a set of attractors that can be used as a rigorous definition of behavioral state.”); and
decomposing the eigenvector into a sum of products of the plurality of preset eigenvectors and the plurality of eigenvalues to obtain the plurality of eigenvalues (page 2, left column, “From measurements of the worm’s curvature, we show that the space of natural worm postures is low dimensional and can be almost completely described by their projections along four principal ‘‘eigenworms.’’ The dynamics along these eigenworms offer both a quantitative characterization of classical worm movement such as forward crawling, reversals, and Omegaturns, and evidence of more subtle behaviors such as pause states at particular postures. We can partially construct equations of motion for this shape space, and within these dynamics we find a set of attractors that can be used as a rigorous definition of behavioral state;” decomposing of eigenvectors into a sum of products is well known in the art to represent the fundamental components by simplifying the matrix).
Regarding dependent claim 20, the rejection of claim 19 is incorporated herein. Additionally, Stephens in the combination further discloses wherein the method further comprises:
determining a preset motion state corresponding to an eigenvalue in a top target position in the descending order as a motion principal component (page 2, right column “Associated with each of the eigenvalues lm is an eigenvector um(s), sometimes referred to as a ‘principal component’ of the function h(s).”); and
analyzing motion of the target object in an observation period based on the motion principal component (page 2, right column “Associated with each of the eigenvalues lm is an eigenvector um(s), sometimes referred to as a ‘principal component’ of the function h(s). If only K = 4 eigenvalues are significant, then we can write the shape of the worm as a superposition of ‘eigenworm’ shapes,”), to obtain a kinematic feature of the target object in the observation period (page 2, right column, “if only K = 4 eigenvalues are significant, then we can write the shape of the worm as a superposition of ‘eigenworm’ shapes,”).
Both Stephens and Ke in the combination fail to explicitly disclose sorting the plurality of eigenvalues in the eigenvalue sequence in a descending order. However, Stephens does disclose determining the principal component associated with the values. Determining this principal component is well known to be a measure of importance; thus, sorting them in a descending order is read as sorting them based on importance. Further, those that are most important are read as the principal components, of which Stephens determined there were four. Thus, though the sorting is not explicitly stated, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to sort the data prior to determining the principal components, making the comparisons easier.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
U.S. Patent No. 9,129,379 discloses, “A method and an apparatus for bilayer image segmentation are described” (abstract).
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Courtney J. Nelson whose telephone number is (571)272-3956. The examiner can normally be reached Monday - Friday 8:00 - 4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Villecco can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/COURTNEY JOAN NELSON/Primary Examiner, Art Unit 2661