DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Specification
Applicant is reminded of the proper language and format for an abstract of the disclosure.
The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.
The abstract of the disclosure is objected to because the abstract recites phrases which can be implied, e.g. “Disclosed is…”. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
The disclosure is objected to because of the following informalities:
See specification paragraphs [0017] and [0053], which recite “classifying unclassified basic markings into the cluster if differences between offset vectors of the unclassified basic markings and an average value of offset vectors in the cluster”; a typographical error is assumed to exist, as the recited “if” clause appears incomplete (no comparison is stated for the recited differences).
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 3 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 3 recites in the body of the claim, “classifying unclassified basic markings into the cluster if differences between offset vectors of the unclassified basic markings and an average value of offset vectors in the cluster”; the recited “if” clause appears incomplete, as no comparison is stated for the recited differences, rendering the scope of the claim unclear.
For the purposes of further treating the application on the merits, the Examiner applies the broadest reasonable interpretation in light of the specification, under which unclassified basic markings are classified into the cluster according to differences between offset vectors of the unclassified basic markings and an average value of offset vectors in the cluster.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-10 are rejected under 35 U.S.C. 103 as being unpatentable over De Haan (US 2016/0171684) in view of Tokunaga et al. (US 2012/0308144), herein Tokunaga.
Regarding claim 1, De Haan discloses an image-based motion detection method, comprising:
acquiring a reference image of a detecting object (see De Haan [0066], where a sequence of image frames acquired over time are obtained), determining several first detecting points in the reference image, extracting basic markings centered on the first detecting points in the reference image (see De Haan [0070] and [0077], where the image frame is segmented into smaller segments and Harris corner detector is used to detect trackable interest points) and classifying all the basic markings into several categories, wherein each category comprises at least one basic marking (see De Haan [0091], where the points are separated into three different types using a density threshold into cluster core points, border points and noise points);
acquiring a detecting image of the detecting object (see De Haan [0066] and [0078], where a sequence of image frames acquired over time are obtained and remaining images are used to track the detected points in the initial frame),
matching the basic markings in the detecting image with the basic markings in the reference image, obtaining an offset vector of each basic marking between the reference image and the detecting image (see De Haan [0078], where a displacement vector between consecutive images is obtained for finding the trajectory of the tracked points).
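For illustration only, the matching-and-offset step mapped above can be sketched as a sum-of-squared-differences patch search; the function name, window radius, and search range below are hypothetical and are not taken from De Haan's actual tracker or from the claims:

```python
import numpy as np

def match_offset(reference, detecting, point, radius=4, search=6):
    """Illustrative sketch: the patch centered on `point` in the reference
    image is compared, by sum of squared differences (SSD), against shifted
    positions in the detecting image; the best-matching shift is returned
    as the offset vector of that basic marking."""
    y, x = point
    template = reference[y - radius:y + radius + 1, x - radius:x + radius + 1]
    best, best_offset = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cy, cx = y + dy, x + dx
            candidate = detecting[cy - radius:cy + radius + 1,
                                  cx - radius:cx + radius + 1]
            ssd = float(((candidate - template) ** 2).sum())
            if ssd < best:
                best, best_offset = ssd, (dy, dx)
    return np.array(best_offset)
```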
De Haan does not explicitly disclose wherein the detecting image and the reference image comprise the same image parameters and the image parameters comprise position, direction, size and resolution;
determining whether a norm of the offset vector of each basic marking is greater than a second threshold, if yes, determining that the basic marking has moved; and if no, determining that the basic marking has not moved;
determining whether the number of the basic markings that have moved in each category is greater than a third threshold, if yes, determining that the category has moved; and if no, determining that the category has not moved; and
determining a whole moving state and a part moving state of the detecting object according to a moving state of each category.
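For illustration only, the claimed thresholding steps (which De Haan does not explicitly disclose) can be sketched as follows; the function name, data structures, and the interpretation of the whole and part moving states as "all categories moved" and "at least one category moved" are assumptions for illustration, not taken from the claims or the cited references:

```python
import numpy as np

def detect_motion(offset_vectors_by_category, second_threshold, third_threshold):
    """Illustrative sketch of the per-marking and per-category tests.

    offset_vectors_by_category: dict mapping category id -> array of shape
    (n_markings, 2) holding offset vectors between the reference image
    and the detecting image."""
    category_moved = {}
    for category, vectors in offset_vectors_by_category.items():
        # A basic marking has moved if the norm of its offset vector
        # is greater than the second threshold.
        norms = np.linalg.norm(vectors, axis=1)
        moved = norms > second_threshold
        # A category has moved if the number of moved markings in it
        # is greater than the third threshold.
        category_moved[category] = int(moved.sum()) > third_threshold
    # Assumed reading: whole moving state when every category moved,
    # part moving state when at least one category moved.
    whole_moved = all(category_moved.values())
    part_moved = any(category_moved.values())
    return category_moved, whole_moved, part_moved
```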
Tokunaga teaches, in a related and pertinent device and method, obtaining the distance between local motion vectors and classifying each local motion vector into a cluster (see Tokunaga Abstract). A reference image and a current image are received (see Tokunaga [0072]); the current image and reference image are caused to have the same resolution and are each divided into macroblocks of m pixels x m pixels (see Tokunaga [0076]-[0077]). Matching blocks are searched by comparing macroblocks of the current image with each macroblock of the reference image, and a vector derived from the relation between the block location of the current image and the block location of the reference image is obtained as the motion vector of the macroblock of the current image, such that local motion vectors are obtained per macroblock (see Tokunaga [0077]). The motion vectors are clustered into the cluster to which the closest vector belongs based on the obtained distance information, and an average value is calculated of the respective local motion vectors belonging to each cluster (see Tokunaga [0086]-[0087]). A global motion vector is determined based on the average values of the respective clusters and the number of elements of the local motion vectors of each cluster used to calculate the average values, and a representative vector of the cluster is output as the global motion vector (see Tokunaga [0089]).
At the time of filing, one of ordinary skill in the art would have found it obvious to apply the teachings of Tokunaga to the teachings of De Haan, such that the image sequences obtained have the same resolution and size, and that the displacement vectors of the tracked points are further clustered and used to determine a representative vector of the cluster as the global motion vector according to the average values of respective clusters and the number of elements of the local motion vectors of each cluster.
This modification is rationalized as an application of a known technique to a known method ready for improvement to yield predictable results.
In this instance, De Haan discloses a base method for skin detection which segments image sequences, detects trackable interest points, tracks the interest points across consecutive images to determine displacement vectors, and further classifies the interest points into three different types.
Tokunaga teaches a known technique of receiving reference and current images with the same resolution and size, dividing them into macroblocks, determining local motion vectors from matching macroblocks, clustering the local motion vectors based on obtained distance information, calculating an average value of the respective local motion vectors belonging to each cluster, and determining a global motion vector based on the average values of the respective clusters and the number of elements of the local motion vectors of each cluster used to calculate the average values.
One of ordinary skill in the art would have recognized that applying Tokunaga’s technique would allow the method of De Haan to also use image sequences having the same resolution and size, with the displacement vectors of the tracked points further clustered and used to determine a representative vector of each cluster as the global motion vector according to the average values of the respective clusters and the number of elements of the local motion vectors of each cluster, predictably leading to an improved skin detection method which further determines and tracks local and global motion information.
While De Haan and Tokunaga do not explicitly disclose determining whether a norm of the offset vector of each basic marking is greater than a second threshold, if yes, determining that the basic marking has moved, and if no, determining that the basic marking has not moved; and determining whether the number of the basic markings that have moved in each category is greater than a third threshold, if yes, determining that the category has moved, and if no, determining that the category has not moved; the combined teachings of the cited prior art, notably De Haan and Tokunaga’s suggested teachings for determining a displacement vector and local motion vector of tracked interest points, implicitly suggest to one of ordinary skill in the art that if a tracked interest point has a displacement vector / local motion vector with a magnitude greater than a value, e.g. 0, then the tracked interest point is understood to have moved.
Similarly, the combined teachings of the cited prior art, notably Tokunaga’s suggested teachings for determining a global motion vector based on the average values of the respective clusters and the number of elements of the local motion vectors of each cluster used to calculate the average values, implicitly suggest to one of ordinary skill in the art that if a cluster has a number of local motion vector elements greater than a value, e.g. 1, then the corresponding cluster is understood to have moved. See MPEP 2144.01.
Regarding claim 2, please see the above rejection of claim 1. De Haan and Tokunaga disclose the image-based motion detection method of claim 1, further comprising:
clustering all basic markings of each category to obtain several clusters (see De Haan [0091], where a DBSCAN clustering method is performed to separate the points into three different types using a density threshold into cluster core points, border points and noise points; see Tokunaga [0086]-[0087], where the motion vectors are clustered into the cluster to which the closest vector belongs based on the obtained distance information and an average value is calculated of the respective local motion vectors belonging to each cluster);
recording one cluster comprising the largest number of the basic markings as Cmax (see Tokunaga [0114], where the representative vector having the average value of the cluster having the largest number of elements of the cluster is output as the global motion vector (GMV));
in response that the proportion of the number of basic markings in Cmax to the number of basic markings in the category of Cmax is not less than a first threshold, calculating an average value X̄ of the offset vectors of all the basic markings in Cmax; and determining X̄ as the offset vector of basic markings in the other clusters of the category of Cmax (see Tokunaga [0089] and [0114], where a representative vector of the cluster is output as the global motion vector based on the average value of the cluster and the number of elements of the cluster).
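For illustration only, the Cmax step mapped above can be sketched as follows; the function name, data structure, and threshold value are hypothetical and are not taken from the claims or from Tokunaga:

```python
import numpy as np

def representative_offset(clusters, first_threshold):
    """Illustrative sketch of the claim 2 step. `clusters` is a list of
    arrays of offset vectors, one array per cluster within one category.
    If the largest cluster (Cmax) holds at least the first-threshold
    proportion of the category's markings, its mean offset vector X-bar
    is returned for use as the offset vector of the other clusters."""
    total = sum(len(c) for c in clusters)
    c_max = max(clusters, key=len)  # cluster with the most basic markings
    if len(c_max) / total >= first_threshold:
        x_bar = c_max.mean(axis=0)  # average offset vector of Cmax
        return x_bar
    return None  # proportion below the first threshold
```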
Regarding claim 3, please see the above rejection of claim 2. De Haan and Tokunaga disclose the image-based motion detection method of claim 2, wherein the operation of clustering all basic markings of each category specifically comprises:
classifying two basic markings into a cluster if a norm of a difference between offset vectors of the two basic markings is less than a fifth threshold (see Tokunaga [0086], where a distance between a local motion vector and a representative vector of each of a predetermined number of clusters is calculated and the motion vectors are clustered into the cluster to which the closest vector belongs based on the obtained distance; suggesting that motion vectors below an implied distance threshold with respect to the closest cluster are classified into that cluster); and
classifying unclassified basic markings into the cluster if differences between offset vectors of the unclassified basic markings and an average value of offset vectors in the cluster (see Tokunaga [0086], where a distance between a local motion vector and a representative vector of each of a predetermined number of clusters is calculated and the motion vectors are clustered into the cluster to which the closest vector belongs based on the obtained distance).
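For illustration only, claim 3 under the broadest reasonable interpretation applied in the 112(b) rejection above can be sketched as follows; the function name, greedy assignment order, and use of the running cluster mean are illustrative assumptions, not taken from the claim or from Tokunaga:

```python
import numpy as np

def cluster_markings(offsets, fifth_threshold):
    """Illustrative sketch: a marking joins an existing cluster when the
    norm of the difference between its offset vector and the cluster's
    mean offset vector is below the fifth threshold; otherwise it seeds
    a new cluster. Returns clusters as lists of indices into `offsets`."""
    clusters = []
    for i, v in enumerate(offsets):
        placed = False
        for cluster in clusters:
            mean = offsets[cluster].mean(axis=0)  # average offset vector
            if np.linalg.norm(v - mean) < fifth_threshold:
                cluster.append(i)
                placed = True
                break
        if not placed:
            clusters.append([i])  # start a new cluster
    return clusters
```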
Regarding claim 4, please see the above rejection of claim 1. De Haan and Tokunaga disclose the image-based motion detection method of claim 1, wherein
the basic markings are image fragments, the operation of extracting image fragments specifically comprises: extracting images within a range of a first preset distance centered on the first detecting points to construct the image fragments (see De Haan [0070] and [0077], where the image frame is segmented into smaller segments and Harris corner detector is used to detect trackable interest points and a preset window is used to detect corner points).
Regarding claim 5, please see the above rejection of claim 4. De Haan and Tokunaga disclose the image-based motion detection method of claim 4, wherein
in response that the number of the first detecting points in the reference image is less than a fourth threshold, expanding detecting points (see De Haan [0076], where points suitable for tracking are located in the initial image of the interval and their trajectories are estimated for the entire interval; see De Haan [0091], where a minimal number of points within a range is used to classify tracked points), and the operation of expanding the detecting points comprises:
determining the fourth threshold (see De Haan [0091], where a minimal number of points within a range is used to classify tracked points);
centered on the first detecting points, determining several second detecting points within a range of a second preset distance (see De Haan [0066] and [0078], where a sequence of image frames acquired over time are obtained and remaining images are used to track the detected points in the initial frame, where a displacement vector between consecutive images is obtained for finding the trajectory of the tracked points);
determining an entropy threshold (see De Haan [0077], where a threshold is applied to the corner response measure);
centered on the second detecting points, extracting images in the range of the first preset distance in the reference image (see De Haan [0077], where the Harris corner detector is used to detect trackable interest points and a preset window is used to detect corner points);
saving images whose entropy is greater than the entropy threshold as expanded image fragments (see De Haan [0077], where points with a corner response greater than a certain value can be considered as corner points and suitable for tracking).
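For illustration only, the expansion steps mapped above for claim 5 can be sketched as follows; the sampling pattern, both preset distances, the entropy measure, and all names are illustrative assumptions and are not taken from the claim or from De Haan:

```python
import numpy as np

def patch_entropy(patch):
    """Shannon entropy (bits) of an 8-bit grayscale patch; illustrative helper."""
    hist, _ = np.histogram(patch, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def expand_detecting_points(image, first_points, entropy_threshold,
                            first_dist=4, second_dist=8):
    """Illustrative sketch of the claim 5 expansion: around each first
    detecting point, candidate second detecting points are sampled within
    the second preset distance; the patch within the first preset distance
    of each candidate is saved as an expanded image fragment when its
    entropy exceeds the entropy threshold."""
    fragments = []
    h, w = image.shape
    for (y, x) in first_points:
        for dy in (-second_dist, 0, second_dist):
            for dx in (-second_dist, 0, second_dist):
                cy, cx = y + dy, x + dx
                y0, y1 = cy - first_dist, cy + first_dist + 1
                x0, x1 = cx - first_dist, cx + first_dist + 1
                if y0 < 0 or x0 < 0 or y1 > h or x1 > w:
                    continue  # candidate patch falls outside the image
                patch = image[y0:y1, x0:x1]
                if patch_entropy(patch) > entropy_threshold:
                    fragments.append(patch)
    return fragments
```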
Regarding claim 6, please see the above rejection of claim 1. De Haan and Tokunaga disclose the image-based motion detection method of claim 1, wherein
the basic markings are image feature points, and the operation of extracting the image feature points specifically comprises: centered on the first detecting points, identifying the image feature points within a range of a third preset distance in the reference image (see De Haan [0070] and [0077], where the image frame is segmented into smaller segments and Harris corner detector is used to detect trackable interest points and a preset window is used to detect corner points).
Regarding claim 7, please see the above rejection of claim 6. De Haan and Tokunaga disclose the image-based motion detection method of claim 6, wherein the image feature points are Harris corner points (see De Haan [0070] and [0077], where the image frame is segmented into smaller segments and Harris corner detector is used to detect trackable interest points).
Regarding claim 8, please see the above rejection of claim 1. De Haan and Tokunaga disclose the image-based motion detection method of claim 1, wherein a density-based clustering algorithm is adopted to classify satisfied basic markings into a category (see De Haan [0091], where a DBSCAN clustering method is performed to separate the points into three different types using a density threshold into cluster core points, border points and noise points).
Regarding claim 9, please see the above rejection of claim 8. De Haan and Tokunaga disclose the image-based motion detection method of claim 8, wherein the clustering algorithm is density-based spatial clustering of applications with noise (DBSCAN) (see De Haan [0091], where a DBSCAN clustering method is performed to separate the points into three different types using a density threshold into cluster core points, border points and noise points).
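De Haan's separation of points into cluster core points, border points, and noise points corresponds to the standard output of DBSCAN. For illustration only, a minimal sketch using scikit-learn follows; the point coordinates and the eps and min_samples parameter values are examples, not values taken from De Haan:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Illustrative 2-D interest-point coordinates: two dense groups and one
# isolated point.
points = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0],
                   [5.0, 5.0], [5.1, 5.1], [5.0, 5.2],
                   [20.0, 20.0]])

# labels_ assigns a cluster id to core/border points and -1 to noise points.
labels = DBSCAN(eps=0.5, min_samples=2).fit(points).labels_
```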
Regarding claim 10, please see the above rejection of claim 1. De Haan and Tokunaga disclose the image-based motion detection method of claim 1, further comprising:
dividing the reference image to obtain different image dividing units and classifying the basic markings in a same image dividing unit into a same category (see De Haan [0070] and [0077], where the image frame is segmented into smaller segments and Harris corner detector is used to detect trackable interest points; see De Haan [0091], where a DBSCAN clustering method is performed to separate the points into three different types using a density threshold into cluster core points, border points and noise points; where points in the same segments with similar density conditions would be classified as a similar type).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIMOTHY WING HO CHOI whose telephone number is (571)270-3814. The examiner can normally be reached 9:00 AM to 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, VINCENT RUDOLPH can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TIMOTHY CHOI/Examiner, Art Unit 2671
/VINCENT RUDOLPH/Supervisory Patent Examiner, Art Unit 2671