Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/02/2025 has been entered.
Response to Amendment
This action is in response to the communication filed on 10/02/2025.
Claims 1 and 9 are currently amended. Claims 1-16 are pending.
Response to Arguments
The issue regarding priority documents, stated under section 5 of the Final Office Action dated 06/06/2025, was not acknowledged by the applicant and still persists. The examiner notes this for the applicant's attention in future responses.
Applicant's arguments filed on 10/02/2025 on pages 7-15, under REMARKS, with respect to 35 U.S.C. 102 and 35 U.S.C. 103 have been fully considered but are not persuasive. Regarding claim 1, applicant states on page 13 that:
[Applicant's remarks reproduced as two greyscale images: media_image1.png and media_image2.png]
The examiner respectfully disagrees. The examiner notes that applicant's arguments center on the assertion that the CHENG reference lacks a clustering algorithm. As mentioned in the interview, the Google-provided definition of an unsupervised clustering algorithm is: "Clustering: Groups data points based on their similarities. For example, algorithms like K-Means or DBSCAN can cluster customer data based on purchasing behavior to enable targeted marketing". This definition applies equally to image analysis, where images are classified/grouped by identifying image-feature trends related to each oral area.
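For illustration only (not part of the record), the quoted definition can be sketched as a minimal K-Means routine that groups 2-D feature vectors by similarity. The data points and the deterministic farthest-point initialization below are hypothetical choices, not taken from either reference.

```python
def k_means(points, k, iters=20):
    """Minimal K-Means: group 2-D feature vectors into k clusters by similarity."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    # Deterministic farthest-point initialization (an illustrative choice).
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(dist2(p, c) for c in centroids)))

    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids, clusters

# Two well-separated groups of hypothetical image-feature vectors.
points = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15), (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]
centroids, clusters = k_means(points, k=2)
```

On this toy data the routine recovers the two obvious groups, mirroring how similar images could be grouped per oral area.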
The examiner now points to prior art reference CHENG (US 2022/0142739 A1), specifically paragraphs [0030-0031]. Paragraph [0030] states: "the first algorithm may be a deep learning algorithm. In the embodiment, the positioning circuit 120 may determine which oral area the target image corresponds to, based on the deep learning algorithm and the information corresponding to each oral area (i.e. the machine learning result corresponding to each oral area) stored in the storage device 110, and generate the first position estimation result". Paragraph [0031] further states: "the positioning circuit 120 may determine which oral area the target image corresponds to, based on the image comparison algorithm and the information corresponding to each oral area (i.e. the information contained in the dental image corresponding to each oral area) stored in the storage device 110, and generate the first position estimation result. Specifically, the positioning circuit 120 may compare the features of the target image with the features of the previously stored dental image corresponding to each oral area in order to find the oral area corresponding to the dental image which is the most similar to the target image, i.e. the first position estimation result is the oral area corresponding to the dental image which is the most similar to the target image". Both cited paragraphs clearly show that the algorithm is a deep learning algorithm that teaches itself based on stored dental images (unsupervised learning) in order to classify captured images based on image-feature similarity (acting as an image-based clustering algorithm).
The computing system does this in order to provide a position estimate among sixteen oral areas (though the invention is not limited to sixteen areas). This estimate acts as a classification of the image into a group of images of the same area based on similar image features identified by the algorithm, which, although not expressly labeled as such, acts and performs as a clustering algorithm.
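As a schematic sketch (illustrative only), the comparison approach described in CHENG's paragraph [0031] amounts to matching a target feature vector against stored per-area reference features and returning the most similar area. The feature values and area labels below are hypothetical.

```python
def most_similar_area(target, stored):
    """Return the oral-area label whose stored feature vector is most similar
    (smallest squared Euclidean distance) to the target's feature vector."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(stored, key=lambda area: dist2(target, stored[area]))

# Hypothetical stored features for three of the sixteen oral areas.
stored = {"area_1": [0.1, 0.9], "area_2": [0.5, 0.5], "area_3": [0.9, 0.1]}
result = most_similar_area([0.55, 0.45], stored)
```

Here the target vector lies nearest to the stored features of area_2, so the image would be classified into that area's group.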
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in Taiwan on 03/18/2022. It is noted, however, that applicant has not filed a certified copy of the TW 111110045 application as required by 37 CFR 1.55.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 6-10, and 14-16 are rejected under 35 U.S.C. § 102(a)(2) as being anticipated by US 2022/0142739 A1 to CHENG et al. (hereinafter "CHENG").
As per claim 1, CHENG discloses an image classifying device (device 100 is an oral positioning device in which the dental/oral image is classified as belonging to one of areas 1-16 in fig 2; figs 1-2 & 5; abstract; paragraph [0023]), comprising: a storage device, storing information corresponding to a plurality of image classes (device 100 comprises a storage device 110 to store the files, information, and data required for the oral positioning procedure; fig 1; paragraphs [0023-0025]), wherein images in each image class may be divided into different groups based on a clustering algorithm (each captured image is sorted into one of sixteen image classes, each class representing an oral area being imaged, and the captured images are sorted into each class based on image-feature similarity determined by an image processing algorithm acting as an unsupervised clustering algorithm by classifying the captured images based on learned features from stored images corresponding to each oral area; paragraphs [0025], [0029], [0030-0031]), and wherein the information indicates the groups comprised in each image class (the information the algorithms use to perform similarity classification includes comparing the features of the target image with the features of the reference image to calculate the distance shift pixels of the image features, where the features may include the distance from oral area to oral area along the tooth line; paragraphs [0030-0031], [0035-0036]); a calculation circuit, coupled to the storage device, obtaining a target image from an image extracting device and obtaining a feature vector of the target image (a positioning circuit 120 (calculation circuit) and a calculation circuit 130 (classification circuit) are coupled to storage device 110, and the system is adapted to capture an image via oral-image extracting device 200, which is moved to various positions of the oral cavity at various times to extract the images of the oral areas 1-16; in moving, the extracting device 200 comprises a motion vector having position and time features which are saved to the storage device 110 in relation to the corresponding oral area 1-16; fig 1; paragraphs [0023], [0026]), wherein the calculation circuit obtains a first estimation result corresponding to the target image based on the information corresponding to the plurality of image classes and the feature vector (positioning circuit 120 determines which oral area the target image corresponds to, based on the deep learning algorithm and the information corresponding to each oral area, including the machine learning result corresponding to each oral area stored in the storage device 110, and generates the first position estimation result; fig 5; paragraphs [0030], [0042]), and wherein the calculation circuit obtains a second estimation result corresponding to the target image based on a reference image, wherein the reference image corresponds to one of the plurality of image classes (the positioning circuit 120 obtains a second position estimation result according to the information corresponding to the oral areas, a second algorithm, and a reference image position of a reference image; fig 5; paragraphs [0033], [0042], [0054]); and a classifying circuit, coupled to the calculation circuit (the calculation circuit 130 acts as the classifying circuit to sort the target images into the corresponding area; paragraphs [0041-0043]), wherein the classifying circuit adds the target image into one of the plurality of image classes based on the first estimation result and the second estimation result (the calculation circuit 130 then generates a third position estimation result corresponding to the target image according to the first position estimation result and the second position estimation result, so as to determine the oral area corresponding to the target image based on the third estimation result; paragraphs [0041-0043]).
As per claim 2, CHENG discloses the image classifying device of claim 1, wherein each image class comprises a plurality of groups of images (each area 1-16 of the oral areas corresponds to a group of oral images of that area; figs 2-3; paragraph [0034]).
As per claim 6, CHENG discloses the image classifying device of claim 1, wherein the classifying circuit multiplies the first estimation result by the second estimation result to obtain a third estimation result (the calculation circuit 130 multiplies the first position estimation result by the second position estimation result to generate the third position estimation result; paragraph [0043]), and adds the target image into one of the plurality of image classes based on the third estimation result (the calculation circuit 130 may multiply the first position estimation result by the second position estimation result to generate the third position estimation result; for example, in the third position estimation result, the probability of the target image corresponding to the oral area 2 is 24% (40%*60%) and the probability of the target image corresponding to the oral area 3 is 12% (30%*40%), so the user may determine that the target image corresponds to oral area 2, i.e., the oral-image extracting device 200 moved from the oral area 1 to the oral area 2 between the previous time point and the current time point according to the third position estimation result; paragraph [0043]).
As per claim 7, CHENG discloses the image classifying device of claim 1, wherein the classifying circuit multiplies the first estimation result by a first weighted value to generate a first result and multiplies the second estimation result by a second weighted value to generate a second result, and the classifying circuit adds the first result to the second result to generate a third estimation result and adds the target image into one of the plurality of image classes based on the third estimation result (the calculation circuit 130 may multiply the first position estimation result by a first weight to generate a first result, and multiply the second position estimation result by a second weight to generate a second result; then, the calculation circuit 130 may add the first result to the second result in order to obtain the third estimation result, and uses that estimation result to determine oral area number 1-16; paragraph [0044]).
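For illustration only, the two combination schemes described in CHENG's paragraphs [0043]-[0044] can be sketched as follows, using the example probabilities from the cited passages (40%/30% for the first estimate, 60%/40% for the second). The dictionaries and area labels are illustrative, not from the reference.

```python
def multiply_estimates(first, second):
    """Third estimate per CHENG [0043]: element-wise product of the two estimates."""
    return {area: first[area] * second[area] for area in first}

def weighted_estimates(first, second, w1, w2):
    """Third estimate per CHENG [0044]: weighted sum of the two estimates."""
    return {area: w1 * first[area] + w2 * second[area] for area in first}

first = {"area_2": 0.40, "area_3": 0.30}   # first position estimation result
second = {"area_2": 0.60, "area_3": 0.40}  # second position estimation result

product = multiply_estimates(first, second)  # area_2: 0.40*0.60 = 0.24; area_3: 0.30*0.40 = 0.12
best = max(product, key=product.get)         # the target image is classified into area_2
```

This reproduces the cited arithmetic: 24% for oral area 2 versus 12% for oral area 3, so the image is added to the class for area 2.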
As per claim 8, CHENG discloses the image classifying device of claim 1, wherein after the classifying circuit adds the target image into one of the plurality of image classes, the classifying circuit updates the information of the image class which the target image is added into (in the third position estimation result, the probability of the target image corresponding to the oral area 2 is 24% (40%*60%) and the probability of the target image corresponding to the oral area 3 is 12% (30%*40%); as such, the user may determine that the target image corresponds to oral area 2, i.e., the oral-image extracting device 200 moved from the oral area 1 to the oral area 2 between the previous time point and the current time point according to the third position estimation result, and this information corresponding to the target image is updated and stored in storage device 110 of the computing system; paragraphs [0024], [0043]).
As per claim 9, CHENG discloses an image classifying method (device 100 is an oral positioning device comprising a corresponding method for identifying where the dental/oral image is classified as belonging to one of areas 1-16 in fig 2; figs 1-2 & 5; abstract; paragraph [0023]), applied to an image classifying device, comprising: obtaining a target image from an image extracting device (via oral area positioning device 100 capturing a target image using image extracting device 200 connected to device 100; paragraph [0026]); obtaining, by a calculation circuit of the image classifying device, a feature vector of the target image (a positioning circuit 120 (calculation circuit) and a calculation circuit 130 (classification circuit) are coupled to storage device 110, and the system is adapted to capture images via the oral-image extracting device 200, which is moved to various positions of the oral cavity at various times to extract the images of the oral areas 1-16; in moving, the extracting device 200 comprises a motion vector having position and time features which are saved to the storage device 110 in relation to the corresponding oral area 1-16; fig 1; paragraphs [0023], [0026]), wherein images in each image class may be divided into different groups based on a clustering algorithm (each captured image is sorted into one of sixteen image classes, each class representing an oral area being imaged, and the captured images are sorted into each class based on image-feature similarity determined by an image processing algorithm acting as an unsupervised clustering algorithm by classifying the captured images based on learned features from stored images corresponding to each oral area; paragraphs [0025], [0029], [0030-0031]), and wherein the information indicates the groups comprised in each image class (the information the algorithms use to perform similarity classification includes comparing the features of the target image with the features of the reference image to calculate the distance shift pixels of the image features, where the features may include the distance from oral area to oral area along the tooth line; paragraphs [0030-0031], [0035-0036]); obtaining, by the calculation circuit, a first estimation result corresponding to the target image based on the information corresponding to the plurality of image classes and the feature vector (positioning circuit 120 determines which oral area the target image corresponds to, based on the deep learning algorithm and the information corresponding to each oral area, including the machine learning result corresponding to each oral area stored in the storage device 110, and generates the first position estimation result; fig 5; paragraphs [0030], [0042]); obtaining, by the calculation circuit, a second estimation result corresponding to the target image based on a reference image, wherein the reference image corresponds to one of the plurality of image classes (the positioning circuit 120 obtains a second position estimation result according to the information corresponding to the oral areas, a second algorithm, and a reference image position of a reference image; fig 5; paragraphs [0033], [0042], [0054]); and adding, by a classifying circuit of the image classifying device (the calculation circuit 130 acts as the classifying circuit to sort the target images into the corresponding area; paragraphs [0041-0043]), the target image into one of the plurality of image classes based on the first estimation result and the second estimation result (the calculation circuit 130 then generates a third position estimation result corresponding to the target image according to the first position estimation result and the second position estimation result, so as to determine the oral area corresponding to the target image based on the third estimation result; paragraphs [0041-0043]).
As per claim 10, CHENG discloses the image classifying method of claim 9, wherein each image class comprises a plurality of groups of images (each area 1-16 of the oral areas corresponds to a group of oral images of that area; figs 2-3; paragraph [0034]).
As per claim 14, CHENG discloses the image classifying method of claim 9, further comprising: multiplying, by the classifying circuit, the first estimation result by the second estimation result to obtain a third estimation result (the calculation circuit 130 multiplies the first position estimation result by the second position estimation result to generate the third position estimation result; paragraph [0043]); and adding, by the classifying circuit, the target image into one of the plurality of image classes based on the third estimation result (for example, in the third position estimation result, the probability of the target image corresponding to the oral area 2 is 24% (40%*60%) and the probability of the target image corresponding to the oral area 3 is 12% (30%*40%), so the user may determine that the target image corresponds to oral area 2, i.e., the oral-image extracting device 200 moved from the oral area 1 to the oral area 2 between the previous time point and the current time point according to the third position estimation result; paragraph [0043]).
As per claim 15, CHENG discloses the image classifying method of claim 9, further comprising: multiplying, by the classifying circuit, the first estimation result by a first weighted value to generate a first result (the calculation circuit 130 may multiply the first position estimation result by a first weight to generate a first result; paragraph [0044]); multiplying, by the classifying circuit, the second estimation result by a second weighted value to generate a second result (the calculation circuit 130 may multiply the second position estimation result by a second weight to generate a second result; paragraph [0044]); adding, by the classifying circuit, the first result to the second result to generate a third estimation result (the calculation circuit 130 may add the first result to the second result in order to obtain the third estimation result; paragraph [0044]); and adding, by the classifying circuit, the target image into one of the plurality of image classes based on the third estimation result (the calculation circuit 130 uses that third estimation result to determine oral area number 1-16 and adds the target image to the oral area group based on the third result, as described in the example in paragraph [0044]; paragraph [0044]).
As per claim 16, CHENG discloses the image classifying method of claim 9, further comprising: after the classifying circuit adds the target image into one of the plurality of image classes, updating, by the classifying circuit, the information of the image class which the target image was added into (in the third position estimation result, the probability of the target image corresponding to the oral area 2 is 24% (40%*60%) and the probability of the target image corresponding to the oral area 3 is 12% (30%*40%); as such, the user may determine that the target image corresponds to oral area 2, i.e., the oral-image extracting device 200 moved from the oral area 1 to the oral area 2 between the previous time point and the current time point according to the third position estimation result, and this information corresponding to the target image is updated and stored in storage device 110 of the computing system; paragraphs [0024], [0043]).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 3-5 and 11-13 are rejected under 35 U.S.C. § 103 as being obvious over US 2022/0142739 A1 to CHENG et al. (hereinafter "CHENG") in view of US 2020/0193552 A1 to TURKELSON et al. (hereinafter "TURKELSON").
As per claim 3, CHENG discloses the image classifying device of claim 2. CHENG fails to disclose wherein the calculation circuit calculates shortest distances between the feature vector and each image class based on the feature vector and each cluster centroid of each group of each image class.
TURKELSON discloses wherein the calculation circuit calculates shortest distances between the feature vector and each image class based on the feature vector and each cluster centroid of each group of each image class (modify an existing feature vector of the target in this example, representing the drill with a feature vector corresponding to a centroid of a cluster corresponding to the drill; paragraph [0028]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHENG to include a cluster centroid corresponding to each group designation, as taught by TURKELSON. The suggestion/motivation for doing so would have been to provide the ability to characterize an object based on both feature vectors, which are expected to be relatively close in feature space (e.g., as measured by cosine distance, Minkowski distance, Euclidean distance, Mahalanobis distance, Manhattan distance, etc.) relative to the feature vectors of other objects; based on the proximity between the original feature vector and the submitted feature vector being less than a threshold distance, or more than a threshold distance from other feature vectors; or based on a cluster being determined with techniques like DBSCAN, some embodiments may determine that the submitted photo depicts the same model drill and, in some cases, that it depicts the drill at a novel angle relative to previously obtained images. The data processing techniques suggested by TURKELSON at paragraph [0028] for drill object identification would readily translate to the oral area determination process described by CHENG. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine TURKELSON with CHENG to obtain the invention as specified in claim 3.
As per claim 4, CHENG in view of TURKELSON discloses the image classifying device of claim 3. CHENG fails to disclose wherein when a minimum value of the shortest distances between the feature vector and each image class is above a threshold, the calculation circuit abandons the target image.
TURKELSON discloses wherein when a minimum value of the shortest distances between the feature vector and each image class is above a threshold, the calculation circuit abandons the target image (proximity (distance) between the original feature vector and the submitted feature vector being less than a threshold distance or more than a threshold distance from other feature vectors, or based on a cluster being determined with techniques like DBSCAN; and, based on being more than the threshold, add the image, the feature vector, or both the image and the feature vector, to a training data set with a label identifying the drill to be used in a subsequent training operation by which a computer vision object recognition model is updated or otherwise formed; paragraph [0028]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHENG to abandon the target image when a value exceeds the set threshold, as taught by TURKELSON. The suggestion/motivation for doing so would have been to provide the ability to characterize an object based on both feature vectors, which are expected to be relatively close in feature space (e.g., as measured by cosine distance, Minkowski distance, Euclidean distance, Mahalanobis distance, Manhattan distance, etc.) relative to the feature vectors of other objects; based on the proximity between the original feature vector and the submitted feature vector being less than a threshold distance, or more than a threshold distance from other feature vectors; or based on a cluster being determined with techniques like DBSCAN, some embodiments may determine that the submitted photo depicts the same model drill and, in some cases, that it depicts the drill at a novel angle relative to previously obtained images. The data processing techniques suggested by TURKELSON at paragraph [0028] for drill object identification would readily translate to the oral area determination process described by CHENG. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine TURKELSON with CHENG to obtain the invention as specified in claim 4.
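For illustration only, the centroid-distance logic discussed for claims 3-4 can be sketched as follows: compute the shortest distance from the target's feature vector to each class's group centroids, then abandon the image when even the nearest class lies beyond a threshold. All centroid values, labels, and the threshold are hypothetical, not taken from either reference.

```python
import math

def classify_or_abandon(feature, class_centroids, threshold):
    """Return the nearest image class, or None (abandon the image) if the minimum
    of the per-class shortest centroid distances exceeds the threshold."""
    # Shortest Euclidean distance from the feature vector to each class's centroids.
    shortest = {cls: min(math.dist(feature, c) for c in centroids)
                for cls, centroids in class_centroids.items()}
    best = min(shortest, key=shortest.get)
    return None if shortest[best] > threshold else best

classes = {
    "area_1": [(0.0, 0.0), (0.1, 0.1)],  # centroids of area_1's two groups
    "area_2": [(1.0, 1.0)],              # centroid of area_2's single group
}
assigned = classify_or_abandon((0.05, 0.05), classes, threshold=0.5)  # near area_1
rejected = classify_or_abandon((5.0, 5.0), classes, threshold=0.5)   # too far: abandoned
```

The first call falls within the threshold of area_1's nearest centroid and is classified; the second exceeds the threshold for every class and is abandoned.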
As per claim 5, CHENG in view of TURKELSON discloses the image classifying device of claim 3, and the calculation circuit calculates the first estimation result based on the shortest distances between the feature vector and each image class and a probability distribution algorithm (the positioning circuit 120 uses the second algorithm, the HMM algorithm, to obtain the second position estimation result corresponding to the target image according to the angle information between each oral area; the positioning circuit 120 may substitute the difference of the rotation angle variance and the angle information into a probability density function to generate a distribution diagram, with an exponential probability distribution function provided after paragraph [0041]; paragraph [0041]). CHENG fails to disclose wherein when a minimum value of the shortest distances between the feature vector and each image class is not above a threshold, the calculation circuit calculates the first estimation result based on the shortest distances between the feature vector and each image class and a probability distribution algorithm.
TURKELSON discloses wherein when a minimum value of the shortest distances between the feature vector and each image class is not above a threshold (proximity (distance) between the original feature vector and the submitted feature vector being less than a threshold distance or more than a threshold distance from other feature vectors, or based on a cluster being determined with techniques like DBSCAN; paragraph [0028]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHENG to perform the calculation when the threshold is not exceeded, as taught by TURKELSON. The suggestion/motivation for doing so would have been to provide the ability to characterize an object based on both feature vectors, which are expected to be relatively close in feature space (e.g., as measured by cosine distance, Minkowski distance, Euclidean distance, Mahalanobis distance, Manhattan distance, etc.) relative to the feature vectors of other objects; based on the proximity between the original feature vector and the submitted feature vector being less than a threshold distance, or more than a threshold distance from other feature vectors; or based on a cluster being determined with techniques like DBSCAN, some embodiments may determine that the submitted photo depicts the same model drill and, in some cases, that it depicts the drill at a novel angle relative to previously obtained images. The data processing techniques suggested by TURKELSON at paragraph [0028] for drill object identification would readily translate to the oral area determination process described by CHENG. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine TURKELSON with CHENG to obtain the invention as specified in claim 5.
As per claim 11, CHENG discloses the image classifying method of claim 10. CHENG fails to disclose further comprising: calculating, by the calculation circuit, shortest distances between the feature vector and each image class based on the feature vector and each cluster centroid of each group of each image class.
TURKELSON discloses further comprising: calculating, by the calculation circuit, shortest distances between the feature vector and each image class based on the feature vector and each cluster centroid of each group of each image class (modify an existing feature vector of the target in this example, representing the drill with a feature vector corresponding to a centroid of a cluster corresponding to the drill; paragraph [0028]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHENG to include a cluster centroid corresponding to each group designation, as taught by TURKELSON. The suggestion/motivation for doing so would have been to provide the ability to characterize an object based on both feature vectors, which are expected to be relatively close in feature space (e.g., as measured by cosine distance, Minkowski distance, Euclidean distance, Mahalanobis distance, Manhattan distance, etc.) relative to the feature vectors of other objects; based on the proximity between the original feature vector and the submitted feature vector being less than a threshold distance, or more than a threshold distance from other feature vectors; or based on a cluster being determined with techniques like DBSCAN, some embodiments may determine that the submitted photo depicts the same model drill and, in some cases, that it depicts the drill at a novel angle relative to previously obtained images. The data processing techniques suggested by TURKELSON at paragraph [0028] for drill object identification would readily translate to the oral area determination process described by CHENG. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine TURKELSON with CHENG to obtain the invention as specified in claim 11.
As per claim 12, CHENG in view of TURKELSON discloses the image classifying method of claim 11. CHENG fails to disclose further comprising: when a minimum value of the shortest distances between the feature vector and each image class is above a threshold, abandoning, by the calculation circuit, the target image.
TURKELSON discloses further comprising: when a minimum value of the shortest distances between the feature vector and each image class is above a threshold, abandoning, by the calculation circuit, the target image (proximity (distance) between the original feature vector and the submitted feature vector being less than a threshold distance or more than a threshold distance from other feature vectors, or based on a cluster being determined with techniques like DBSCAN, and based on being more than the threshold, add the image, the feature vector, or both the image and the feature vector, to a training data set with a label identifying the drill to be used in a subsequent training operation by which a computer vision object recognition model is updated or otherwise formed; paragraph [0028]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHENG so that the target image is abandoned when a value exceeds the set threshold, as taught by TURKELSON. The suggestion/motivation for doing so would have been to provide the ability to characterize the object based on both feature vectors, which are expected to be relatively close in feature space (e.g., as measured by cosine distance, Minkowski distance, Euclidean distance, Mahalanobis distance, Manhattan distance, etc.) relative to feature vectors of other objects. Based on a proximity between the original feature vector and the submitted feature vector being less than a threshold distance, or more than a threshold distance from other feature vectors, or based on a cluster being determined with techniques like DBSCAN, some embodiments may determine that the submitted photo depicts the same model drill and, in some cases, that it depicts the drill at a novel angle relative to previously obtained images. The data processing techniques suggested by TURKELSON at paragraph [0028] for drill object identification would readily translate to the oral area determination process described by CHENG. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine TURKELSON with CHENG to obtain the invention as specified in claim 12.
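For clarity of record, the threshold-based rejection step addressed in claim 12 can be illustrated as follows. This is a sketch under examiner-supplied assumptions (hypothetical distance values and class names); neither reference discloses this exact code.

```python
def classify_or_abandon(distances, threshold):
    """Given per-class shortest distances, return the best-matching
    image class, or None to signal that the target image should be
    abandoned because even the closest class exceeds the threshold."""
    best_class = min(distances, key=distances.get)
    if distances[best_class] > threshold:
        return None  # minimum of the shortest distances is above threshold
    return best_class

# Hypothetical shortest distances from the feature vector to each image class.
distances = {"upper_left": 0.7, "lower_right": 6.4}
print(classify_or_abandon(distances, threshold=2.0))  # -> upper_left
print(classify_or_abandon(distances, threshold=0.5))  # -> None (abandon)
```

This mirrors TURKELSON's proximity test at paragraph [0028]: a feature vector more than a threshold distance from all known clusters is treated as not matching any existing class.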
As per claim 13, CHENG in view of TURKELSON discloses the image classifying method of claim 11, and the calculation circuit calculates the first estimation result based on the shortest distances between the feature vector and each image class and a probability distribution algorithm (the positioning circuit 120 uses the second algorithm, the HMM algorithm, to obtain the second position estimation result corresponding to the target image according to the angle information between each oral area; the positioning circuit 120 may substitute the difference of the rotation angle variance and the angle information into a probability density function to generate a distribution diagram, and an exponential probability distribution function is provided after paragraph [0041]; paragraph [0041]). CHENG fails to disclose further comprising: when a minimum value of the shortest distances between the feature vector and each image class is not above a threshold, calculating, by the calculation circuit, the first estimation result based on the shortest distances between the feature vector and each image class and a probability distribution algorithm.
TURKELSON discloses further comprising: when a minimum value of the shortest distances between the feature vector and each image class is not above a threshold (proximity (distance) between the original feature vector and the submitted feature vector being less than a threshold distance or more than a threshold distance from other feature vectors, or based on a cluster being determined with techniques like DBSCAN; paragraph [0028]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHENG so that the calculation is performed when the threshold is not exceeded, as taught by TURKELSON. The suggestion/motivation for doing so would have been to provide the ability to characterize the object based on both feature vectors, which are expected to be relatively close in feature space (e.g., as measured by cosine distance, Minkowski distance, Euclidean distance, Mahalanobis distance, Manhattan distance, etc.) relative to feature vectors of other objects. Based on a proximity between the original feature vector and the submitted feature vector being less than a threshold distance, or more than a threshold distance from other feature vectors, or based on a cluster being determined with techniques like DBSCAN, some embodiments may determine that the submitted photo depicts the same model drill and, in some cases, that it depicts the drill at a novel angle relative to previously obtained images. The data processing techniques suggested by TURKELSON at paragraph [0028] for drill object identification would readily translate to the oral area determination process described by CHENG. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine TURKELSON with CHENG to obtain the invention as specified in claim 13.
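For clarity of record, one conventional way to map shortest distances to a probability distribution, as addressed in claim 13, is sketched below. This is an examiner-supplied illustration (a softmax over negated distances); it is one plausible reading of a "probability distribution algorithm" and is not the specific HMM/probability density function of CHENG at paragraph [0041].

```python
import math

def distance_probabilities(distances):
    """Convert per-class shortest distances into a probability
    distribution over image classes: a smaller distance yields a
    higher probability (softmax over negated distances)."""
    weights = {c: math.exp(-d) for c, d in distances.items()}
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}

# Hypothetical shortest distances; the resulting probabilities sum to 1
# and the nearest class receives the largest probability.
probs = distance_probabilities({"upper_left": 0.7, "lower_right": 6.4})
print(probs)
```

The point of the mapping above is only that distance-based class scores are routinely normalized into a probability estimate, consistent with combining TURKELSON's threshold test with CHENG's probability-distribution-based estimation.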
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVIN JACOB DHOOGE whose telephone number is (571) 270-0999. The examiner can normally be reached 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached on (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Devin Dhooge/
USPTO Patent Examiner
Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677