Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments/Amendments
The amendment, filed 11/11/2025 in response to the Non-Final Office Action mailed 08/12/2025, has been entered.
Claims 1, 4-6, and 9-10 are currently pending in U.S. Patent Application No. 18/454,076.
Applicant’s remarks filed 11/11/2025 have been fully considered and are responded to below.
The previous rejections under 35 U.S.C. 101 and 35 U.S.C. 112(b) are withdrawn in view of the Applicant’s remarks and amendments.
Regarding the prior art rejections under 35 U.S.C. 102 and 35 U.S.C. 103, the Applicant’s remarks have been fully considered but are moot because the new grounds of rejection addressing the amended limitation no longer rely on the combination of references presented in the Non-Final Rejection. The change in scope necessitated by the Applicant’s amendments led to an updated search, which revealed new art.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Fathi et al. (US 2020/0320327; hereinafter “Fathi”) in view of Thakkar (“Dominant colors in an image using k-means clustering”, https://medium.com/buzzrobot/dominant-colors-in-an-image-using-k-means-clustering-3c7af4622036; hereinafter “Thakkar”), and further in view of Troy (US 2022/0343595; hereinafter “Troy”).
Regarding Claim 1, Fathi discloses an object recognition method of a three-dimensional (3D) space, comprising (see Fig. 2A):
generating a three-dimensional (3D) space by a plurality of sensing points generated by scanning a space, wherein the three-dimensional space is in a form of a three-dimensional model ([0148], Fathi discloses obtaining 3D information in the form of a point cloud. The Examiner notes [0064-0069] where point cloud information is obtained from laser scan information (i.e., scanning a space). Furthermore, [0034] discloses that the 3D information can be provided from sources such as 3D models, CAD drawings, 3D vector models, etc., which are all sources which produce a 3D space.);
grouping the sensing points in the three-dimensional space according to position information in the 3D space and color information in a color space of the sensing points, to allocate the sensing points in the 3D space to a plurality of areas ([0075-0077], [0147], Fathi discloses a process of clustering 3D information based on Euclidean cluster extractions, which involves comparing distances with a threshold. The Examiner notes [0073] where Fathi discloses obtaining color information as a part of the 3D information, wherein Fathi specifically notes that object information (such as color) can be used to facilitate 3D clustering, specifically “coloration differences can facilitate segmentation of the plurality of 2D images and clustering of 3D information”.),
wherein the position information of each of the sensing points is determined by an echo generated by a scanning signal reflected from an object ([0068], Fathi discloses obtaining point clouds (i.e., sensing points) representing an object, wherein 3D information regarding the point cloud is obtained using time-of-flight imaging devices, wherein the time-of-flight imaging device computes distance (i.e., position information) based on a signal reflected off of an object.),
a grouping result is that two sensing points, whose position information satisfies that a position distance in the 3D space between the two sensing points is less than a first distance threshold ([0075-0077], [0147], Fathi discloses clustering 3D information using techniques such as Euclidean cluster extraction, which involves comparing a distance with a threshold and associating points to a cluster when the distance is less than the threshold.), are allocated to a same area of the 3D space ([0150-0151], Fathi discloses clustering the 3D information and associating the cluster with an object (i.e., 3D shape) in 3D space.);
capturing a plurality of two-dimensional (2D) images of each of the areas ([0049-0050], [0145], Fathi discloses obtaining a plurality of 2D images from an imaging device.):
recognizing the 2D images of each of the areas (Fig. 1, [0145], Fathi teaches selecting (i.e., recognizing) objects of interest from the image.); and
determining at least one object in the 3D space according to a recognized result of the 2D images of the areas ([0146], Fig. 1, Fathi teaches processing 2D and 3D information such that there is an established relationship between the 2D information (i.e., the selected object identified from the 2D image) and the 3D information (i.e., relating the 2D object to a 3D point cloud representing the object in the scene.).).
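As context for the mapping above, the distance-threshold grouping that the cited Euclidean cluster extraction passages describe can be illustrated with a brief sketch (a single-linkage flood fill with hypothetical function and variable names; an illustration of the general technique, not Fathi’s actual implementation):

```python
import math

def group_sensing_points(points, first_distance_threshold):
    """Allocate 3D sensing points to areas: two points fall in the same
    area when their Euclidean distance is below the threshold (grown
    transitively, as in Euclidean cluster extraction)."""
    labels = [-1] * len(points)  # -1 = not yet allocated to an area
    area = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        stack = [i]
        labels[i] = area
        while stack:  # flood-fill all points reachable under the threshold
            a = stack.pop()
            for b in range(len(points)):
                if labels[b] == -1 and math.dist(points[a], points[b]) < first_distance_threshold:
                    labels[b] = area
                    stack.append(b)
        area += 1
    return labels
```

For example, `group_sensing_points([(0, 0, 0), (0.5, 0, 0), (5, 5, 5)], 1.0)` allocates the first two points to one area and the third to another.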
Fathi does not disclose the color information is intensity in the color space, a grouping result is that two sensing points whose color information is less than a distance threshold in the color space, a color difference in the color space between the two sensing points is less than a second distance threshold, positioning a virtual camera at a plurality of viewing positions in the 3D space, rotating the virtual camera around the reference axis as an axle center and capturing, through the virtual camera, the 2D images corresponding to a plurality of capturing directions towards the one of the areas.
Thakkar discloses the color information is intensity in the color space, a grouping result is that two sensing points whose color information is less than a distance threshold in the color space, a color difference in the color space between the two sensing points is less than a second distance threshold (Pages 4-9, Thakkar discloses applying a k-means algorithm to cluster colors within an image. The colors are in RGB color space (which is analogous to the claimed “color information is intensity”, as the RGB values give the intensity of each color channel (red, green, or blue) as a value between 0 and 255), and the clustering (i.e., grouping result) is performed by grouping points based on a distance. The Examiner notes that the color difference between the center point of a cluster (e.g., Point A) and any point in that cluster (e.g., Point B) is less than a distance threshold, such that if Point B exceeded said threshold, Point B would then be assigned to a different cluster.),
[media_image1.png (1098 × 426, greyscale): description of RGB values providing an intensity of color, obtained from https://www.w3schools.com/html/html_colors_rgb.asp#:~:text=Each%20parameter%20(red%2C%20green%2C,blue)%20are%20set%20to%200]
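The k-means clustering of RGB intensities that Thakkar describes can be sketched as follows (a simplified, deterministic toy implementation with illustrative names; Thakkar’s article uses a library k-means rather than this hand-rolled version):

```python
def kmeans_colors(pixels, k, iters=20):
    """Cluster RGB triples (channel intensities 0-255) into k dominant
    colors: assign each pixel to its nearest centroid in RGB space,
    then recompute each centroid as the mean of its cluster."""
    # Deterministic init: spread initial centroids across the pixel list.
    centroids = [pixels[i * (len(pixels) - 1) // max(k - 1, 1)] for i in range(k)]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in pixels:
            nearest = min(range(k), key=lambda j: sum(
                (p[d] - centroids[j][d]) ** 2 for d in range(3)))
            buckets[nearest].append(p)
        centroids = [
            tuple(sum(p[d] for p in b) / len(b) for d in range(3)) if b else centroids[j]
            for j, b in enumerate(buckets)
        ]
    return centroids
```

A pixel that ends up farther from its current centroid than from another centroid is reassigned on the next iteration, which is the threshold-like reassignment behavior noted in the mapping above.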
Fathi and Thakkar are considered to be analogous to the claimed invention as they are in the same field of processing and clustering information from an image. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Fathi such that the clustering process performed by Fathi also includes clustering based on color information as disclosed by Thakkar. The motivation for this combination is the ability to use different methods to cluster and associate points.
Fathi in view of Thakkar does not teach positioning a virtual camera at a plurality of viewing positions in the 3D space, rotating the virtual camera around the reference axis as an axle center and capturing, through the virtual camera, the 2D images corresponding to a plurality of capturing directions towards the one of the areas.
Troy discloses positioning a virtual camera at a plurality of viewing positions in the 3D space ([0028-0030], Troy discloses positioning a virtual camera at multiple different 3D virtual positions within a 3D virtual environment.), rotating the virtual camera around the reference axis as an axle center and capturing, through the virtual camera, the 2D images corresponding to a plurality of capturing directions towards the one of the areas (Fig. 5, [0038-0039], Troy discloses positioning a virtual camera, and then capturing a plurality of images as the camera is rotated about an axis (see axis 508 in Fig. 5).).
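The geometry of this mapping — capturing 2D images in multiple directions while rotating about a reference axis — can be sketched as evenly spaced view directions about a vertical (z) axis (an illustrative simplification with hypothetical names, not Troy’s disclosed implementation):

```python
import math

def capture_directions(n_views):
    """Yaw a virtual camera about a vertical reference (z) axis and
    return the unit view direction for each of n_views evenly spaced
    capturing directions in the horizontal plane."""
    directions = []
    for i in range(n_views):
        theta = 2 * math.pi * i / n_views  # rotation angle about the axis
        directions.append((math.cos(theta), math.sin(theta), 0.0))
    return directions
```

With `n_views=4` this yields directions along +x, +y, -x, and -y; a renderer placed at a viewing position would capture one 2D image per direction.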
Fathi, Thakkar, and Troy are considered to be analogous to the claimed invention as they are in the same field of processing information within a three-dimensional space. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Fathi in view of Thakkar such that the 2D images obtained by Fathi in view of Thakkar are specifically obtained by rotating a virtual camera about an axis, as disclosed by Troy. The motivation for this combination is the ability to manipulate a virtual camera in a virtual 3D space, which gives the ability to image the space from different angles.
Claim 6 is the apparatus claim corresponding to claim 1, and is similarly rejected (see [0160], Fig. 3, Fathi).
Claims 4 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Fathi in view of Thakkar and Troy, and further in view of Min et al. (US 2023/0394686; hereinafter “Min”).
Regarding Claim 4, Fathi in view of Thakkar and Troy teaches the object recognition method of the 3D space according to claim 1, wherein determining the at least one object in the 3D space according to the recognized result of the 2D images of the areas comprises ([0146], Fig. 1, Fathi teaches processing 2D and 3D information such that there is an established relationship between the 2D information and the 3D information. The Examiner interprets the presented limitation as determining (i.e., detecting, identifying, etc.) an object in an image obtained from a specific capturing direction. Fathi teaches associating 2D information from an image with 3D information from a 3D scene, and is not limited to a particular capturing direction in order to identify an object.):
Fathi in view of Thakkar and Troy does not explicitly teach determining the at least one object located in a first area according to the recognized result of the 2D images in the capturing directions of the first area in the areas.
Min discloses determining the at least one object located in a first area according to the recognized result of the 2D images in the capturing directions of the first area in the areas (Figs. 3-4, [0107-0112], Min discloses detecting an object in a specific field-of-view (i.e., capturing direction).).
Fathi, Thakkar, Troy, and Min are considered to be analogous to the claimed invention as they are in the same field of obtaining a plurality of images from a 3D scene. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Fathi in view of Thakkar and Troy such that it incorporates Min’s logic of identifying an object in the same capturing direction as the 2D image. The motivation for this combination is the ability to specifically determine a field-of-view in which an object can be identified.
Claim 9 is the apparatus claim corresponding to claim 4, and is similarly rejected (see [0160], Fig. 3, Fathi).
Claims 5 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Fathi in view of Thakkar, Troy, and Min, and further in view of Datta et al. (US 2015/0379729; hereinafter “Datta”).
Regarding Claim 5, Fathi in view of Thakkar, Troy, and Min teaches the object recognition method of the 3D space according to claim 4, wherein the at least one object comprises a first object and a second object, and determining the at least one object in the 3D space according to the recognized result of the 2D images of the areas comprises ([0145-0146], Fig. 1, Fathi teaches processing 2D and 3D information such that there is an established relationship between the 2D information (i.e., the selected object identified from the 2D image) and the 3D information (i.e., relating the 2D object to a 3D point cloud representing the object in the scene). The Examiner notes that Fathi’s teachings include detecting one or more objects.):
Fathi in view of Thakkar, Troy, and Min does not teach determining a probability of the first object and the second object in the first area being identical in response to the first object and the second object being detected in the 2D images in at least two of the capturing directions of the first area; comparing the probability with a probability threshold to obtain a compared result; and determining that the first object and the second object are identical according to the compared result.
Datta teaches determining a probability of the first object and the second object in the first area being identical in response to the first object and the second object being detected in the 2D images in at least two of the capturing directions of the first area; comparing the probability with a probability threshold to obtain a compared result; and determining that the first object and the second object are identical according to the compared result (Fig. 1, [0013-0018], Datta teaches a process of identifying a pairing of tracks (i.e., potentially identical objects extracted from a scene image) obtained from images taken from two different fields of view. A value of similarity is determined for the pair, and if a minimum similarity value is not met, the objects are deemed not corresponding (i.e., not identical). The Examiner notes that the converse is then true: if a calculated similarity is greater than the minimum, then the objects are likely the same.).
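The minimum-similarity test described in this mapping can be sketched as a feature comparison against a threshold (cosine similarity and the threshold value here are illustrative assumptions, not Datta’s specific metric):

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def are_identical(feat_a, feat_b, threshold=0.9):
    """Deem two detections (from different capturing directions) the
    same object when their similarity meets the minimum threshold;
    below the threshold, they are treated as distinct objects."""
    return cosine_similarity(feat_a, feat_b) >= threshold
```

For example, two identical feature vectors score 1.0 and are paired, while orthogonal vectors score 0.0 and are kept as distinct objects.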
Fathi, Thakkar, Troy, Min, and Datta are considered to be analogous to the claimed invention as they are in the same field of obtaining a plurality of images from a 3D scene. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Fathi in view of Thakkar, Troy, and Min such that the plurality of images obtained by that combination are processed according to the logic taught by Datta, such that identical objects can be identified. The motivation for this combination is the ability to process (i.e., remove) objects which appear multiple times during the image taking process.
Claim 10 is the apparatus claim corresponding to claim 5, and is similarly rejected (see [0160], Fig. 3, Fathi).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PROMOTTO TAJRIAN ISLAM whose telephone number is (703)756-5584. The examiner can normally be reached Monday - Friday 8:30 am - 5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan Park can be reached at (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PROMOTTO TAJRIAN ISLAM/Examiner, Art Unit 2669 /CHAN S PARK/Supervisory Patent Examiner, Art Unit 2669