DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The disclosure is objected to because of the following informalities:
On page 1, delete lines 19-22. These sentences are a duplicate of lines 15-22.
On page 9, line 19, “In other some aspects” should read “In other aspects”.
The disclosure is objected to because it contains an embedded hyperlink and/or other form of browser-executable code (see page 2, line 27). Applicant is required to delete the embedded hyperlink and/or other form of browser-executable code; references to websites should be limited to the top-level domain name without any prefix such as http:// or other browser-executable code. See MPEP § 608.01.
The spacing of the lines of the specification is such as to make reading difficult. New application papers with lines 1 1/2 or double spaced (see 37 CFR 1.52(b)(2)) on good quality paper are required.
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
An “obtain module” in Claim 41
A “localize module” in Claim 41
A “select module” in Claim 41
An “align module” in Claim 41
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Each module in Claim 41 is being interpreted as part of a computer program or software, as discussed on page 14, lines 25-33, and page 15, lines 1-5 of the instant application.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 23-25, 27-30, 32, and 40-42 are rejected under 35 U.S.C. 103 as being unpatentable over Aflalo et al. (US Pub No 20190188872) in view of Bao et al. (US Pub No 20220319046), hereinafter Bao.
As to Claim 23, Aflalo teaches a method for aligning point clouds (PCs) of a physical object, the method being performed by an image processing device (see paragraph [0012], “An image processing method and apparatus may employ depth-based weighting in an iterative closest point (ICP) process to generate a coordinate-transformed point cloud”), the method comprising:
obtaining PCs (see paragraph [0013], “first and second point clouds representing respective images of a scene/object from different viewpoints are obtained”),
each comprising data points (see paragraph [0035], "A point cloud is generally defined as a set of data points in some coordinate system")
and each being generated from a respective set of two-dimensional (2D) digital images captured of the physical object (see paragraph [0035], "First camera C1 may capture a first image of a scene including one or more objects O….In embodiments of the present technology, each point of the point cloud represents an image element such as a pixel", where a pixel is a 2D image element, and see paragraph [0036], "If first camera C1 is a stereo camera, it captures both a left image and a right image of the scene", where the left and right images are the set of 2D images);
localizing, in the PCs, data points that correspond to feature points (see paragraph [0052], "Referring still to FIG. 2, a feature points extraction process is performed", and see paragraph [0055], "More specifically, in an original image I from where a first point cloud P is extracted, the method may first find features points fi using a SIFT key-point detector", where SIFT is an algorithm used to extract features such as edges),
selecting at least one of the localized data points in each of the PCs as a reference point (see paragraph [0056], "The number of matching feature points, which are subsequently used in the ICP initialization for initial alignment of the point clouds, is typically several orders of magnitude smaller than the number of points in each point cloud. For instance, in an example, the initialization may only use 10-20 feature points")
wherein the reference points across the PCs represent the same feature (see paragraph [0052], "The feature points extraction entails finding matching keypoints between the first and second point clouds (i.e., between two frames)");
and aligning the PCs with each other by aligning the reference points across the PCs with each other (see paragraph [0057], "Thereafter, the process may find initial rotation R and initial translation t that minimizes distances between the matching feature points using a depth based weighting function").
Aflalo fails to explicitly teach that the features detected in the point clouds correspond to the visual features in the 2D digital images. However, Bao teaches a method of obtaining a 3D point cloud (see abstract) which extracts visual features from 2D images (see paragraph [0295], “A visual feature point refers to a feature point in the image that may be recognized….For example, the visual feature points may include feature points extracted using a histogram of oriented gradient (HOG), a scale-invariant feature transform (SIFT)”)
and then maps these features to 3D point clouds (see paragraph [0316], “The visual feature points extracted from the visual positioning image and the preset 3D point cloud map may be performed a 2D-3D matching, that is, 2D pixel points in the visual positioning image may be matched with 3D points”).
Bao is combinable with Aflalo since both are from the analogous field of 3D image analysis. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the 2D-3D matching taught by Bao with the point cloud alignment taught by Aflalo. The motivation for doing so would be to improve the accuracy of positioning with the 3D point cloud. Bao teaches in paragraph [0344], “due to the high identification degree of the feature point pairs with semantic annotations, the results obtained through pose calculation based on the solution set may be highly accurate, thereby achieving accurate positioning.” Thus, it would have been obvious to combine the 2D-3D feature matching taught by Bao with the point cloud registration technique taught by Aflalo.
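For illustration only, the alignment step cited above from Aflalo paragraph [0057] (finding a rotation R and translation t that minimizes distances between matched feature points) amounts to a rigid fit over the reference points. A minimal NumPy sketch of such a fit using the standard Kabsch/SVD method; all data values below are hypothetical and do not come from either reference:

```python
import numpy as np

def align_reference_points(src, dst):
    """Find the rigid transform (R, t) minimizing ||(R @ src_i + t) - dst_i||
    over matched reference points, via the Kabsch/SVD method."""
    src_c = src - src.mean(axis=0)            # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical matched reference points in two point clouds
rng = np.random.default_rng(0)
dst = rng.random((10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
src = (dst - np.array([0.5, -0.2, 0.1])) @ R_true  # dst seen from another viewpoint
R, t = align_reference_points(src, dst)
aligned = src @ R.T + t
print(np.allclose(aligned, dst))  # True for this noise-free example
```

The fitted (R, t) maps the source reference points onto their destination counterparts; Aflalo's method additionally applies depth-based weights in this step.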
As to Claim 24, Aflalo teaches localizing, in the PCs, data points that correspond to feature points (see paragraph [0052], "Referring still to FIG. 2, a feature points extraction process is performed", and see paragraph [0055], "More specifically, in an original image I from where a first point cloud P is extracted, the method may first find features points fi using a SIFT key-point detector"). However, Aflalo does not explicitly teach mapping the feature points from the 2D digital images to the PCs to aid localizing the data points in the PCs that correspond to the feature points of the visual features comprised in the 2D digital images.
However, Bao teaches that feature points in 2D images can be matched to points in a point cloud (see paragraph [0316], “The visual feature points extracted from the visual positioning image and the preset 3D point cloud map may be performed a 2D-3D matching, that is, 2D pixel points in the visual positioning image may be matched with 3D points”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the 2D-3D matching taught by Bao with the point cloud registration taught by Aflalo. The motivation for doing so would be to improve the accuracy of positioning with the 3D point cloud, as taught by Bao in paragraph [0344].
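The 2D-3D matching characterized above can be illustrated with a textbook pinhole back-projection of a 2D keypoint into 3D, followed by a nearest-neighbour lookup in the point cloud. The intrinsics, keypoint, and cloud below are hypothetical assumptions for illustration, not values from Bao:

```python
import numpy as np

def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project a 2D feature point (u, v) with known depth into 3D
    using pinhole intrinsics (focal lengths fx, fy; principal point cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def localize_in_cloud(point, cloud):
    """Return the index of the cloud point nearest to the back-projected
    feature point -- the 2D-3D match for that keypoint."""
    return int(np.argmin(np.linalg.norm(cloud - point, axis=1)))

# Hypothetical SIFT keypoint at pixel (400, 300) with 2.0 m depth
p = pixel_to_point(400.0, 300.0, 2.0, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
cloud = np.array([[0.0, 0.0, 1.0], [0.31, 0.23, 2.0], [1.0, 1.0, 3.0]])
print(localize_in_cloud(p, cloud))  # → 1, the cloud point nearest the keypoint
```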
As to Claim 25, Aflalo in view of Bao teaches determining weighting values for the reference points (see Aflalo, paragraph [0057], "Thereafter, the process may find initial rotation R and initial translation t that minimizes distances between the matching feature points using a depth based weighting function")
whereby the reference points are weighted higher than any other data points in the PCs when the PCs are aligned with each other (see Aflalo, paragraph [0062], "Feature points that are known to be occluded by other image elements in at least one of the images I or I′ may be assigned lower weights in comparison to non-occluded points located at the same depths. In other words, the weighting function w(·) may be a smaller value in the case of an occlusion", where reference points are non-occluded points, which would be weighed higher in comparison to the occluded points).
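The effect of the cited weighting can be sketched as a simple rule: reference (matched feature) points receive a larger weight than ordinary points, and occluded points a smaller one. The multiplier values below are assumed purely for illustration and do not appear in Aflalo:

```python
def point_weight(is_reference, is_occluded):
    """Illustrative weighting rule in the spirit of the cited passages:
    reference points count more during alignment, occluded points less.
    The factors 10.0 and 0.1 are assumptions, not values from Aflalo."""
    w = 1.0
    if is_reference:
        w *= 10.0   # emphasize matched feature points
    if is_occluded:
        w *= 0.1    # de-emphasize points hidden in one of the views
    return w

print(point_weight(True, False) > point_weight(False, False))  # True
```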
As to Claim 27, Aflalo in view of Bao teaches aligning the PCs with each other comprises applying a per data point based registration algorithm to the PCs (see Aflalo, paragraph [0038], "For instance, the second point cloud may be coordinate-transformed based on the ICP processing", and see paragraph [0002], "Iterative Closest Point (ICP) is an algorithm employed to minimize the difference between two clouds of point").
As to Claim 28, Aflalo in view of Bao teaches wherein the per data point based registration algorithm involves subjecting the PCs to a transformation procedure (see Aflalo, paragraph [0038], "For instance, the second point cloud may be coordinate-transformed based on the ICP processing", where ICP is the per data point based registration algorithm).
As to Claim 29, Aflalo in view of Bao teaches wherein the per data point based registration algorithm is an iterative closest point, ICP, algorithm (see Aflalo, paragraph [0002], "Iterative Closest Point (ICP) is an algorithm employed to minimize the difference between two clouds of point").
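The ICP algorithm characterized in the cited Aflalo paragraphs alternates two steps: match each source point to its nearest destination point, then fit the rigid transform that best aligns the matches. A generic, unweighted textbook sketch with hypothetical data (not code from the reference):

```python
import numpy as np

def icp(src, dst, iters=20):
    """Minimal iterative closest point sketch: repeatedly match each source
    point to its nearest destination point, then fit a rigid transform."""
    cur = src.copy()
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force, for clarity)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # best rigid transform for these correspondences (Kabsch/SVD)
        cur_c = cur - cur.mean(axis=0)
        m_c = matched - matched.mean(axis=0)
        U, _, Vt = np.linalg.svd(cur_c.T @ m_c)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = matched.mean(axis=0) - R @ cur.mean(axis=0)
        cur = cur @ R.T + t
    return cur

# Hypothetical demo: a slightly translated copy of a cloud snaps back onto it
rng = np.random.default_rng(1)
dst = rng.random((50, 3))
src = dst + np.array([0.01, -0.005, 0.008])
print(np.allclose(icp(src, dst), dst, atol=1e-6))
```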
As to Claim 30, Aflalo in view of Bao teaches performing noise removal for each of the PCs before aligning the PCs with each other (see Aflalo, paragraph [0045], "Depth regularization may then be performed 104 on each of the first and second point clouds. This operation may serve to remove noise and implement an edge preserving smoothing of an input depth map associated with a point cloud").
As to Claim 32, Aflalo fails to explicitly teach the visual features comprised in the 2D digital images depict any of: edges, corners, parts, of the physical object. However, Bao teaches that the visual features can include parts of an object (see paragraph [0298], “The feature point pair represents a pair of feature points composed of a visual feature point in the positioning image and a feature point in the corresponding semantic 3D point cloud map. The two feature points in this pair of feature points may indicate a same object or a same part of an object”, where the positioning image is 2D). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the visual features taught by Bao with the point cloud alignment method taught by Aflalo. The motivation for doing so would be to improve the accuracy of positioning with the 3D point cloud, as taught by Bao in paragraph [0344].
As to Claim 40, Aflalo in view of Bao teaches an image processing device (see Aflalo Fig. 1, Image processing apparatus 10) for aligning point clouds (PCs),
the image processing device comprising processing circuitry (see Aflalo, paragraph [0071], “The processing of method 100 may be performed by at least one processor of image processing apparatus 10. The at least one processor may be dedicated hardware circuitry”),
the processing circuitry being configured to cause the image processing device to perform the same steps as disclosed in Claim 23. Thus, the rejection and rationale are analogous to that of Claim 23.
As to Claim 41, Aflalo in view of Bao teaches an image processing device (see Aflalo Fig. 1, Image processing apparatus 10) for aligning point clouds (PCs) of a physical object, the image processing device comprising
an obtain module (see Aflalo, paragraph [0077], “The above-described methods according to the present technology can be implemented in hardware, firmware or via the use of software or computer code that can be stored in a recording medium such as a CD ROM, RAM,”)
configured to obtain PCs (see Aflalo, paragraph [0013]),
each comprising data points and each being generated from a respective set of two-dimensional (2D) digital images captured of the physical object (see Aflalo, paragraph [0035])
a localize module (see Aflalo, paragraph [0077])
configured to localize, in the PCs, data points that correspond to feature points of visual features comprised in the 2D digital images (see Aflalo paragraph [0052] and see Bao, paragraph [0295]);
a select module (see Aflalo, paragraph [0077])
configured to select at least one of the localized data points in each of the PCs as a reference point (see Aflalo, paragraph [0056])
wherein the reference points across the PCs represent the same visual feature comprised in the 2D digital images (see Bao, paragraph [0316]);
and an align module (see Aflalo, paragraph [0077])
configured to align the PCs with each other by aligning the reference points across the PCs with each other (see Aflalo, paragraph [0057]).
As to Claim 42, Aflalo in view of Bao teaches a non-transitory computer readable medium storing a computer program (see Aflalo, paragraph [0075], “Such computer program instructions may be stored in a non-transitory computer readable medium”) for aligning point clouds (PCs) of a physical object
which, when run on processing circuitry of an image processing device (see Aflalo, paragraph [0071]), causes the image processing device to perform the same method disclosed in Claim 23. Thus, the rejection and rationale are analogous to that of Claim 23.
Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Aflalo et al. (US Pub No 20190188872) in view of Bao et al. (US Pub No 20220319046), and further in view of Schroeter (US Pub No 20230121226), hereinafter Schroeter.
As to Claim 26, Aflalo in view of Bao teaches weighting values are determined also for other data points than the reference points (see Aflalo, paragraph [0062], "Feature points that are known to be occluded by other image elements in at least one of the images I or I′ may be assigned lower weights in comparison to non-occluded points located at the same depths. In other words, the weighting function w(·) may be a smaller value in the case of an occlusion").
However, Aflalo in view of Bao fails to teach that data points representing edges and/or corners of the physical object are weighted higher. Aflalo teaches that weights are distributed based on whether they are occluded (see paragraph [0062]). However, Schroeter teaches a method of aligning point clouds (see paragraph [0007]) that assigns weights corresponding to edges higher than weights corresponding to surfaces (see paragraph [0138], “The respective weights may be based on respective geometric features that correspond to the respective clusters, such as one or more edges of a thin vertical structure such as a pole or a tree trunk” and see paragraph [0147], “For example, a higher weight may be assigned to a first cluster of points that corresponds to a pole of a street sign. Additionally, or alternatively, a lower weight may be assigned to a second cluster of points that corresponds to an interior portion of a face of the street sign”, where the pole of the street sign is an edge, and the interior portion of a face would be a surface).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the weights taught by Schroeter with the point cloud alignment method taught by Aflalo. The motivation for doing so would be to determine correspondences between the two point clouds. Schroeter teaches in paragraph [0031], “The geometric features of the object may assist in the determination of one or more correspondences between the first point cloud and a second point cloud.” Thus, it would have been obvious to combine the weighting based on geometric features with the teachings of Aflalo and Bao in order to obtain the invention as claimed in Claim 26.
Claims 34-39 are rejected under 35 U.S.C. 103 as being unpatentable over Aflalo et al. (US Pub No 20190188872) in view of Bao et al. (US Pub No 20220319046), and further in view of Terry et al. (US Pub No 20180041907), hereinafter Terry.
As to Claim 34, Aflalo in view of Bao fails to explicitly teach that each of the sets of 2D digital images has been captured from a respective orbit around the physical object. However, Terry teaches a method of creating a 3D model for a telecommunications site (see abstract), which uses an unmanned aerial vehicle (UAV) to capture pictures by flying in an orbit around the telecommunications site (see Fig 8, flight path 802, and see paragraph [0130], "Once at a certain height and certain distance from the cell tower 12 and the cell site components 14, the UAV 50 can take a circular or 360-degree flight pattern about the cell tower 12, including flying up as well as around the cell tower 12 (denoted by line 804).", and see paragraph [0105], "During the flight, the UAV 50 is configured to take various photos of different aspects of the cell site 10 including the cell tower 12").
Terry is combinable with Aflalo and Bao because all three are from the analogous field of 3D image analysis. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the UAV taught by Terry with the point cloud alignment method taught by Aflalo and Bao. The motivation for doing so would be to increase the efficiency and safety of cell tower inspections. Terry teaches in paragraph [0006], “It would be advantageous to adapt a UAV to take pictures and provide systems and methods for accurate 3D modeling based thereon to again leverage the advantages of UAVs over tower climbers, i.e., safety, climbing speed and overall speed, cost, etc.” Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the UAV taught by Terry with the teachings of Aflalo and Bao in order to obtain the invention as claimed in Claim 34.
As to Claim 35, Aflalo fails to teach each of the sets of 2D digital images has been captured at a different point in time. However, Terry teaches that the sets of 2D images can be taken at several given times (see paragraph [014], "The UAV 50 can be configured to take pictures automatically at given intervals during the flight," where the intervals are at different points of time). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the image capture system by UAV taught by Terry with the point cloud generation system taught by Aflalo and Bao. The motivation for doing so would be to increase efficiency of cell site inspections by virtualization. Terry teaches in paragraph [0007], “With over 200,000 cell sites in the U.S., geographically distributed everywhere, site surveys can be expensive, time-consuming, and complex. The various parent applications associated herewith describe techniques to utilize UAVs to optimize and provide safer site surveys. It would also be advantageous to further optimize site surveys by minimizing travel through virtualization of the entire process.” Thus, it would have been obvious to combine the imaging system taught by Terry with the teachings of Aflalo and Bao in order to obtain the invention as claimed in Claim 35.
As to Claim 36, Aflalo in view of Bao fails to explicitly teach that the sets of 2D digital images have been captured from the same distance, angle, and/or direction with respect to the physical object. However, Terry teaches that the same distance can be maintained for sets of images (see paragraph [0115], “The flight plan can be constrained to an optimum distance from the cell tower. The plurality of photographs can be obtained automatically during the flight plan while concurrently performing a cell site audit of the cell site”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the image capturing technique taught by Terry with the point cloud registration technique taught by Aflalo and Bao. The motivation for doing so would be to ensure that the photos are not taken too far from or too close to the subject. Terry teaches in paragraph [0104], “It has also been determined that the UAV 50 should be flown at a certain distance based on its camera capabilities to obtain the optimal photos, i.e., not too close or too far from objects of interest.” Thus, it would have been obvious to combine the image capturing technique of Terry with the teachings of Aflalo and Bao in order to obtain the invention as claimed in Claim 36.
As to Claim 37, Aflalo fails to teach that the 2D digital images have been captured from an image capturing unit mounted on an unmanned aerial vehicle (UAV). However, Terry teaches a UAV that can be used to capture images (see Fig 4, Unmanned Aerial Vehicle 50, with camera 86, and see paragraph [0105], "During the flight, the UAV 50 is configured to take various photos of different aspects of the cell site 10 including the cell tower 12"). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the UAV taught by Terry with the point cloud registration method taught by Aflalo and Bao. The motivation for doing so would be to increase efficiency and safety of cell tower inspections, as taught by Terry in paragraph [0006]. Thus, it would have been obvious to combine the UAV taught by Terry with the teachings of Aflalo and Bao in order to obtain the invention as claimed in Claim 37.
As to Claim 38, Aflalo fails to teach the physical object is a telecommunications equipment. However, Terry teaches that the physical object can be a telecommunications site (see Fig 1, cell tower 12, and see abstract, "Systems and method for creating, modifying, and utilizing a virtual 360-degree view of a telecommunications site obtaining data capture from the telecommunications site”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the telecommunications 3D model generation method taught by Terry with the point cloud generation system taught by Aflalo and Bao. The motivation for doing so would be to increase the efficiency of cell site inspections by virtualization, as taught by Terry in paragraph [0007]. Thus, it would have been obvious to combine the imaging system taught by Terry with the teachings of Aflalo and Bao in order to obtain the invention as claimed in Claim 38.
As to Claim 39, Aflalo fails to explicitly teach that the physical object is a building, a part of a building, or part of a building interior. However, Terry teaches that the physical object can be part of a building (see abstract, “Systems and method for creating, modifying, and utilizing a virtual 360-degree view of a telecommunications site obtaining data capture from the telecommunications site, wherein the data capture comprises one or more of photos and video; processing the data capture to create a three-dimensional (3D) model of the telecommunications site in a first state, buildings, and constructions therein”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the telecommunications 3D model generation method taught by Terry with the point cloud generation system taught by Aflalo and Bao. The motivation for doing so would be to increase efficiency of cell site inspections by virtualization as taught by Terry in paragraph [0007]. Thus, it would have been obvious to combine the imaging system taught by Terry with the teachings of Aflalo and Bao in order to obtain the invention as claimed in Claim 39.
Claims 31 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Aflalo et al. (US Pub No 20190188872) in view of Bao et al. (US Pub No 20220319046), and further in view of Wu (US Pub No 20160171735), hereinafter Wu.
As to Claim 31, Aflalo in view of Bao fails to teach that the noise removal is performed by a counting-based algorithm. Aflalo teaches an algorithm that reduces noise in depth maps by normalizing depth values (see paragraph [0048]), but does not teach removing points by a counting-based rule.
However, Wu teaches that points representing noise can be removed (see paragraph [0017], “The modification module 13 obtains a first set of remaining corresponding points and a second set of remaining corresponding points by deleting abnormal points (as hereinafter defined) in the first set of initial corresponding points and in the second set of initial corresponding points, according to a preset rule”, where the preset rule corresponds to the claimed counting-based algorithm).
Wu is combinable with Aflalo and Bao because all three are from the analogous field of image analysis and point clouds. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the noise removal taught by Wu with the teachings of Aflalo. The motivation for doing so would be to reduce the time needed to join the point clouds. Wu teaches in paragraph [0016], “In order to initially join the first group of point cloud and the second group of point cloud quickly, the joining module 12 can filter the first group of point cloud and the second group of point cloud by removing points representing noise”. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the noise removal taught by Wu with the teachings of Aflalo and Bao in order to obtain the invention as claimed in Claim 31.
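Wu's quoted passage does not spell out the preset rule; one common counting-based rule, assumed here purely for illustration, keeps a point only if enough other points fall within a fixed radius of it:

```python
import numpy as np

def remove_sparse_points(cloud, radius=0.1, min_neighbors=3):
    """Counting-based noise removal sketch: delete 'abnormal' points that
    have fewer than min_neighbors other points within radius. The radius
    and threshold are assumed values, not taken from Wu."""
    dists = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=2)
    counts = (dists < radius).sum(axis=1) - 1   # exclude the point itself
    return cloud[counts >= min_neighbors]

# Hypothetical cloud: a dense cluster plus one isolated noise point
rng = np.random.default_rng(2)
cluster = rng.normal(0.0, 0.01, size=(10, 3))
noise = np.array([[5.0, 5.0, 5.0]])
cloud = np.vstack([cluster, noise])
print(remove_sparse_points(cloud).shape)  # → (10, 3): the isolated point is dropped
```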
As to Claim 33, Aflalo in view of Bao fails to teach wherein the reference points represent any of: a centroid, a corner, an edge, of the physical object.
However, Wu teaches a method of aligning point clouds (see abstract) that uses points associated with corners (see paragraph [0029], “The feature point matching method requires corresponding feature points to be acquired from the first group of point cloud and from the second group of point cloud by using a corner detection algorithm”). Wu also teaches a centroid (see paragraph [0018], “center of mass”).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the corner reference points taught by Wu with the teachings of Bao and Aflalo. The motivation for doing so would be to reduce the complexity of joining point clouds. Wu teaches in paragraph [0003], “The groups of incomplete point cloud can be joined to generate a complete group of point cloud relating to the object. However, generating a single complete group of point cloud comprising all the groups is very complicated.” Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the corner detection taught by Wu with the teachings of Aflalo and Bao in order to obtain the invention as claimed in Claim 33.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SOUMYA THOMAS whose telephone number is (571)272-8639. The examiner can normally be reached M-F 8:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.T./ Examiner, Art Unit 2664
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664