DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Status of Claims
Claims 1-17 are now pending.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d) to application JP2022-066157, filed 04/13/2022. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55. As such, the effective filing date of the application is 04/13/2022.
Joint Inventors
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.
Such claim limitation(s) is/are:
“Position/attitude estimation unit” in claims 1-3, 12 and 13. Support for the limitation is found in paragraph [0081] of applicant’s specification.
“Target object shape estimation unit” in claims 1, 4-7, 9-11, 14 and 15. Support for the limitation is found in paragraph [0081] of applicant’s specification.
“Position/attitude determination unit” in claims 1, 12 and 13. Support for the limitation is found in paragraph [0081] of applicant’s specification.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 1, it recites:
“An information processing device comprising:
a position/attitude estimation unit that, taking each of points included in point cloud data generated based on a sensing result for a target object as contact points, estimates, for each of the points, candidates for a position and an attitude of a hand that grips the target object;
a target object shape estimation unit that estimates a shape of the target object based on a distribution of the candidates for the position and the attitude estimated for each of the points;
and a position/attitude determination unit that determines the position and the attitude of the hand gripping the target object based on the shape of the target object estimated.”
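Purely for illustration (and forming no part of the claim construction or the record), the three recited steps can be sketched as a generic computational pipeline. Every function name and all placeholder logic below are hypothetical and do not reproduce applicant’s disclosed algorithm:

```python
import numpy as np

def estimate_grasp_candidates(points):
    """For each point (treated as a contact point), produce a candidate hand
    position and attitude. Hypothetical placeholder logic: the candidate
    position is the point itself and the attitude is the unit direction from
    the cloud centroid to the point."""
    centroid = points.mean(axis=0)
    directions = points - centroid
    norms = np.linalg.norm(directions, axis=1, keepdims=True)
    attitudes = directions / np.where(norms == 0, 1.0, norms)
    return points, attitudes

def estimate_shape(candidate_positions):
    """Estimate a coarse shape descriptor from the distribution of the
    candidates, here simply its extent along each coordinate axis."""
    return candidate_positions.max(axis=0) - candidate_positions.min(axis=0)

def determine_grasp(candidate_positions, candidate_attitudes, shape_extent):
    """Select one candidate based on the estimated shape; here, the candidate
    closest to the center of the candidate distribution."""
    center = candidate_positions.mean(axis=0)
    i = np.argmin(np.linalg.norm(candidate_positions - center, axis=1))
    return candidate_positions[i], candidate_attitudes[i]

# Toy stand-in for point cloud data from a sensing result.
point_cloud = np.random.default_rng(0).normal(size=(100, 3))
pos, att = estimate_grasp_candidates(point_cloud)
extent = estimate_shape(pos)
grip_pos, grip_att = determine_grasp(pos, att, extent)
```

The sketch only mirrors the claim’s data flow (per-point candidates, a shape estimate from the candidate distribution, a final selection from the shape), not any particular estimation method.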
The limitations as drafted, under their broadest reasonable interpretation, cover performance of the limitations in the human mind. That is, other than reciting that the steps are implemented on “an information processing device,” nothing in the claim precludes the steps from practically being performed in the mind. For example, but for the “information processing device” language, the limitation of “taking each of points included in point cloud data generated based on a sensing result for a target object as contact points, estimates, for each of the points, candidates for a position and an attitude of a hand that grips the target object” in the context of this claim, given the broadest reasonable interpretation, encompasses the act of a user receiving point cloud data and mentally determining the best location to grip an object. Similarly, the limitations of “estimates a shape of the target object…” and “determines the position and the attitude of the hand…” can be performed mentally or through pen and paper calculations. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, claim 1 presents the additional elements of “an information processing device,” “a position/attitude estimation unit,” “a target object shape estimation unit,” and “a position/attitude determination unit.” The components are recited at a high level of generality (i.e., as generic computer components performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional generic computer components to perform the claimed steps amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Dependent claims 2-16 are similarly rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The dependent claims have been given the full two-part analysis, including analysis of the additional limitations both individually and in combination. The dependent claims, when analyzed individually and in combination, are also held to be patent ineligible under 35 U.S.C. 101. The additional recited limitations of the dependent claims fail to establish that the claims do not recite an abstract idea because they merely further narrow the abstract idea. Accordingly, these elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
Regarding claim 17, it recites:
“An information processing method performed by an arithmetic processing device, the information processing method comprising: estimating, having taken each of points included in point cloud data generated based on a sensing result for a target object as contact points, candidates for a position and an attitude of a hand that grips the target object, for each of the points; estimating a shape of the target object based on a distribution of the candidates for the position and the attitude estimated for each of the points; and determining the position and the attitude of the hand gripping the target object based on the shape of the target object estimated.”
The limitations as drafted, under their broadest reasonable interpretation, cover performance of the limitations in the human mind. That is, other than reciting that the steps are implemented on an “arithmetic processing device,” nothing in the claim precludes the steps from practically being performed in the mind. For example, but for the “arithmetic processing device” language, the limitation of “estimating, having taken each of points included in point cloud data generated based on a sensing result for a target object as contact points, candidates for a position and an attitude of a hand that grips the target object, for each of the points,” in the context of this claim, given the broadest reasonable interpretation, encompasses the act of a user receiving point cloud data and mentally determining the best location to grip an object. Similarly, the limitations of “estimating a shape of the target object…” and “determining the position and the attitude of the hand…” can be performed mentally or through pen and paper calculations. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, claim 17 presents the additional element of an “arithmetic processing device.” The component is recited at a high level of generality (i.e., as a generic computer component performing a generic computer function) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional generic computer components to perform the claimed steps amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1, 3-6, 16 and 17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Nammoto (US 20180085923 A1), hereinafter Nammoto.
Regarding claim 1, Nammoto discloses:
An information processing device comprising:
a position/attitude estimation unit that, taking each of points included in point cloud data generated based on a sensing result for a target object as contact points, estimates, for each of the points, candidates for a position and an attitude of a hand that grips the target object (see at least [0048]: “Here, another robot control device (for example, related-art robot control device) that is different from the robot control device 30, for example, calculates the position and the attitude of the gripping unit H taken when the gripping unit H is made to grip the target object O, according to the shape of the target object O represented by the generated whole-circumference point cloud, and causes the position and the attitude thus calculated to coincide with the position and the attitude of the gripping unit H, and can then causes the gripping unit H to grip the target object O.”)
a target object shape estimation unit that estimates a shape of the target object based on a distribution of the candidates for the position and the attitude estimated for each of the points (see at least [0047]: “The robot control device 30 generates a whole-circumference point cloud which is a three-dimensional point cloud representing the shape of an entire surface visible from outside, of the target object O placed on the top surface of the workbench TB, based on the generated four three-dimensional point clouds. As a method for generating a three-dimensional point cloud representing the shape of the entire surface based on four three-dimensional point clouds representing the shapes of parts of the surface that are different from each other, a known method or a method yet to be developed may be used.”)
and a position/attitude determination unit that determines the position and the attitude of the hand gripping the target object based on the shape of the target object estimated (see at least [0048]: “Here, another robot control device (for example, related-art robot control device) that is different from the robot control device 30, for example, calculates the position and the attitude of the gripping unit H taken when the gripping unit H is made to grip the target object O, according to the shape of the target object O represented by the generated whole-circumference point cloud, and causes the position and the attitude thus calculated to coincide with the position and the attitude of the gripping unit H, and can then causes the gripping unit H to grip the target object O.”)
Regarding claim 3, Nammoto discloses:
The information processing device according to claim 2, wherein the position/attitude estimation unit further derives a confidence level for each of the candidates for the position and the attitude estimated (see at least [0079]: “The evaluation value calculation unit 46 specifies respective points of intersection between straight lines virtually dividing the area into the sub-areas, as a plurality of virtual gripping positions. The evaluation value calculation unit 46 may also be configured to specify a virtual gripping position by other methods. For example, the evaluation value calculation unit 46 may be configured to specify, based on an operation accepted from the user, one or more positions in the virtual space VR that are designated by that operation, as virtual gripping positions.”)
Regarding claim 4, Nammoto discloses:
The information processing device according to claim 3, wherein the target object shape estimation unit estimates the shape of the target object based on the distribution of the candidates for the position and the attitude for which the confidence level is at least a threshold (see at least [0047]: “The robot control device 30 generates a whole-circumference point cloud which is a three-dimensional point cloud representing the shape of an entire surface visible from outside, of the target object O placed on the top surface of the workbench TB, based on the generated four three-dimensional point clouds. As a method for generating a three-dimensional point cloud representing the shape of the entire surface based on four three-dimensional point clouds representing the shapes of parts of the surface that are different from each other, a known method or a method yet to be developed may be used.”)
Regarding claim 5, Nammoto discloses:
The information processing device according to claim 1, wherein the target object shape estimation unit estimates a distribution of a grip center of the target object based on the distribution of the candidates for the position and the attitude, and estimates the shape of the target object based on the distribution of the grip center of the target object (see at least [0094]: “The evaluation value calculation unit 46 calculates a normal vector based on a plurality of points forming the whole-circumference point cloud which are included in a first spherical area with a predetermined radius around the target first contact point candidate. Here, each first spherical area around each first contact point candidate is an example of one or more areas corresponding to the gripping unit of the robot. The plurality of points forming the whole-circumference point cloud which are included in each first spherical area is an example of a partial point cloud included in one or more areas corresponding to the gripping unit of the robot. The evaluation value calculation unit 46 calculates a cone of friction of the target first contact point candidate with the calculated normal vector serving as a center axis (direction of cone of friction), and approximates the calculated cone of friction as a polyhedron.”)
Regarding claim 6, Nammoto discloses:
The information processing device according to claim 5, wherein the target object shape estimation unit estimates the grip center of the target object based on a geometric shape of the hand in the candidates for the position and the attitude (see at least [0073]: “The virtual gripping unit coordinate system HC is a three-dimensional coordinate system corresponding to the gripping unit coordinate system. Therefore, the predetermined position on the virtual gripping unit VH is the position corresponding to the position of the center of gravity of the gripping unit H, and in this example, the position of the center of gravity of the virtual gripping unit VH.”)
Regarding claim 16, Nammoto discloses:
The information processing device according to claim 1, wherein the sensing result includes a sensing result from a range sensor (see at least [0031]: “Each of the image pickup units 11 to 14 is, for example, a stereo camera having a CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) or the like which converts condensed light into an electrical signal.”)
Regarding claim 17, Nammoto discloses:
An information processing method performed by an arithmetic processing device, the information processing method comprising:
estimating, having taken each of points included in point cloud data generated based on a sensing result for a target object as contact points, candidates for a position and an attitude of a hand that grips the target object, for each of the points (see at least [0048]: “Here, another robot control device (for example, related-art robot control device) that is different from the robot control device 30, for example, calculates the position and the attitude of the gripping unit H taken when the gripping unit H is made to grip the target object O, according to the shape of the target object O represented by the generated whole-circumference point cloud, and causes the position and the attitude thus calculated to coincide with the position and the attitude of the gripping unit H, and can then causes the gripping unit H to grip the target object O.”)
estimating a shape of the target object based on a distribution of the candidates for the position and the attitude estimated for each of the points (see at least [0047]: “The robot control device 30 generates a whole-circumference point cloud which is a three-dimensional point cloud representing the shape of an entire surface visible from outside, of the target object O placed on the top surface of the workbench TB, based on the generated four three-dimensional point clouds. As a method for generating a three-dimensional point cloud representing the shape of the entire surface based on four three-dimensional point clouds representing the shapes of parts of the surface that are different from each other, a known method or a method yet to be developed may be used.”)
and determining the position and the attitude of the hand gripping the target object based on the shape of the target object estimated (see at least [0048]: “Here, another robot control device (for example, related-art robot control device) that is different from the robot control device 30, for example, calculates the position and the attitude of the gripping unit H taken when the gripping unit H is made to grip the target object O, according to the shape of the target object O represented by the generated whole-circumference point cloud, and causes the position and the attitude thus calculated to coincide with the position and the attitude of the gripping unit H, and can then causes the gripping unit H to grip the target object O.”)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Nammoto in view of Ku et al. (US 20220016765 A1), hereinafter Ku.
Regarding claim 2, Nammoto discloses the information processing device according to claim 1.
Nammoto does not explicitly disclose, but Ku, in an analogous field of endeavor, teaches wherein the position/attitude estimation unit estimates the candidates for the position and the attitude of the hand using machine learning (see at least [0073]: “S300 can include determining a grasp pose for each candidate grasp location (or a subset thereof). The grasp pose can be determined in 2D (e.g., based on the image alone), 3D (e.g., based on the point cloud and/or depth information from depth images), 6D (e.g., (x, y, z) position and orientations, determined based on the point cloud and/or depth information from depth images), and/or otherwise determined. The grasp pose can be calculated, learned (e.g., determined using a neural network, clustering algorithm, etc.), looked up (e.g., based on the object, based on the candidate grasp location on the object, etc.), and/or otherwise determined. The grasp pose can be the same or different for different end effectors.”)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine the invention of Nammoto with the machine learning algorithm of Ku. This is because, as stated within [0018]-[0021] of Ku’s disclosure: “First, variants of the system and method enable grasping an object from a bin of objects, wherein objects can be overlapping with other objects and in any random pose; from a shelf of objects; and/or from any other suitable grasping surface. Second, variants of the system and method reduce accumulated errors from pose estimation. Additionally, the system is capable of grasping objects that are deformable and have different appearances instead of grasping objects using item pose estimation and/or grasp annotations. Third, variants of the system and method enables object grasping for objects lacking a fixed appearance by using invariant features as inference features (e.g., instead of object poses or computer vision structural features)… Fourth, since variants of the method and system use invariant features for object grasping, the system and method can bypass intermediate 6D object pose estimation. For example, grasps can be computed without knowing or computing the 3D pose of the grasped object.”
Claims 7-15 are rejected under 35 U.S.C. 103 as being unpatentable over Nammoto in view of Owada et al. (US 20240371033 A1), hereinafter Owada.
Regarding claim 7, Nammoto discloses the information processing device according to claim 5.
Nammoto does not explicitly disclose, but Owada, in an analogous field of endeavor, teaches:
wherein the target object shape estimation unit derives an orthogonal basis of the distribution of the grip center of the target object through principal component analysis on the distribution of the grip center, and estimates the shape of the target object based on a distribution width for each point included in the point cloud data in each of vector directions of the orthogonal basis (see at least [0071]: “The identification processing execution unit 205 specifies a reference shape similar to each target object, based on the generated normal vectors and normal vector related information included in the feature information about the reference shape. Specifically, the identification processing execution unit 205 calculates a degree of similarity between at least one of a direction of the normal vectors generated for each plane and a histogram distribution, and at least one of a direction of normal vectors of a surface constituting the reference shape and a histogram distribution. The identification processing execution unit 205 specifies the reference shape similar to the target object included in the object region, based on the calculated degree of similarity.”)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine the invention of Nammoto with the methods of Owada. This is because, as stated within [0024] of Owada’s disclosure: “The present disclosure is able to provide an object recognition device, an object recognition method, a non-transitory computer-readable medium, and an object recognition system that are able to accurately specify a position and pose of a target object.”
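For illustration only, the principal-component-analysis step recited in claim 7 (an orthogonal basis of a grip-center distribution, with a distribution width measured along each basis direction) can be sketched as follows. The toy data and all variable names are hypothetical and are not drawn from applicant’s specification or from either reference:

```python
import numpy as np

# Toy stand-in for a distribution of estimated grip centers (N x 3),
# deliberately elongated so the principal directions differ in spread.
rng = np.random.default_rng(1)
grip_centers = rng.normal(size=(200, 3)) * np.array([5.0, 2.0, 0.5])

# PCA: eigendecomposition of the covariance matrix yields an
# orthogonal basis for the distribution.
centered = grip_centers - grip_centers.mean(axis=0)
cov = centered.T @ centered / (len(centered) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
basis = eigvecs[:, ::-1]                 # columns: 1st, 2nd, 3rd principal vectors

# Distribution width along each principal direction: the spread of the
# projections of every point onto that basis vector.
proj = centered @ basis
widths = proj.max(axis=0) - proj.min(axis=0)
```

Because the basis comes from an eigendecomposition of a symmetric covariance matrix, its columns are mutually orthogonal unit vectors, matching the orthogonal-basis language of the claim.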
Regarding claim 8, the combination of Nammoto and Owada teaches the information processing device according to claim 7.
Nammoto does not explicitly disclose, but Owada, in an analogous field of endeavor, teaches:
wherein the orthogonal basis includes a first principal component vector, a second principal component vector, and a third principal component vector orthogonal to each other (see at least [0058]: “The point cloud acquisition unit 201 inputs the distance image being output from the sensor unit 110. The point cloud acquisition unit 201 converts the distance image from the camera coordinate system into the world coordinate system, and generates a three-dimensional point cloud in which each point indicates a position on the three-dimensional space. The three-dimensional point cloud is data representing a set of points indicated by using three-dimensional orthogonal coordinates in the world coordinate system. In other words, the three-dimensional point cloud is data representing a set of three-dimensional coordinates indicating a position on the three-dimensional space of each point included in a distance image.”)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine the invention of Nammoto with the methods of Owada. This is because, as stated within [0024] of Owada’s disclosure: “The present disclosure is able to provide an object recognition device, an object recognition method, a non-transitory computer-readable medium, and an object recognition system that are able to accurately specify a position and pose of a target object.”
Regarding claim 9, the combination of Nammoto and Owada teaches the information processing device according to claim 7.
Nammoto does not explicitly disclose, but Owada, in an analogous field of endeavor, teaches:
wherein the target object shape estimation unit estimates the shape of the target object based on a magnitude relationship between: the distribution width for each point included in the point cloud data in each of the vector directions; and a maximum grip width of the hand (see at least [0071]: “The identification processing execution unit 205 specifies a reference shape similar to each target object, based on the generated normal vectors and normal vector related information included in the feature information about the reference shape. Specifically, the identification processing execution unit 205 calculates a degree of similarity between at least one of a direction of the normal vectors generated for each plane and a histogram distribution, and at least one of a direction of normal vectors of a surface constituting the reference shape and a histogram distribution. The identification processing execution unit 205 specifies the reference shape similar to the target object included in the object region, based on the calculated degree of similarity.”)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine the invention of Nammoto with the methods of Owada. This is because, as stated in [0024] of Owada’s disclosure: “The present disclosure is able to provide an object recognition device, an object recognition method, a non-transitory computer-readable medium, and an object recognition system that are able to accurately specify a position and pose of a target object.”
Regarding claim 10, the combination of Nammoto and Owada teaches the information processing device according to claim 9.
Nammoto does not explicitly disclose, but Owada, in an analogous field of endeavor, teaches:
wherein the target object shape estimation unit estimates the shape of the target object by approximating the shape of the target object to a basic shape of any one of a sphere, a cylinder, or a rectangular plate (see at least [0046]: “The specification unit 2 specifies a reference shape similar to the target object, based on the three-dimensional point cloud included in the specified object region and feature information about the reference shape. The reference shape may include, for example, a cuboid, a cylinder, a sphere, and the like. The feature information may include normal vectors related information being related to normal vectors of a surface constituting the reference shape.”)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine the invention of Nammoto with the methods of Owada. This is because, as stated in [0024] of Owada’s disclosure: “The present disclosure is able to provide an object recognition device, an object recognition method, a non-transitory computer-readable medium, and an object recognition system that are able to accurately specify a position and pose of a target object.”
Regarding claim 11, the combination of Nammoto and Owada teaches the information processing device according to claim 9.
Nammoto does not explicitly disclose, but Owada, in an analogous field of endeavor, teaches:
wherein the target object shape estimation unit estimates the shape of the target object by approximating the shape of the target object to any one of an ellipsoid, a cylinder having a radius that varies along a height direction, or a rectangular plate having a thickness that varies from region to region of a main surface (see at least [0046]: “The specification unit 2 specifies a reference shape similar to the target object, based on the three-dimensional point cloud included in the specified object region and feature information about the reference shape. The reference shape may include, for example, a cuboid, a cylinder, a sphere, and the like. The feature information may include normal vectors related information being related to normal vectors of a surface constituting the reference shape.”)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine the invention of Nammoto with the methods of Owada. This is because, as stated in [0024] of Owada’s disclosure: “The present disclosure is able to provide an object recognition device, an object recognition method, a non-transitory computer-readable medium, and an object recognition system that are able to accurately specify a position and pose of a target object.”
Regarding claim 12, the combination of Nammoto and Owada teaches the information processing device according to claim 10.
Nammoto does not explicitly disclose, but Owada, in an analogous field of endeavor, teaches:
wherein the position/attitude determination unit determines the position and the attitude of the hand based on a constraint condition on a degree of freedom of the position and the attitude set for each of the basic shapes (see at least [0083]: “The position and pose derivation unit 204 approximates the target object included in the object region by the reference shape specified by the object identification unit 203, estimates an axis from the approximated reference shape, and estimates a pose of the target object, based on the estimated axis and a reference axis of the reference shape specified by the object identification unit 203. The reference axis is an axis constituting the reference shape specified by the object identification unit 203 when a virtual object of the reference shape is placed on the horizontal plane. The reference axis is predetermined for each reference shape. When the reference shape is a cylinder, the reference axis is in a center axis (pivot) direction. When the reference shape is a cuboid, the reference axis is in a normal direction of a surface having the largest area. The position and pose derivation unit 204 calculates an angle difference between the estimated axis and the reference axis, and estimates a pose of the target object by acquiring a roll angle, a pitch angle, and a yaw angle, based on the calculated angle difference. The position and pose derivation unit 204 records, for the target object having the pose estimated, the pose in association with the kind ID and the individual ID in the identification recording unit 206.”)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine the invention of Nammoto with the methods of Owada. This is because, as stated in [0024] of Owada’s disclosure: “The present disclosure is able to provide an object recognition device, an object recognition method, a non-transitory computer-readable medium, and an object recognition system that are able to accurately specify a position and pose of a target object.”
Regarding claim 13, the combination of Nammoto and Owada teaches the information processing device according to claim 10.
Nammoto does not explicitly disclose, but Owada, in an analogous field of endeavor, teaches:
wherein when the shape of the target object is not approximated to the basic shape, the position/attitude determination unit determines the position and the attitude of the hand from among the candidates for the position and the attitude estimated by the position/attitude estimation unit (see at least [0162]: “The shape identification unit 608 specifies, for each color region specified by the color information identification unit 607, an object region including an RGB-D point cloud indicating a position of a surface of a target object, based on a distance between the RGB-D point clouds of the RGB-D point clouds included in the color region. Note that, when the color ID is not assigned to all the RGB-D point clouds, the shape identification unit 608 determines that an RGB image is not acquired, and performs processing similar to that of the object identification unit 203 according to the second example embodiment without considering color information.”)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine the invention of Nammoto with the methods of Owada. This is because, as stated in [0024] of Owada’s disclosure: “The present disclosure is able to provide an object recognition device, an object recognition method, a non-transitory computer-readable medium, and an object recognition system that are able to accurately specify a position and pose of a target object.”
Regarding claim 14, the combination of Nammoto and Owada teaches the information processing device according to claim 7.
Nammoto does not explicitly disclose, but Owada, in an analogous field of endeavor, teaches:
wherein the target object shape estimation unit estimates the shape of the target object by fitting the distribution for each of the points included in the point cloud data of the target object to a predetermined shape model (see at least [0065]: “The reference shape includes a so-called primitive shape such as a cuboid, a cylinder, and a sphere, for example. The feature information includes, for example, normal vector related information indicating information being related to normal vectors of a surface constituting the reference shape. The normal vector related information is information including at least one of a reference direction of normal vectors of a surface constituting a reference shape and a reference histogram distribution of the normal vectors. The reference direction is a direction of the normal vectors of the surface constituting the reference shape. The reference histogram distribution is a histogram distribution of the normal vectors of the surface constituting the reference shape. Note that the feature information may include length information about each side constituting the surface of the reference shape. Further, the reference shape may include a geometric shape other than a cuboid, a cylinder, and a sphere. In the description below, the reference shape may be described as a primitive shape.”)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine the invention of Nammoto with the methods of Owada. This is because, as stated in [0024] of Owada’s disclosure: “The present disclosure is able to provide an object recognition device, an object recognition method, a non-transitory computer-readable medium, and an object recognition system that are able to accurately specify a position and pose of a target object.”
Regarding claim 15, the combination of Nammoto and Owada teaches the information processing device according to claim 14.
Nammoto does not explicitly disclose, but Owada, in an analogous field of endeavor, teaches:
wherein the target object shape estimation unit fits the position and the attitude of the target object with a position and an attitude of the predetermined shape model by superimposing the orthogonal basis of the distribution of the grip center of the target object and the orthogonal basis of the predetermined shape model (see at least [0065]: “The reference shape includes a so-called primitive shape such as a cuboid, a cylinder, and a sphere, for example. The feature information includes, for example, normal vector related information indicating information being related to normal vectors of a surface constituting the reference shape. The normal vector related information is information including at least one of a reference direction of normal vectors of a surface constituting a reference shape and a reference histogram distribution of the normal vectors. The reference direction is a direction of the normal vectors of the surface constituting the reference shape. The reference histogram distribution is a histogram distribution of the normal vectors of the surface constituting the reference shape. Note that the feature information may include length information about each side constituting the surface of the reference shape. Further, the reference shape may include a geometric shape other than a cuboid, a cylinder, and a sphere. In the description below, the reference shape may be described as a primitive shape.”)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine the invention of Nammoto with the methods of Owada. This is because, as stated in [0024] of Owada’s disclosure: “The present disclosure is able to provide an object recognition device, an object recognition method, a non-transitory computer-readable medium, and an object recognition system that are able to accurately specify a position and pose of a target object.”
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ELIZABETH NELESKI whose telephone number is (571)272-6064. The examiner can normally be reached from 10:00 a.m. to 6:00 p.m.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, THOMAS WORDEN can be reached at (571) 272-4876. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/E.R.N./Examiner, Art Unit 3658
/JASON HOLLOWAY/Primary Examiner, Art Unit 3658