DETAILED ACTION
Status of the Claims
The filing dated 8/22/24 is entered. Claims 1-15 are pending.
Foreign Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statements
The information disclosure statement (IDS) submitted on 8/22/24 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kamhi, US-20160284135.
In regards to claim 1, Kamhi discloses an information processing device (Par. 0001 computer) comprising: a scan controller (Fig. 2, 201 depth reconstruction module + 202 processing module) configured to cause a scanner to scan an outer appearance of an object disposed on a table and generate a scan image (Par. 0016-0018 scanning an object and outputting an image); a data generator (Fig. 2, 201 depth reconstruction module + 202 processing module) configured to generate 3-dimensional model data from the scan image (Par. 0016-0018 generating a 3d model from a scanned object/image); a video generator (Fig. 2, 204 animation module) configured to generate a video in which a 3-dimensional model based on the 3-dimensional model data is disposed in a virtual space (Par. 0024 generating a 3d animated image of the object); and a receiver (Fig. 2, 203 rigging module) configured to receive a designation input of joint positions for the 3-dimensional model data (Par. 0021 rigging module receives annotated joint information from scanning the object, i.e. a designation input of joint positions), wherein the data generator (Fig. 2, 201 depth reconstruction module + 202 processing module) is configured to generate the 3-dimensional model data in association with joint information on the joint positions designated by the designation input (Par. 0016-0021 generating a 3d model using rigging/joint information), and wherein the video generator (Fig. 2, 204 animation module) is configured to generate the video of the 3-dimensional model in which a joint of the 3-dimensional model data is operated based on the joint information (Par. 0022 generating a 3d animated image using rigging/joints).
In regards to claim 15, Kamhi discloses a non-transitory computer-readable storage medium (Par. 0013 “Computing device 100 further includes one or more processors 102, memory devices 104…”) that stores a computer-executable program comprising instructions which (Par. 0039-0040 programs running on computer program products), when executed by a computer, cause the computer to function as: a scan controller (Fig. 2, 201 depth reconstruction module + 202 processing module) configured to cause a scanner to scan an outer appearance of an object disposed on a table and generate a scan image (Par. 0016-0018 scanning an object and outputting an image); a data generator (Fig. 2, 201 depth reconstruction module + 202 processing module) configured to generate 3-dimensional model data from the scan image (Par. 0016-0018 generating a 3d model from a scanned object/image); a video generator (Fig. 2, 204 animation module) configured to generate a video in which a 3-dimensional model based on the 3-dimensional model data is disposed in a virtual space (Par. 0024 generating a 3d animated image of the object); and a receiver (Fig. 2, 203 rigging module) configured to receive a designation input of joint positions for the 3-dimensional model data (Par. 0021 rigging module receives annotated joint information from scanning the object, i.e. a designation input of joint positions), wherein the data generator (Fig. 2, 201 depth reconstruction module + 202 processing module) is configured to generate the 3-dimensional model data in association with joint information on the joint positions designated by the designation input (Par. 0016-0021 generating a 3d model using rigging/joint information), and wherein the video generator (Fig. 2, 204 animation module) is configured to generate the video of the 3-dimensional model in which a joint of the 3-dimensional model data is operated based on the joint information (Par. 0022 generating a 3d animated image using rigging/joints).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Kamhi, US-20160284135, in view of Jutan, US-20150022516.
In regards to claim 2, Kamhi discloses the receiver (Fig. 2, 203 rigging module).
Kamhi does not disclose expressly a manipulation receiver configured to allow a user of the information processing device to input a manipulation; and a display, wherein the receiver is configured to receive the designation input of the joint positions by the display superimposing and displaying an indicator including a plurality of indexes indicating joint positions on the 3-dimensional model data, and the manipulation receiver receiving a manipulation for matching positions of the plurality of indexes with positions of predetermined joints of the 3-dimensional model data.
Jutan discloses a manipulation receiver configured to allow a user of the information processing device to input a manipulation; and a display (Par. 0003 3-D computing environment), wherein the receiver is configured to receive the designation input of the joint positions by the display superimposing and displaying an indicator including a plurality of indexes indicating joint positions on the 3-dimensional model data (Par. 0032-0033 joints and bones of a 3d mesh are manipulated by a user; Fig. 13 and 14; Par. 0093-0098 manipulating joints of a rig associated with 3d model), and the manipulation receiver receiving a manipulation for matching positions of the plurality of indexes with positions of predetermined joints of the 3-dimensional model data (Par. 0032-0033 joints and bones of a 3d mesh are manipulated by a user; Fig. 13 and 14; Par. 0093-0098 manipulating joints of a rig associated with 3d model).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the 3d model and rigging created by Kamhi can be manipulated in the manner of Jutan. The motivation for doing so would have been to fine-tune the 3d model and rigging.
In regards to claim 3, Jutan, further, discloses the indicator is selected from different types of indicators in accordance with a type of the object (Par. 0048 using different types of rig elements/blocks selected from a library; Par. 0059-0061 manipulating blocks which are part of a humanoid).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the 3d model and rigging created by Kamhi can be manipulated in the manner of Jutan. The motivation for doing so would have been to fine-tune the 3d model and rigging.
In regards to claim 4, Jutan, further, discloses the indicator is commonly used irrespective of a difference in a type of the object (Par. 0048 using different types of rig elements/blocks selected from a library which include firearms and other types of objects; Par. 0059-0061 manipulating blocks).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the 3d model and rigging created by Kamhi can be manipulated in the manner of Jutan. The motivation for doing so would have been to fine-tune the 3d model and rigging.
Claims 5-8 are rejected under 35 U.S.C. 103 as being unpatentable over Kamhi, US-20160284135, and Jutan, US-20150022516, in view of Baran, "Automatic Rigging and Animation of 3D Characters".
In regards to claim 5, Kamhi and Jutan do not disclose expressly the data generator is configured to: determine embedding amounts of joint reference positions based on information regarding depths of the 3-dimensional model data at the joint positions designated by the designation input, each of the joint reference positions being a position at which a corresponding joint in the 3-dimensional model data is to be operated; determine the joint reference positions based on the joint positions and the embedding amounts; and associate the joint reference positions as the joint information with the 3-dimensional model data.
Baran discloses determining embedding amounts of joint reference positions based on information regarding depths of the 3-dimensional model data at the joint positions designated by the designation input, each of the joint reference positions being a position at which a corresponding joint in the 3-dimensional model data is to be operated; determining the joint reference positions based on the joint positions and the embedding amounts; and associating the joint reference positions as the joint information with the 3-dimensional model data (Section 3 discloses skeleton containing joints embedding using the 3d model).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the skeleton/joint embedding of Kamhi can be done using the method of Baran. The motivation for doing so would have been that it “allows a user to go from a static mesh to an animated character quickly and effortlessly” (Section 6).
In regards to claim 6, Baran further discloses the data generator is configured to set each of the joint reference positions in an intermediate part of a depth range at the corresponding joint in the 3-dimensional model data (Section 3 discloses skeleton containing joints embedding using the 3d model, which includes joints within an intermediate part of the 3d model).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the skeleton/joint embedding of Kamhi can be done using the method of Baran. The motivation for doing so would have been that it “allows a user to go from a static mesh to an animated character quickly and effortlessly” (Section 6).
In regards to claim 7, Baran further discloses the data generator is configured to determine associated operation ranges based on the joint reference positions, each of the associated operation ranges being a range to be operated in the 3-dimensional model data in conjunction with an operation of a joint in the 3-dimensional model data when the joint is operated in the 3-dimensional model data, and wherein the joint information further includes the associated operation ranges (Section 3 discloses skeleton containing joints embedding using the 3d model, which includes joints within an intermediate part of the 3d model, which is within the operation range, i.e. the interior of the model).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the skeleton/joint embedding of Kamhi can be done using the method of Baran. The motivation for doing so would have been that it “allows a user to go from a static mesh to an animated character quickly and effortlessly” (Section 6).
In regards to claim 8, Baran further discloses wherein the associated operation ranges include a range specified as a range between the joint reference positions (Section 3 discloses skeleton containing joints embedding using the 3d model, which includes joints within an intermediate part of the 3d model, which is within the operation range, i.e. the interior of the model).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the skeleton/joint embedding of Kamhi can be done using the method of Baran. The motivation for doing so would have been that it “allows a user to go from a static mesh to an animated character quickly and effortlessly” (Section 6).
Claims 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Kamhi, US-20160284135, and Jutan, US-20150022516, in view of Finn, US-20140210856.
In regards to claim 9, Kamhi and Jutan do not disclose expressly a plurality of markers are provided on a surface of a pedestal of the table on which the object is disposed, the plurality of markers being configured such that positions and rotational directions thereof in a 3-dimensional space are acquirable, and wherein the data generator is configured to extract information regarding the plurality of markers from the scan image, specify a position of the table on which the object is disposed, and correct a position and a direction of the 3-dimensional model data based on the position of the table.
Finn discloses a plurality of markers are provided on a surface of a pedestal of the table on which the object is disposed, the plurality of markers being configured such that positions and rotational directions thereof in a 3-dimensional space are acquirable, and wherein the data generator is configured to extract information regarding the plurality of markers from the scan image, specify a position of the table on which the object is disposed, and correct a position and a direction of the 3-dimensional model data based on the position of the table (Fig. 4A, 401, 403, and 405; Par. 0011 scanning an object with markers to place the 3d model in real world coordinates for AR).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the scanning of Kamhi can include markers in the manner of Finn. The motivation for doing so would have been to place the scanned object into real world coordinates for AR (Finn Par. 0011).
In regards to claim 10, Finn further discloses wherein, in a correction of the position and the direction of the 3-dimensional model data, the direction of the 3-dimensional model data is corrected to be oriented in a direction matching a front face of the indicator when the indicator is superimposed and displayed on the 3-dimensional model data (Fig. 4A, 401, 403, and 405; Par. 0011 scanning an object with markers to place the 3d model in real world coordinates for AR for object pose).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the scanning of Kamhi can include markers in the manner of Finn. The motivation for doing so would have been to place the scanned object into real world coordinates for AR (Finn Par. 0011).
In regards to claim 11, Finn further discloses the plurality of markers are Augmented Reality markers (Fig. 4A, 401, 403, and 405; Par. 0011 scanning an object with markers to place the 3d model in real world coordinates for AR for object pose).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the scanning of Kamhi can include markers in the manner of Finn. The motivation for doing so would have been to place the scanned object into real world coordinates for AR (Finn Par. 0011).
Claims 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Kamhi, US-20160284135, and Jutan, US-20150022516, in view of Bruce, US-8848201.
In regards to claim 12, Kamhi discloses a depth sensor configured to acquire depth information (Fig. 2, 211 depth sensor).
Kamhi and Jutan do not disclose expressly a depth sensor configured to acquire depth information on a range in which the scanner performs scanning, wherein the scan controller is configured to control the position and the angle of the scanner based on the depth information acquired by the depth sensor.
Bruce discloses a depth sensor configured to acquire depth information on a range in which the scanner performs scanning (Col. 6, 53 "... the 3D information may be obtained using a coarse, quick scan of the object ... may include depth and colour information" when read in view of Col. 6, 3-65, Col. 9, 57 - Col. 10, 22, Col. 11, 61 - Col. 12, 27), wherein the scan controller is configured to control the position and the angle of the scanner based on the depth information acquired by the depth sensor (Col. 8, 1 "Given regions of interest, a plan for viewing the region(s) of interest and obtaining additional information may be determined... the instructions include instructions for positioning a scanning component to determine the additional information from one or more viewpoints." when read in view of Col. 6, 66 - Col. 7, 20, Col. 8, 1-34, Col. 8, 58 - Col. 9, 40, Col. 11, 61 - Col. 12, 27).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the scanning of Kamhi can be performed in the manner of Bruce. The motivation for doing so would have been to provide easier scanning of the object.
In regards to claim 13, Bruce further discloses the scan controller is configured to set a range including the object disposed on the table and the table and control the position and the angle of the scanner so that the set range is scanned (Col. 6, 53 "... the 3D information may be obtained using a coarse, quick scan of the object ... may include depth and colour information" when read in view of Col. 6, 3-65, Col. 9, 57 - Col. 10, 22, Col. 11, 61 - Col. 12, 27; Col. 8, 1 "Given regions of interest, a plan for viewing the region(s) of interest and obtaining additional information may be determined... the instructions include instructions for positioning a scanning component to determine the additional information from one or more viewpoints." when read in view of Col. 6, 66 - Col. 7, 20, Col. 8, 1-34, Col. 8, 58 - Col. 9, 40, Col. 11, 61 - Col. 12, 27).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the scanning of Kamhi can be performed in the manner of Bruce. The motivation for doing so would have been to provide easier scanning of the object.
In regards to claim 14, Bruce further discloses a depth sensor configured to acquire depth information on a range in which the scanner performs scanning, wherein the scan controller is configured to set a range including the object disposed on the table and the table and cause the scanner to only scan the set range (Col. 6, 53 "... the 3D information may be obtained using a coarse, quick scan of the object ... may include depth and colour information" when read in view of Col. 6, 3-65, Col. 9, 57 - Col. 10, 22, Col. 11, 61 - Col. 12, 27; Col. 8, 1 "Given regions of interest, a plan for viewing the region(s) of interest and obtaining additional information may be determined... the instructions include instructions for positioning a scanning component to determine the additional information from one or more viewpoints." when read in view of Col. 6, 66 - Col. 7, 20, Col. 8, 1-34, Col. 8, 58 - Col. 9, 40, Col. 11, 61 - Col. 12, 27).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the scanning of Kamhi can be performed in the manner of Bruce. The motivation for doing so would have been to provide easier scanning of the object.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CORY A ALMEIDA whose telephone number is (571)270-3143. The examiner can normally be reached M-Th 9AM-7:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nitin (Kumar) Patel can be reached at (571) 272-7677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CORY A ALMEIDA/Primary Examiner, Art Unit 2628 1/30/26