DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-15 are rejected under 35 U.S.C. 103 as being unpatentable over Miller (US 20040175039 A1) in view of Adachi et al. (US 20220086413 A1).
Regarding claim 1, Miller discloses a feature setting apparatus ([0007] automated method and system for generating an optimal 3D model of a target multifeatured object when only partial source) comprising:
at least one memory storing instructions ([0125] main memory contains a group of modules that control the operation of the CPU); and
at least one processor configured to execute the instructions ([0123] system is directed by a central-processing unit ("CPU")) to:
perform control to display a first face image and a second face image ([0031] the source data of the target face is illustrated as a single 2D photograph of the target 3D face taken from an unknown viewpoint.);
receive a movement operation of a pointer ([0031] a viewpoint-invariant search is conducted in which each 3D avatar is notionally subjected to all possible rigid motions, and the features projected into 2D); and
in a case where the pointer on one of the first face image and the second face image moves ([0031] positions of the projected avatar features are compared to the feature positions in the target photograph.),
Adachi discloses performing control to display a marker at a position on an other face image corresponding to a position of the pointer to be moved in conjunction with movement of the pointer ([0036] marker's coordinate calculating unit generates information regarding the coordinate parameter of the marker information to be displayed in the actual space based on the virtual camera parameters related to the virtual camera accumulated in the virtual-camera information holding unit and the three-dimensional space information (background information) held by the actual space information holding unit).
Miller and Adachi are combinable because they are from the same field of endeavor.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automated 3D model generation of Miller to include performing control to display a marker at a position on an other face image corresponding to a position of the pointer to be moved in conjunction with movement of the pointer, as described by Adachi.
The motivation for doing so would have been to support an object in capturing an image used for generating a virtual viewpoint image (Adachi, [0006]).
Therefore, it would have been obvious to combine Miller and Adachi to obtain the invention as specified in claim 1.
Regarding claim 2, Miller is silent as to wherein the processor is further configured to execute the instructions to perform control to display the marker in correspondence with a position of the pointer on an other face image by using movement of the pointer on one of the first face image and the second face image as a trigger.
Adachi discloses wherein the processor is further configured to execute the instructions to perform control to display the marker in correspondence with a position of the pointer on an other face image by using movement of the pointer on one of the first face image and the second face image as a trigger ([0038] image processing apparatus and the marker output device, display parameters such as coordinate information, marker shape, color, size, and display information necessary for marker display).
Miller and Adachi are combinable because they are from the same field of endeavor.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automated 3D model generation of Miller to include wherein the processor is further configured to execute the instructions to perform control to display the marker in correspondence with a position of the pointer on an other face image by using movement of the pointer on one of the first face image and the second face image as a trigger, as described by Adachi.
The motivation for doing so would have been to support an object in capturing an image used for generating a virtual viewpoint image (Adachi, [0006]).
Therefore, it would have been obvious to combine Miller and Adachi to obtain the invention as specified in claim 2.
Regarding claim 3, Miller discloses wherein the processor is further configured to execute the instructions to perform control to display information indicating a positional relationship between a position pointed by the pointer and a component of a face ([0032] The mesh points of the avatar are then deformed in 3D to minimize the distances between the reverse-projected features of the photograph and the corresponding avatar features.).
Regarding claim 4, Miller discloses wherein the information includes a distance between a position pointed by the pointer and a position of a predetermined component of the face ([0042] the distance metrics used to measure the quality of fit between the reverse projections of feature items from the source imagery).
Regarding claim 5, Miller discloses wherein the information includes a line connecting a position pointed by the pointer and a position of a predetermined component of the face ([0037] user interface is provided which allows the user to identify feature points individually or to mark groups of points delineated by a spline curve, or to select a set of points forming a line.).
Regarding claim 6, Miller discloses wherein the processor is further configured to execute the instructions to perform control to display the information for a plurality of predetermined components of the face ([0038] automated detection of feature items on the 2D source projection is performed by searching for specific features of a face, such as eyeballs, nostrils, and lips).
Regarding claim 7, Miller discloses wherein the information includes a cross line centered on a position pointed by the pointer ([0041] these feature items will be points, curves, or subareas and the source projection will be a photograph on which the position of these items can be measured, either manually or automatically.).
Regarding claim 8, Miller discloses wherein the processor is further configured to execute the instructions to, in a case where the pointer exists in a portion having a drawing area smaller than that of other portions of a surface of a face among portions of the surface of the face, reduce a movement speed of the pointer as compared with a case where the pointer exists in the other portions ([0041] Since position in 3D space is a vector parameter, the MMSE estimate for translation position is closed form; when substituted back into the squared error function, it gives an explicit function in terms of only the rotations.).
Regarding claim 9, Miller discloses wherein the processor is further configured to execute the instructions to receive a change operation for a parameter for drawing the first face image, at least the first face image is drawn based on a three-dimensional model of a face ([0041] achieved by calculating the position of the avatar in 3D space that best matches the set of selected feature items in the 2D source projection),
the parameter includes at least one of a parameter for controlling rotation of the three-dimensional model, a parameter for setting a viewpoint for drawing the three- dimensional model as the first face image, or a parameter for setting an arrangement position of the face in the first face image, ([0041] position calculation may be based on the computation of the conditional mean estimate of the reverse projection positions in 3D of the 2D feature items, followed by the computation of MMSE estimates for the rotation and translation parameters in 3D, given the estimates of the 3D positions of the feature items) and
the processor is further configured to execute the instructions to perform control to display the first face image drawn according to the parameter ([0125] user interface generates words or graphical images on the display to prompt action by the user, and accepts commands from the keyboard and/or position-sensing device).
Regarding claim 10, Miller discloses wherein the first face image is an image of a face facing a first direction, and the second face image is an image of a face facing a second direction ([0064] Once the rigid motion (i.e., rotation and translation) that results in the best fit between 2D source imagery and a selected 3D avatar is determined, the 3D avatar may be deformed in order to improve its correspondence with the source imagery).
Regarding claim 11, Miller discloses wherein the processor is further configured to execute the instructions to:
acquire an input face image, and perform control to display the input face image, the first face image, and the second face image. ([0037] a user interface is provided which allows the user to identify feature points individually or to mark groups of points delineated by a spline curve, or to select a set of points forming a line.)
Regarding claim 12, Miller discloses wherein the processor is further configured to execute the instructions to:
acquire an input face image; and estimate a direction in which a face of the input face image faces; and perform control to display the first face image with an estimated direction as the first direction ([0064] Once the rigid motion (i.e., rotation and translation) that results in the best fit between 2D source imagery and a selected 3D avatar is determined, the 3D avatar may be deformed in order to improve its correspondence with the source imagery).
Regarding claim 13, Miller discloses wherein the second face image is an image in which a texture of a three-dimensional model of a face is developed two-dimensionally ([0036] The feature items in the 2D source projection which are used for matching are selected by hand or via automated methods).
Regarding claim 14, Miller discloses a feature setting method ([0007] automated method and system for generating an optimal 3D model of a target multifeatured object when only partial source) comprising:
displaying a first face image and a second face image ([0031] the source data of the target face is illustrated as a single 2D photograph of the target 3D face taken from an unknown viewpoint.);
receiving a movement operation of a pointer ([0031] a viewpoint-invariant search is conducted in which each 3D avatar is notionally subjected to all possible rigid motions, and the features projected into 2D);
in a case where the pointer on one of the first face image and the second face image moves ([0031] positions of the projected avatar features are compared to the feature positions in the target photograph.); and
Adachi discloses displaying a marker at a position on an other face image corresponding to a position of the pointer to be moved in conjunction with movement of the pointer ([0036] marker's coordinate calculating unit generates information regarding the coordinate parameter of the marker information to be displayed in the actual space based on the virtual camera parameters related to the virtual camera accumulated in the virtual-camera information holding unit and the three-dimensional space information (background information) held by the actual space information holding unit).
Miller and Adachi are combinable because they are from the same field of endeavor.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automated 3D model generation of Miller to include displaying a marker at a position on an other face image corresponding to a position of the pointer to be moved in conjunction with movement of the pointer, as described by Adachi.
The motivation for doing so would have been to support an object in capturing an image used for generating a virtual viewpoint image (Adachi, [0006]).
Therefore, it would have been obvious to combine Miller and Adachi to obtain the invention as specified in claim 14.
Regarding claim 15, Miller discloses a non-transitory computer readable medium having a program stored thereon for causing a computer to execute ([0125] main memory contains a group of modules that control the operation of the CPU) comprising:
a display control step of controlling to display a first face image and a second face image ([0031] the source data of the target face is illustrated as a single 2D photograph of the target 3D face taken from an unknown viewpoint.);
a movement operation reception step of receiving a movement operation of a pointer ([0031] a viewpoint-invariant search is conducted in which each 3D avatar is notionally subjected to all possible rigid motions, and the features projected into 2D);
wherein in the display control step, in a case where the pointer on one of the first face image and the second face image moves ([0031] positions of the projected avatar features are compared to the feature positions in the target photograph.); and
Adachi discloses a marker is controlled to be displayed at a position on an other face image corresponding to a position of the pointer to be moved in conjunction with movement of the pointer ([0036] marker's coordinate calculating unit generates information regarding the coordinate parameter of the marker information to be displayed in the actual space based on the virtual camera parameters related to the virtual camera accumulated in the virtual-camera information holding unit and the three-dimensional space information (background information) held by the actual space information holding unit).
Miller and Adachi are combinable because they are from the same field of endeavor.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automated 3D model generation of Miller such that a marker is controlled to be displayed at a position on an other face image corresponding to a position of the pointer to be moved in conjunction with movement of the pointer, as described by Adachi.
The motivation for doing so would have been to support an object in capturing an image used for generating a virtual viewpoint image (Adachi, [0006]).
Therefore, it would have been obvious to combine Miller and Adachi to obtain the invention as specified in claim 15.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIVANG I PATEL whose telephone number is (571) 272-8964. The examiner can normally be reached M-F 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached on (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHIVANG I PATEL/Primary Examiner, Art Unit 2615