Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 1, 19, and 20 are objected to because of the following informality: each of these claims recites generating “the virtual viewpoint image,” which lacks antecedent basis, although the Examiner understands the intended meaning and scope. The Examiner suggests amending the phrase to read --a virtual viewpoint image--. Appropriate correction is required.
Claim 18 is objected to because of the following informality: this claim recites the acronym “an NFT,” which should be preceded by its generic terminology. The Examiner suggests amending the phrase to read --a non-fungible token (NFT)--. Appropriate correction is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-13, 15, 16, 19, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kurz (U.S. Patent No. 11,544,865), hereinafter Kurz.
Regarding claim 1, Kurz teaches an image processing apparatus comprising one or more memories storing instructions and one or more processors that execute the instructions to (figs 2 or 3; column 2, line 63 through column 3, line 2):
obtain a three-dimensional model of an object (column 11, lines 24-35 and 56-67; column 13, lines 57-60; column 13, line 65 through column 14, line 2; a 3D model of a subject person and surrounding environment is captured);
obtain information indicating a situation of the object (column 12, lines 12-18; column 13, line 65 through column 14, line 9; information is obtained indicating a situation of the subject person in the environment); and
generate the virtual viewpoint image of the object according to the three-dimensional model of the object and the information indicating the situation of the object (fig 5, virtual viewpoint depictions 516/520/522; column 12, lines 46-57; column 14, lines 10-32; virtual viewpoint depictions are generated showing the situation of the subject person in the environment).
Regarding claim 2, Kurz teaches the image processing apparatus according to claim 1, wherein the one or more processors execute the instructions to: estimate a posture of the object based on captured images of the object captured by each of a plurality of cameras (column 11, lines 24-35 and 39-66; column 13, lines 55-67).
Regarding claim 3, Kurz teaches the image processing apparatus according to claim 2, wherein the object is a person, and the one or more processors execute the instructions to estimate the posture of the object based on a position of a joint of the person (column 11, lines 24-35; column 13, line 57 through column 14, line 20).
Regarding claim 4, Kurz teaches the image processing apparatus according to claim 1, wherein the one or more processors execute the instructions to: generate the information indicating the situation of the object based on a captured image of the object (column 11, lines 24-35; column 12, lines 12-18; column 13, lines 57-67; column 14, lines 5-20).
Regarding claim 5, Kurz teaches the image processing apparatus according to claim 1, wherein the one or more processors execute the instructions to: generate the information indicating the situation of the object based on a result of determining a posture of the object (column 12, lines 12-18; column 14, lines 5-20).
Regarding claim 6, Kurz teaches the image processing apparatus according to claim 1, wherein the information indicating the situation of the object is information calculated based on a change in a position, a posture, a shape, or an appearance of the object across a plurality of times (column 12, lines 12-28 and 58-67; column 14, lines 5-20 and 47-60).
Regarding claim 7, Kurz teaches the image processing apparatus according to claim 1, wherein the information indicating the situation of the object is information indicating an event that has occurred in relation to the object (column 12, lines 12-28 and 58-67; column 14, lines 5-20 and 47-60; an event such as movement and/or posture changes is indicated).
Regarding claim 8, Kurz teaches the image processing apparatus according to claim 1, wherein the information indicating the situation of the object is information indicating movement of the object, a change in a posture of the object, a change in a shape of the object, or a result of classifying the situation (column 12, lines 12-28 and 58-67; column 14, lines 5-20 and 47-60; movement and/or posture changes are indicated, and a result of classifying the posture is indicated).
Regarding claim 9, Kurz teaches the image processing apparatus according to claim 1, wherein the information indicating the situation of the object is information indicating that a specific movement or posture change has occurred in the object (column 12, lines 12-28 and 58-67; column 14, lines 5-20 and 47-60; movement and/or posture changes are indicated).
Regarding claim 10, Kurz teaches the image processing apparatus according to claim 1, wherein the information indicating the situation of the object is information indicating a result of analyzing movement of the object (column 12, lines 12-28 and 58-67; column 14, lines 5-20 and 47-60; the indication is the result of analyzing and classifying movement and posture of the object).
Regarding claim 11, Kurz teaches the image processing apparatus according to claim 1, wherein the information indicating the situation of the object is information indicating a movement speed or an amount of deformation of a shape of the object (column 12, lines 12-28 and 58-67; column 14, lines 5-20 and 47-60; deformation of joint positions resulting from movement and/or posture changes is indicated).
Regarding claim 12, Kurz teaches the image processing apparatus according to claim 1, wherein the information indicating the situation of the object is information specifying positions of the object in time series, or information indicating a posture of the object (column 12, lines 12-28 and 58-67; column 14, lines 5-20 and 47-60; posture and positions in time series are indicated).
Regarding claim 13, Kurz teaches the image processing apparatus according to claim 1, wherein the one or more processors execute the instructions to: change a display pertaining to the object in the virtual viewpoint image in accordance with the information indicating the situation of the object (column 12, lines 46-57; column 13, lines 3-10 and 21-32; column 14, lines 10-32; visual cues are added to the display to indicate the situation of the subject person).
Regarding claim 15, Kurz teaches the image processing apparatus according to claim 1, wherein the one or more processors execute the instructions to: add an object item in a periphery of the object in accordance with the information indicating the situation of the object (column 12, lines 46-57; column 13, lines 3-10 and 21-32; column 14, lines 10-32; visual cues and/or objects are added in the periphery of the subject person to indicate the situation).
Regarding claim 16, Kurz teaches the image processing apparatus according to claim 1, wherein the one or more processors execute the instructions to: superimpose the information indicating the situation of the object on the virtual viewpoint image (column 12, lines 46-57; column 13, lines 3-10 and 21-32; column 14, lines 10-32; visual cues and/or objects are superimposed on the virtual viewpoint image to indicate the situation).
Regarding claim 19, the limitations of this claim substantially correspond to the limitations of claim 1; thus they are rejected on similar grounds.
Regarding claim 20, the limitations of this claim substantially correspond to the limitations of claim 1; thus they are rejected on similar grounds.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Kurz, in view of Zhang et al. (U.S. Patent Application Publication No. 2017/0064214), hereinafter Zhang.
Regarding claim 14, Kurz teaches the image processing apparatus according to claim 1, wherein the one or more processors execute the instructions to: change visual cues for the object in the virtual viewpoint image in accordance with the information indicating the situation of the object (column 12, lines 46-57; column 13, lines 3-10 and 21-32; column 14, lines 10-32; visual cues and/or objects are added to the virtual viewpoint image to indicate the situation).
Kurz does not teach changing a size or a color of the object.
However, in a similar field of endeavor, Zhang teaches an apparatus configured to obtain a three-dimensional model of an object, obtain information indicating a situation of the object regarding changes in its posture and/or position, and generate a virtual viewpoint image according to the 3D model and the information indicating the situation (figs 3B and 20; paragraph 126; paragraphs 129 and 130; paragraph 134; paragraphs 136-138), wherein the apparatus is further configured to change a size or a color of the object in the virtual viewpoint image in accordance with the situation of the object (paragraph 307, lines 1-15).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the color or size change taught by Zhang with the visual cues of Kurz because this helps the subject person have a better understanding of how the posture needs to be adjusted, thereby increasing the usability and accuracy of the posture analysis procedures (see, for example, Zhang, paragraph 307, the last 10 lines).
Claims 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kurz, in view of Shear et al. (U.S. Patent Application Publication No. 2023/0376581), hereinafter Shear.
Regarding claim 17, Kurz teaches the image processing apparatus according to claim 1, wherein the one or more processors execute the instructions to: determine whether a user corresponds to a display mode, and when the user is determined to correspond to the display mode, generate a virtual viewpoint image of the object, according to the information indicating the situation of the object, that corresponds to the display mode (column 12, lines 46-57; column 13, lines 3-10 and 21-32; column 14, lines 10-32; visual cues and/or objects are added to the virtual viewpoint image to indicate the situation, and can comprise various display modes such as virtual object addition, message indication, information superimposition, etc.).
Kurz does not teach determining whether a user has a right to the display mode.
However, in a similar field of endeavor, Shear teaches an apparatus configured to obtain a three-dimensional model of an object, and obtain information indicating a situation of the object regarding changes in its posture and/or position (paragraph 427, lines 1-9 and the last 11 lines; paragraph 433, lines 1-17; paragraph 459), wherein the apparatus is further configured to determine whether a user has a right to a display mode (paragraph 276, lines 1-14 and the last 18 lines; paragraph 282, the last 19 lines; paragraph 292).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the rights determination of Shear with the subject person analysis and display of Kurz because this facilitates the vetting and authorization of access to the subject person’s analyzed information, thereby providing a much higher degree of computing resource trustworthiness and increasing the accuracy, reliability, and security of the user’s situation evaluation (see, for example, Shear, paragraph 163; paragraph 178, lines 1-15; paragraph 433, lines 1-12).
Regarding claim 18, Kurz teaches the image processing apparatus according to claim 1, wherein the one or more processors execute the instructions to: determine whether a user corresponds to the information indicating the situation of the object, and when the user is determined to correspond to the information indicating the situation of the object, generate a virtual viewpoint image of the object according to the information indicating the situation of the object (column 12, lines 46-57; column 13, lines 3-10 and 21-32; column 14, lines 10-32).
Kurz does not teach determining whether a user owns an NFT corresponding to the information.
However, in a similar field of endeavor, Shear teaches an apparatus configured to obtain a three-dimensional model of an object, and obtain information indicating a situation of the object regarding changes in its posture and/or position (paragraph 427, lines 1-9 and the last 11 lines; paragraph 433, lines 1-17; paragraph 459), wherein the apparatus is further configured to determine whether a user owns an NFT corresponding to the information (paragraph 162, lines 1-15; paragraph 1591).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the rights determination of Shear with the subject person analysis and display of Kurz because this facilitates the vetting and authorization of access to the subject person’s analyzed information, thereby providing a much higher degree of computing resource trustworthiness and increasing the accuracy, reliability, and security of the user’s situation evaluation (see, for example, Shear, paragraph 163; paragraph 178, lines 1-15; paragraph 433, lines 1-12).
Conclusion
The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Kim (U.S. Patent Application Publication No. 2013/0195330); Apparatus and method for estimating joint structure of human body.
Vinayak (U.S. Patent No. 9,383,895); Methods and systems for interactively producing shapes in three-dimensional space.
Ohba (U.S. Patent Application Publication No. 2017/0011519); Information processor and information processing method.
Utsunomiya (U.S. Patent Application Publication No. 2015/0310629); Motion information processing device.
Elwazer (U.S. Patent Application Publication No. 2017/0351910); Automatic body movement recognition and association system.
Yoshida (U.S. Patent Application Publication No. 2022/0395193); Height estimation apparatus, height estimation method, and non-transitory computer readable medium storing program.
Yoshida (U.S. Patent Application Publication No. 2022/0366716); Person state detection apparatus, person state detection method, and non-transitory computer readable medium storing program.
Yoshida (U.S. Patent Application Publication No. 2024/0087353); Image processing apparatus, image processing method, and non-transitory computer readable medium storing image processing program.
Baek (U.S. Patent Application Publication No. 2021/0158032); System, apparatus and method for recognizing motions of multiple users.
Kennewick, Sr. (U.S. Patent Application Publication No. 2022/0383578); 3D avatar generation and robotic limbs using biomechanical analysis.
Elwazer (U.S. Patent Application Publication No. 2023/0177881); Automatic body movement recognition and association system including smoothing, segmentation, similarity, pooling, and dynamic modeling.
Yin (U.S. Patent Application Publication No. 2023/0237677); CPR posture evaluation model and system.
Shen (U.S. Patent Application Publication No. 2023/0316640); Image processing apparatus, image processing method, and storage medium.
Yang (U.S. Patent No. 12,424,027); Joint motion estimation based method for estimating continuous human postures.
Asayama (U.S. Patent Application Publication No. 2023/0316543); Skeleton estimation device, skeleton estimation method, and gymnastics scoring support system.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID T WELCH whose telephone number is (571)270-5364. The examiner can normally be reached on Monday-Thursday, 8:30-5:30 EST, and alternate Fridays, 9:00-2:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached on 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
DAVID T. WELCH
Primary Examiner
Art Unit 2613
/DAVID T WELCH/Primary Examiner, Art Unit 2613