Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 1-5 are pending.
Response to Arguments
As an initial matter, the 35 U.S.C. 112, sixth paragraph, claim interpretation and the 35 U.S.C. 112 rejections have been withdrawn in view of applicant's amendments. Furthermore, the 35 U.S.C. 101 rejections of claims 1-3 have been withdrawn in view of applicant's amendments.
Regarding the 35 U.S.C. 103 rejections, applicant's arguments with respect to claims 1-5 have been considered but are moot because they are directed to the newly amended claim limitations, which change the scope of the claims as a whole and are subject to new grounds of rejection and interpretation.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Vilcovsky et al. (US 20140226000, hereinafter “Vil”) in view of Wiesel et al. (US 20190050427, hereinafter “Wiesel”).
Re claim 1, Vil teaches a three-dimensional model generation device comprising:
A memory that is configured to store computer executable instructions; and at least one processor that is configured to execute the computer executable instructions to perform operations (see [0235], processor and memory).
Comprising: generating an upper-body three-dimensional model based on captured-image information on an upper body of a subject that is acquired by an imaging device (see [0008]-[0009], capturing a body of a subject and creating a 3D model/mask based on a camera or cameras).
Analyzing gender of the subject (see [0017], gender of the user).
Analyzing clothing of the subject based on the captured-image information ([0142] An example of a 2D input to the model generator is provided below, where it is desired to create a model of the user's blue shirt shown in FIG. 9. FIG. 10 is an example of a 2D model or mask of the shirt, without the color information. Instead, a grey scale mask of the selected object, in this case the shirt, is generated and can be used later with any applied color. The texture of the shirt is preserved in this manner, so it is relatively easy to manipulate the color or the texture or even change the boundary of the model to create a different object).
Based on the gender of the subject, identified based on the analyzing of the gender of the subject, and based on a type of clothing of the subject, identified based on the analyzing of the clothing of the subject, selecting a lower-body 3D model corresponding to the upper body of the subject from a plurality of lower-body 3D models set previously ([0226] Moreover, as also shown in 1705, since the system is able to identify the user and also calculate parameters of the user, e.g., weight, height, etc., the system may be able to access a database of available items that would be recommended to the user based on these parameters. More specifically, if the user has recorded two trials of two different shirt within the same session, the system can decipher that the user is interested in purchasing a shirt and make either alternative recommendations, i.e., different shirts, or complimentary recommendations, e.g., specific pants that go well with the tried on shirts. Also, since the system can identify the shirt and the brand of the shirt, it may be able to offer specific incentive from that manufacturer, as exemplified in 1706).
Vil does not explicitly teach analyzing gender of the subject based on the captured-image information.
However, Wiesel teaches analyzing gender of the subject based on the captured-image information ([0525] Optionally, the user may manually enter his gender, and/or his height, and/or other optional user parameters (e.g., weight; shirt size; pants size; or the like); and these optional parameters may further be utilized for enhancing or preparing the user of the image for virtual dressing of clothes or other products. Optionally, such data or parameters may be determined autonomously by the system, based on one or more other data; for example, based on the user's name (e.g., “Adam” indicating a male; “Eve” indicating a female), based on the user's appearance (e.g., identifying anatomical structure or features that are typical of a male, or a female); utilization of other objects to infer user height (e.g., if the photo also shows a standard-size item, such as an electrical socket on a wall, or a banana or an orange located on a shelf behind the user, or a published book whose dimensions may be obtained from an online source, or a smartphone or tablet whose dimensions may be known or estimated). Optionally, user height may be estimated by the system, based on a ratio between head-size and body-size of the user; or based on a ratio between head-size and size of other body organ or body part (e.g., arm; leg)) and (see [0539], generating the top and lower halves of the body of the user).
Vil and Wiesel teach claim 1. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Vil's model generation system, which analyzes the gender of a subject, to explicitly include analyzing the gender based on captured-image information, as taught by Wiesel, as the references are in the analogous art of analyzing input data for selecting 3D models. An advantage of the modification is that it explicitly analyzes captured-image data to determine the gender of a subject, thus circumventing the need to manually enter the gender of a subject for image processing.
Claims 4-5 recite limitations similar in scope to claim 1 and are rejected for at least the reasons above.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Vil in view of Wiesel, and further in view of Takakuwa (JP 2021002087).
Re claim 2, Vil and Wiesel teach claim 1. Vil and Wiesel do not explicitly teach further comprising: selecting a motion of the selected lower-body 3D model according to information indicating a situation of a virtual space.
However, Takakuwa teaches selecting a motion of the selected lower-body 3D model according to information indicating a situation of a virtual space ([0031] Here, as an example, it is assumed that a general distribution user is only permitted to perform mobile distribution by the information processing system 10 on the video distribution platform. As shown in FIG. 3, in mobile distribution, only the upper body of the character of the distribution user is displayed, and the lower body is not displayed. Therefore, the required 3D data of the character is only the upper body data), ([0034] As shown in FIG. 4, the whole body of the character of the distribution user is displayed in the studio distribution. Therefore, the required 3D data of the character is whole body data), ([0022] The information processing system according to the embodiment of the present invention is an information processing system that distributes a moving image including an animation of a character object generated based on a movement of a distribution user, and includes one or a plurality of computer processors), ([0026] In studio distribution, the movement of the entire body of the distribution user (actor) shall be reflected in the character in real time by shooting the marker attached to the distribution user with the camera installed in the studio and using known motion capture technology), ([0047] As an example, if the first part is the upper body, the second part is the lower body), ([0049] As an example, the second avatar information is 3D data of the whole body of the character generated based on the first part information which is the information about the upper body and the second part information about the lower body. And, as an example, this whole body 3D data is 3D data of the contents required for studio distribution), ([0081] For example, the avatar information of other parts may be automatically determined from the avatar data of the upper body, or the avatar information candidates of other parts that match the avatar data of the upper body may be presented to the player. As a result, it is expected to be used for Vtuber applications that support both studio distribution and mobile distribution), and ([0094] The specific step S120 specifies the second part information regarding the second part of the character that is not included in the first avatar information, based on the first part information acquired in the acquisition step. The specific step S120 can be executed by the specific unit 420 described above).
Vil, Wiesel, and Takakuwa teach claim 2. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Vil and Wiesel's 3D model generation system to explicitly include selecting a motion of the selected lower-body 3D model according to information indicating a situation of a virtual space, as taught by Takakuwa, as the references are in the analogous art of 3D modeling of the upper and lower body of a subject. An advantage of the modification is that it explicitly selects the motion of the lower-body 3D model for improved visualization of a 3D model of the captured subject.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Vil in view of Wiesel and Takakuwa, and further in view of Boulay et al. (“Posture Recognition with a 3D Human Model,” hereinafter “Boulay”).
Re claim 3, Vil, Wiesel, and Takakuwa teach claim 2. Takakuwa further teaches estimating a state of the subject by analyzing the upper body of the subject based on the captured-image information ([0026] In studio distribution, the movement of the entire body of the distribution user (actor) shall be reflected in the character in real time by shooting the marker attached to the distribution user with the camera installed in the studio and using known motion capture technology), ([0047] As an example, if the first part is the upper body, the second part is the lower body), and ([0081] For example, the avatar information of other parts may be automatically determined from the avatar data of the upper body, or the avatar information candidates of other parts that match the avatar data of the upper body may be presented to the player. As a result, it is expected to be used for Vtuber applications that support both studio distribution and mobile distribution), and
selecting the motion of the lower-body 3D model based on the information indicating the situation of the virtual space and on the state of the upper body that is estimated in the estimating ([0031] Here, as an example, it is assumed that a general distribution user is only permitted to perform mobile distribution by the information processing system 10 on the video distribution platform. As shown in FIG. 3, in mobile distribution, only the upper body of the character of the distribution user is displayed, and the lower body is not displayed. Therefore, the required 3D data of the character is only the upper body data), ([0034] As shown in FIG. 4, the whole body of the character of the distribution user is displayed in the studio distribution. Therefore, the required 3D data of the character is whole body data), ([0047] As an example, if the first part is the upper body, the second part is the lower body), ([0081] For example, the avatar information of other parts may be automatically determined from the avatar data of the upper body, or the avatar information candidates of other parts that match the avatar data of the upper body may be presented to the player. As a result, it is expected to be used for Vtuber applications that support both studio distribution and mobile distribution), ([0094] The specific step S120 specifies the second part information regarding the second part of the character that is not included in the first avatar information, based on the first part information acquired in the acquisition step. The specific step S120 can be executed by the specific unit 420 described above), and ([0017] The information processing method of the present invention is an information processing method for distributing a moving image including an animation of a character object generated based on a movement of a distribution user, and is a first information processing method of a character to one or a plurality of computer processors. The acquisition step of acquiring the first avatar information including the first part information about the part, and the second of the characters not included in the first avatar information based on the first part information acquired in the acquisition step. A second avatar information is generated based on the specific step for specifying the second part information regarding the part, the first part information acquired in the acquisition step, and the second part information specified in the specific step. It is characterized in that the generation step to be performed is executed).
For motivation, see claim 2.
Vil, Wiesel, and Takakuwa do not explicitly teach estimating a state of the subject by analyzing a posture of the upper body of the subject based on the captured-image information, and selecting the motion of the lower-body 3D model based on the posture of the upper body that is estimated from the captured-image information.
However, Boulay teaches estimating a state of the subject by analyzing a posture of the upper body of the subject based on the captured-image information and selecting the motion of the lower-body 3D model based on the posture of the upper body that is estimated from the captured-image information (see p. 3, in reference to figures 1 and 2, wherein a posture of at least an upper body of a subject is analyzed based on captured-image information of the upper body's pose/posture, such as standing, sitting, bending, and lying postures/poses).
Vil, Wiesel, Takakuwa, and Boulay teach claim 3. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Vil, Wiesel, and Takakuwa's 3D model generation system to explicitly include analyzing a posture of the upper body of a subject from captured-image information acquired by an imaging device, as taught by Boulay, as the references are in the analogous art of 3D model generation of a plurality of parts of a body. An advantage of the modification is that it explicitly uses an imaging device to capture at least the upper body of a subject, including different upper-body postures, for 3D image processing and 3D animation.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Peter Hoang whose telephone number is (571)270-1346. The examiner can normally be reached Monday-Friday 8:00 am - 5:00 pm PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel F. Hajnik, can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PETER HOANG/ Primary Examiner, Art Unit 2616