DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. It is responsive to the submission dated 09/03/2024. Claims 1-18 are presented for examination, of which claims 1, 17, and 18 are independent claims.
Information Disclosure Statement
2. The information disclosure statements (IDSs) submitted on 09/23/2024 are in compliance with the provisions of 37 CFR 1.97 and are being considered by the Examiner.
Claim Rejections - 35 USC § 101
3. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
4. Claim 18 is rejected under 35 U.S.C. 101 because the claimed invention is directed to a computer program per se, i.e., a program including steps that do not have a physical or tangible form.
For example, Claim 18 recites: “A program that causes a computer to execute processing including steps of performing motion capture of a subject predetermined on a basis of a captured video including the subject and distance information, and performing transparency processing of making the subject on the captured video invisible, and generates a composite video by compositing an avatar corresponding to the subject that performs a movement detected by the motion capture on the video obtained by the transparency processing on the captured video or compositing the avatar obtained by the transparency processing on the captured video”.
The present claim, as drafted and under its broadest reasonable interpretation, merely covers a set of programmed instructions, which amounts to claiming nothing more than the expression of a program, i.e., a program per se. Therefore, the steps that follow the recitation "processing including steps of" are computer program steps, which are non-statutory. Similarly, computer programs claimed as computer listings per se, i.e., the descriptions of the programs, are not physical "things." They are neither computer components nor statutory processes, as they are not "acts" being performed. Such claimed computer programs do not define any structural and functional interrelationships between the computer program and other claimed elements of a computer that permit the computer program's functionality to be realized. In contrast, a claimed non-transitory computer-readable medium encoded with a computer program is a computer element which defines structural and functional interrelationships between the computer program and the rest of the computer that permit the computer program's functionality to be realized, and is thus statutory. See In re Lowry, 32 F.3d 1579, 1583-84, 32 USPQ2d 1031, 1035 (Fed. Cir. 1994). Accordingly, it is important to distinguish claims that define descriptive material per se from claims that define statutory inventions.
Claim Rejections - 35 USC § 102
5. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
6. Claims 1-3, 5-7, 9-10, 13-14, and 16-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Suzuki et al. (US 20120306919).
Considering claim 1, Suzuki discloses an imaging system (item 1, fig. 1) including a subject motion detector (11) that performs motion capture of a subject (e.g., clothes) predetermined on the basis of a captured video including the subject and distance information (e.g., depth information; see paras. 37-39 and 44-45), and
a data control unit (12, fig. 1) that performs transparency processing of making the subject on the captured video invisible (for example, Suzuki at para. 13 discloses: if the image of the clothes is to be replaced with an image of virtual clothes prepared beforehand, and the clothes region to be overlaid with the virtual clothes has a protruded region protruding from the virtual clothes region, then the image processing part performs a process of making the virtual clothes region coincide with the clothes region. Paras. 81-88 of Suzuki further teach that the virtual try-on system 1 identifies an upper-body clothes region in the user region image extracted from the user's image taken; based on the user's skeleton information, the virtual try-on system 1 identifies that position of the taken image on which to overlay the virtual clothes to be tried on, and overlays the virtual clothes on the identified position of the user's image. It is assumed that the sequence in which the virtual clothes are overlaid for try-on purposes is predetermined or determined by the user's selecting operations. If there exists a protruded region, a first or a second protruded area adjustment process is carried out to make the upper-body clothes region coincide with the virtual clothes-overlaid region; the first process involves expanding the virtual clothes circumferentially by an appropriate number of pixels until the virtual clothes-overlaid region covers the user's upper-body clothes region, so that the upper-body clothes region of the protruded region is replaced with the virtual clothes. See also paras. 91-99, wherein the image overlay processing corresponds with the transparency processing), and
generates a composite video by compositing an avatar corresponding to the subject that performs a movement detected by the motion capture on the video obtained by the transparency processing on the captured video or compositing the avatar obtained by the transparency processing on the captured video (for example, Suzuki discloses that virtual clothes are overlaid on the taken image in a manner to fit the user's motions, and the resulting image is displayed on the display part 13; see paras. 46-47 and 54-57. In this process, virtual clothes are overlaid on the image taken of the user by the imaging part 11 during the motion capture process, the taken image being one for which the three-dimensional positions of the user's joints are calculated. See paras. 81 and 89, wherein the displayed image, which encompasses the overlay of the virtual clothes posed according to the motion-captured 3D joint positions (skeleton information), corresponds with the composited avatar).
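For illustration only, the following is a minimal sketch of the general pipeline the examiner maps onto claim 1: a subject mask derived from distance information, transparency processing that composites a background over the masked region, and alpha compositing of a motion-driven avatar layer. The function names, threshold values, and layer representation are hypothetical and are not taken from Suzuki or the instant application.

```python
# Illustrative sketch only -- not Suzuki's implementation or the claimed code.
import numpy as np

def composite_frame(frame, depth, background, avatar_rgba, near=0.5, far=2.0):
    """frame: HxWx3 uint8; depth: HxW meters; background: HxWx3 uint8;
    avatar_rgba: HxWx4 uint8 avatar layer already posed from mocap joints."""
    # 1) Subject extraction from distance information (depth thresholding).
    subject_mask = (depth > near) & (depth < far)              # HxW bool

    # 2) "Transparency processing": composite the stored background over the
    #    subject region so the subject becomes invisible on the captured video.
    out = frame.copy()
    out[subject_mask] = background[subject_mask]

    # 3) Composite the motion-driven avatar layer via its alpha channel.
    alpha = avatar_rgba[..., 3:4].astype(np.float32) / 255.0
    out = (avatar_rgba[..., :3] * alpha + out * (1.0 - alpha)).astype(np.uint8)
    return out
```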
As per claim 2, Suzuki discloses that the data control unit extracts a subject region that is a region of the subject on the captured video on the basis of at least one of the captured video or the distance information, and composites a background video with the subject region extracted to make the subject invisible. See paras. 63 and 82-88.
As per claim 3, Suzuki discloses that the data control unit generates the background video on the basis of the captured video imaged in advance, another captured video imaged by another imaging unit different from the imaging unit that images the captured video, a past frame of the captured video, or estimation processing based on the captured video. See paras. 63-66 and 83-88.
As per claims 5 and 6, Suzuki discloses that the data control unit extracts a subject region that is a region of the subject on the captured video on the basis of at least one of the captured video or the distance information, and composites an arbitrary separate video with the subject region extracted to make the subject invisible, wherein the separate video includes a graphic video or an effect video (e.g., causing the display part 13 to display an overlaid image in which a user region is extracted from the user's image taken based on the background differencing technique and the upper-body clothes region is made to coincide with the virtual clothes-overlaid region by replacing the upper-body clothes region of the protruded region with a background image, so as not to result in an awkward expression image that is not suitable for the occasion. See paras. 63-69, 82-89, and 91-99).
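As a hedged sketch of the background differencing technique cited above (extracting the user region by comparing the live frame against a stored background, then compositing an arbitrary separate video over that region), where the threshold value and all names are hypothetical:

```python
# Illustrative background differencing sketch -- not Suzuki's actual code.
import numpy as np

def extract_and_replace(frame, background, separate_video_frame, thresh=30):
    """Extract the subject region by background differencing, then composite
    an arbitrary separate video (graphic/effect) over that region."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    subject_mask = diff.max(axis=2) > thresh       # per-pixel user region
    out = frame.copy()
    out[subject_mask] = separate_video_frame[subject_mask]
    return out
```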
As per claim 7, Suzuki discloses that the data control unit extracts a subject region that is a region of the subject on the captured video on the basis of at least one of the captured video or the distance information (for example, Suzuki discloses that virtual clothes are overlaid on the image taken of the user by the imaging part 11 during the motion capture process, the taken image being one for which the three-dimensional positions of the user's joints are calculated, and that the virtual try-on system 1 identifies an upper-body clothes region in the user region image extracted from the user's image taken; see paras. 81-82), and adjusts a size of the avatar to be composited with the subject region extracted or generates a video of the avatar with a background to be composited with the subject region extracted to make the subject invisible (e.g., if it is determined in step S44 that there exists a protruded region, the virtual try-on system 1 performs a protruded region adjustment process in which the protruded region is adjusted to make the upper-body clothes region coincide with the virtual clothes-overlaid region. More specifically, the first process involves expanding the virtual clothes circumferentially by an appropriate number of pixels until the virtual clothes-overlaid region covers the user's upper-body clothes region, so that the upper-body clothes region of the protruded region is replaced with the virtual clothes. The second process involves replacing the upper-body clothes region of the protruded region with a predetermined image such as a background image. See paras. 87-88).
As per claim 9, Suzuki discloses that the data control unit performs different types of the transparency processing in accordance with a distance from an imaging position of the captured video to the subject (e.g., the information indicative of the positions of each of the user's joints acquired from the motion capture process may be generically referred to as the skeleton information; see para. 78. If it is determined in step S44 that there exists a protruded region, the virtual try-on system 1 performs a protruded region adjustment process in which a first or a second protruded area adjustment process is carried out to make the upper-body clothes region coincide with the virtual clothes-overlaid region, the first process expanding the virtual clothes and the second narrowing the upper-body clothes region. More specifically, the first process involves expanding the virtual clothes circumferentially by an appropriate number of pixels until the virtual clothes-overlaid region covers the user's upper-body clothes region, so that the upper-body clothes region of the protruded region is replaced with the virtual clothes. The second process involves replacing the upper-body clothes region of the protruded region with a predetermined image such as a background image. See paras. 87-88).
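The "first process" quoted above (expanding the virtual clothes circumferentially by a number of pixels until they cover the protruded region) can be approximated with iterative morphological dilation. The sketch below illustrates that cited technique under assumed binary-mask inputs; it is not Suzuki's actual code, and the names and iteration cap are hypothetical.

```python
# Illustrative approximation of the cited "first process" via dilation.
import numpy as np
from scipy.ndimage import binary_dilation

def adjust_protrusion(virtual_clothes_mask, body_clothes_mask, max_iters=20):
    """Grow the virtual clothes mask one pixel ring at a time until it
    covers the user's upper-body clothes region, or stop after max_iters."""
    adjusted = virtual_clothes_mask.copy()
    for _ in range(max_iters):
        protruded = body_clothes_mask & ~adjusted   # region still sticking out
        if not protruded.any():
            break
        adjusted = binary_dilation(adjusted)        # expand circumferentially
    return adjusted
```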
As per claim 10, Suzuki discloses that the data control unit temporarily stops recording or transmitting the composite video in a case where the motion capture or the transparency processing fails (for example, Suzuki discloses: where the protruded region is found to exist while virtual clothes are being displayed overlaid on the image taken of the user, the virtual try-on system 1 prevents the awkward expression that may be observed when the virtual clothes are overlaid. See para. 106).
As per claim 13, Suzuki discloses that the data control unit adjusts a display size of the avatar on the composite video to an arbitrary size. See paras. 87-88.
As per claim 14, Suzuki discloses that the data control unit adjusts a display size of the avatar on the composite video to a size according to the distance from the imaging position of the captured video to the subject. See para. 91.
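The size-by-distance adjustment mapped to claim 14 follows the ordinary pinhole relation, under which apparent size is inversely proportional to depth. A short worked example with hypothetical values:

```python
# Worked example (hypothetical values): apparent size of the composited
# avatar is inversely proportional to the subject's distance from the
# imaging position, per the pinhole relation h_px = f_px * H / Z.
def avatar_display_height(real_height_m, depth_m, focal_px):
    """Display height in pixels for an avatar of real_height_m meters
    located depth_m meters from the imaging position."""
    return focal_px * real_height_m / depth_m

# A 1.7 m avatar at 2.0 m with a 1000 px focal length renders at
# 1000 * 1.7 / 2.0 = 850 px tall; at 4.0 m it shrinks to 425 px.
```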
As per claim 16, Suzuki discloses that the data control unit generates the composite video in which an arbitrary separate video is composited at a position of another subject different from the subject on the captured video (e.g., causing the display part 13 to display an overlaid image in which, upon making the upper-body clothes region coincide with the virtual clothes-overlaid region, an image overlay is generated by replacing the upper-body clothes region of the protruded region with a background image. See paras. 88-89).
The invention of claim 17 contains features that correspond in scope with the limitations recited in claim 1. As the limitations of claim 1 were found anticipated by the teachings of Suzuki, it is readily apparent that the applied prior art teaches the underlying elements. The limitations of claim 17 are, therefore, rejected under the same rationale as claim 1.
The subject matter of independent claim 18 corresponds, in terms of a computer program, to that of independent claim 1, and the rationale set forth above to reject the latter also applies, mutatis mutandis, to the former.
Claim Rejections - 35 USC § 103
7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. Claims 11 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Suzuki et al. (US 20120306919) in view of Hanai (US 20150145888).
Regarding claim 11, Suzuki fails to specifically teach that a range of an imaging visual field of the distance information is wider than a range of an imaging visual field of the captured video; this feature is disclosed by Hanai.
Particularly, Hanai discloses an information processing method that displays a virtual image superimposed on an image captured by a camera of a portable terminal held by a user, according to whether a change in a relative position parameter and/or a relative angle parameter of the virtual image with respect to the camera exceeds or remains within a predetermined threshold. See paras. 16-21, 24-27, 98-107, 302-305, and 375-380.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have further modified Suzuki with Hanai's teaching of providing a range of an imaging visual field of the distance information that is wider than the range of the imaging visual field of the captured video, in the same conventional manner as taught by Hanai, and to change the field of view of the camera based on said position or angle parameters. The motivation would have been to generate and display a virtual image that represents a more natural AR rendition of the captured image.
Regarding claim 15, in Suzuki the user is able to select any arbitrary joint position in the pose image, including a joint position at the feet portion of the pose of the captured image, and is able to carry out vertical or horizontal joint repositioning on the captured image until a comparable 3D skeleton image is found that matches the positioning distances between joints in the pose image retrieved from the image dictionary (see figs. 7C to 7E and paras. 65-78). Given this, a person of ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to identify a joint position at the feet portion of the pose image as a starting point and to reposition the points vertically in sequence until a matching image is reached, on which the virtual clothes are then overlaid during the motion capture process. Such a technique lies within the expected range of the user's point selection, is considered a normal design action that the skilled person would take without exercising inventive skill, and would have been a predictable variation of the teachings of Suzuki and one of several straightforward possibilities for assisting the user operator in identifying the protruded regions of the image that coincide with the virtual clothes-overlaid region, while providing a good viewing experience for the user.
As such, it is submitted that specifying a position of a grounding point of the subject on the captured video on the basis of the distance information and compositing the avatar with the grounding point as a starting point is an obvious variation of the image processing schemes of Suzuki.
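As an illustrative sketch of this grounding-point rationale (locating the lowest point of the subject region derived from the distance information and using it as the starting point for compositing the avatar); all names are hypothetical, and this is neither the applicant's nor Suzuki's code:

```python
# Illustrative grounding-point sketch -- hypothetical names throughout.
import numpy as np

def grounding_point(subject_mask):
    """Return (row, col) of the lowest subject pixel, i.e., the feet,
    where subject_mask is an HxW bool mask derived from distance info."""
    rows, cols = np.nonzero(subject_mask)
    if rows.size == 0:
        return None                        # subject not detected
    bottom = rows.max()                    # image rows grow downward
    return bottom, int(cols[rows == bottom].mean())
```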
Allowable Subject Matter
9. Claims 4, 8, and 12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The prior art of record fails to teach the imaging system according to claim 3 in which the data control unit generates a video of a region corresponding to a predetermined region in the background video on the basis of the past frame in a case where the past frame includes a video of a background corresponding to the predetermined region in the subject region, and sets a predetermined separate video as a video of a region corresponding to the predetermined region in the background video in a case where the past frame does not include a video of a background corresponding to the predetermined region in the subject region (as recited in claim 4). The prior art of record also fails to particularly teach the imaging system according to claim 1 wherein, in a case where a region of the subject is not detected from the captured video and a region of the subject is detected from the distance information, the data control unit extracts a subject region that is a region of the subject on the captured video on the basis of only the distance information and performs the transparency processing (as recited in claim 8); or wherein the data control unit specifies a front-back positional relationship between the subject and another subject in a portion where the subject and the another subject on the captured video overlap each other on the basis of the distance information, and performs the transparency processing on the basis of a specification result of the front-back positional relationship (as recited in claim 12).
Conclusion
10. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Fukushige (US 20220233956) discloses an image of an augmented reality space, in which a character is arranged with respect to an acquired image acquired by the imaging sensor, is displayed on the display unit, and the behavior of the character in the augmented reality space is produced on the basis of data transmitted from an external source, wherein the data is motion data and sound data that are input by a performer who plays live as the character.
11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESNER SAJOUS whose telephone number is (571) 272-7791. The examiner can normally be reached M-F, 10:00 AM to 7:30 PM (ET).
Examiner interviews are available via telephone and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice or email the Examiner directly at wesner.sajous@uspto.gov.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome can be reached on 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WESNER SAJOUS/Primary Examiner, Art Unit 2612
WS
02/17/2026