Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This action is in response to the communication filed on 08/15/2025.
Claims 1, 3, 6, and 8 are currently amended. Claims 2 and 7 are canceled. Claims 1, 3-6, and 8-10 are pending.
Response to Arguments
Applicant's arguments filed on 08/15/2025, on pages 5-9 under REMARKS, with respect to the 35 U.S.C. 102 rejections of claims 1-10 have been fully considered and are persuasive. The rejections of the claims have been withdrawn. However, upon further consideration, a new ground of rejection is made in view of US 2021/0152751 A1 to HUANG et al., which was previously cited under Other References of Record in the Conclusion section of the non-final Office action.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 1, 3-6, and 8-10 are rejected under 35 U.S.C. § 103 as being unpatentable over US 2019/0116322 A1 to HOLZER et al. (hereinafter "HOLZER") in view of US 2021/0152751 A1 to HUANG et al. (hereinafter "HUANG").
As per claim 1, HOLZER discloses a model training method configured to train an image identification model (a system and corresponding image processing method for training a learning model to identify and replace backgrounds of an image; abstract; paragraphs [0034], [0129]), wherein the model training method comprises: obtaining a first image (a computing system and corresponding method of operation for image identification, relating to identifying a skeleton model of a user in order to perform background image replacement techniques, obtains a first input image via a camera; title; abstract; figs 1-2; paragraphs [0034], [0038], [0044]); receiving a user operation performed on the first image (the computing system is adapted to receive, via a user interface, a user input relating to the image data collected by the on-board image sensors, to initiate processes such as the skeleton tracking used to generate the outline of the user; paragraphs [0058], [0104], [0115], [0133]); performing an automatic background replacement on the first image to generate a second image according to the on-image mark of the first image (based on the segmented body of the images and the skeleton generated from the body outline, a background replacement function is performed as seen in figures 16-17 and is referred to throughout the prior art document as displaying augmented reality (AR) effects; figs 13, 14a, 14c, and 16-17; paragraphs [0139], [0259-0260]); generating training data according to the second image (raw input image data is used to further train a neural network adapted for recognizing bodies or objects in images; using weighting factors to generate training data, the user may train various networks to recognize various objects and people; paragraphs [0153], [0161-0163]); and training the image identification model by using the training data (applying the weighted neural networks to train the system to identify specific objects and/or people/poses of people in motion; paragraphs [0153], [0161-0163]). HOLZER fails to disclose generating an on-image mark in response to the user operation, wherein the on-image mark reflects a coverage range covering a target object and a part of a background of the first image; wherein a region outside the coverage range of the first image is different from the one of the second image.
HUANG discloses generating an on-image mark in response to the user operation, wherein the on-image mark reflects a coverage range covering a target object and a part of a background of the first image (a computing system for performing an image synthesis method by masking a region of the target image between the image content and the background, wherein the background can be replaced using an information synthesis model; a content mask adapted to be larger in range than the object it covers allows for replacement of the background surrounding the mask, generating a synthesized image by merging the target image, which includes the content mask having the content to be preserved in the final image, with a background image; abstract; fig 8; paragraphs [0008], [0028], [0030-0032]; claim 12); wherein a region outside the coverage range of the first image is different from the one of the second image (the region outside of the content mask has its background replaced by the background image in order to produce a synthesized image with a new background that contains the image features masked by the content mask; figs 3 and 8; paragraphs [0008], [0028], [0030-0032]; claim 12).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify HOLZER to have a region outside the coverage range of the first image that is different from the one of the second image, as taught by HUANG. The suggestion/motivation for doing so would have been to provide the ability to preserve the image content, including the object and a range/area larger than the object covered by the content mask, for synthesis into the final image, as suggested by HUANG at paragraphs [0028] and [0032]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine HUANG with HOLZER to obtain the invention as specified in claim 1.
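By way of illustration only, the following minimal Python sketch shows one way the operations mapped to claim 1 could be realized. It is not drawn from HOLZER or HUANG; all names and parameters are hypothetical, and the dilation step merely stands in for any on-image mark whose coverage range extends beyond the target object into part of the background:

    import numpy as np

    def dilate_mask(object_mask, margin):
        # Grow the object mask so the coverage range also takes in part of
        # the surrounding background (the "on-image mark" of claim 1).
        # Note: np.roll wraps at the image border; acceptable for a sketch.
        coverage = object_mask.astype(bool)
        for _ in range(margin):
            coverage = coverage | (np.roll(coverage, 1, axis=0) |
                                   np.roll(coverage, -1, axis=0) |
                                   np.roll(coverage, 1, axis=1) |
                                   np.roll(coverage, -1, axis=1))
        return coverage

    def replace_background(first_image, object_mask, candidate_bg, margin=5):
        # Produce the "second image": pixels inside the coverage range are
        # preserved from the first image; pixels outside it come from the
        # candidate background, so the region outside the coverage range
        # differs between the first and second images.
        coverage = dilate_mask(object_mask, margin)
        return np.where(coverage[..., None], first_image, candidate_bg)

Under the mapping above, training data would then be generated from the resulting second image and used to train the identification model.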
As per claim 3, HOLZER in view of HUANG discloses the model training method according to claim 1. Modified HOLZER further discloses wherein the user operation comprises marking a foreground region in the first image (the computing system is adapted to identify the foreground and background portions of an image; paragraphs [0052], [0058]).
As per claim 4, HOLZER discloses the model training method according to claim 1, wherein performing the automatic background replacement on the first image to generate the second image comprises: determining a background region in the first image according to the on-image mark (based on the body outline 1016, the system can determine the background and foreground areas using the AR effects function, which is stated in paragraph [0259] to be equivalent to background replacement methods; figs 9a, 10, 14c; paragraphs [0034-0035], [0110-0115], [0139], [0225], [0259-0260]). HOLZER fails to disclose, in the automatic background replacement, replacing a default background image in the background region with a candidate background image to generate the second image.
HUANG discloses, in the automatic background replacement, replacing a default background image in the background region with a candidate background image to generate the second image (the computing system, using an image synthesis model, is adapted to take the masked content of the target image and synthesize it with the background image to produce a synthesized image comprising the masked object, the region around the object, and the background of the background image; figs 3, 8-9; paragraphs [0028], [0116-0118]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify HOLZER to have, in the automatic background replacement, replacing a default background image in the background region with a candidate background image to generate the second image, as taught by HUANG. The suggestion/motivation for doing so would have been to provide the ability to preserve the image content, including the object and a range/area larger than the object covered by the content mask, for synthesis into the final image, as suggested by HUANG at paragraphs [0028] and [0032]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine HUANG with HOLZER to obtain the invention as specified in claim 4.
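Continuing the purely illustrative sketch above (all values hypothetical), the claim 4 step reduces to treating the complement of the coverage range as the background region and substituting the candidate background image there:

    # Hypothetical usage of the sketch above; illustrative only.
    first = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # first image
    obj = np.zeros((480, 640), dtype=bool)
    obj[180:300, 260:380] = True                  # target object mask
    candidate = np.full_like(first, 32)           # candidate background image

    second = replace_background(first, obj, candidate, margin=8)

    # The background region (the complement of the coverage range) now
    # holds the candidate background, so it differs between the images.
    background_region = ~dilate_mask(obj, 8)
    assert (second[background_region] == candidate[background_region]).all()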
As per claim 5, HOLZER in view of HUANG discloses the model training method according to claim 1. Modified HOLZER further discloses generating the training data according to the first image if the first image does not have the on-image mark (the computing system is adapted to generate the body outline and body skeleton of the first input image, as the image will not yet have a body outline or body skeleton applied to it; further, the background and foreground parts of the image are also identified along with the body outline portion, wherein the outline acts as a mark generated on the image; figs 9a, 10, 14c, 16-17; paragraphs [0034-0035], [0110-0115], [0139], [0152-0154], [0259-0260]).
As per claim 6, HOLZER discloses a model training system (a computing system and corresponding image processing method for training a learning model to identify and replace backgrounds of an image; abstract; paragraphs [0034], [0129]), comprising: a storage circuit configured to store an image identification model (the system comprises a memory component adapted to store image data, related data, and instructions related to the image processing methods being performed; paragraphs [0266], [0270], [0284]); and a processor coupled to the storage circuit (further including a processor component coupled to said memory to execute said program instructions relating to the image processing method; paragraphs [0266], [0270], [0284]), wherein the processor is configured to: obtain a first image (the computing system obtains a first input image via a camera; title; abstract; figs 1-2; paragraphs [0034], [0038], [0044]); receive a user operation performed on the first image (the computing system is adapted to receive a user input, via an interface such as a mouse, keyboard, or other described input device, used to input information and instructions/commands relating to the image data collected by the on-board image sensors/cameras; paragraphs [0058], [0060], [0115], [0133]); perform an automatic background replacement on the first image to generate a second image according to the on-image mark of the first image (based on the segmented body of the images and the skeleton generated from the body outline, a background replacement function is performed as seen in figures 16-17 and is referred to throughout the prior art document as displaying augmented reality (AR) effects; further, as seen in the figures, applying the background replacement AR effects allows the user to change the background while keeping the body outline/skeleton of the user of the image, creating a second image with a different background; figs 13, 14a, 14c, and 16-17; paragraphs [0139], [0259-0260]); generate training data according to the second image (raw input image data is used to train a neural network adapted for recognizing bodies or objects in images; using weighting factors to generate training data, the user may train various networks to recognize various objects and people; paragraphs [0153], [0161-0163]); and train the image identification model by using the training data (applying the weighted neural networks to train the system to identify specific objects and/or people/poses of people in motion; paragraphs [0153], [0161-0163]). HOLZER fails to disclose generate an on-image mark in response to the user operation, wherein the on-image mark reflects a coverage range covering a target object and a part of a background of the first image; wherein a region outside the coverage range of the first image is different from the one of the second image.
HUANG discloses generate an on-image mark in response to the user operation, wherein the on-image mark reflects a coverage range covering a target object and a part of a background of the first image (a computing system for performing an image synthesis method in which a mask over part of the image, between the image content and the background, can be generated by an information synthesis model; a content mask adapted to be larger in range than the object it covers allows for replacement of the background surrounding the mask, generating a synthesized image by merging the target image, which includes the content mask having the content to be preserved in the final image, with a background image; abstract; fig 8; paragraphs [0008], [0028], [0030-0032]; claim 12); wherein a region outside the coverage range of the first image is different from the one of the second image (the region outside of the content mask has its background replaced by the background image in order to produce a synthesized image with a new background that contains the image features masked by the content mask; figs 3 and 8; paragraphs [0008], [0028], [0030-0032]; claim 12).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify HOLZER to have a region outside the coverage range of the first image that is different from the one of the second image, as taught by HUANG. The suggestion/motivation for doing so would have been to provide the ability to preserve the image content, including the object and a range/area larger than the object covered by the content mask, for synthesis into the final image, as suggested by HUANG at paragraphs [0028] and [0032]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine HUANG with HOLZER to obtain the invention as specified in claim 6.
As per claim 8, HOLZER in view of HUANG discloses the model training system according to claim 6. Modified HOLZER further discloses wherein the user operation comprises marking a foreground region in the first image (the computing system is adapted to identify the foreground and background portions of an image; paragraphs [0052], [0058]).
As per claim 9, HOLZER discloses the model training system according to claim 6, wherein the operation of the processor performing the automatic background replacement on the first image to generate the second image comprises: determining a background region in the first image according to the on-image mark (based on the body outline 1016, the system can determine the background and foreground areas using the AR effects function, which is stated in paragraph [0259] to be equivalent to background replacement methods; figs 9a, 10, 14c; paragraphs [0034-0035], [0110-0115], [0139], [0225], [0259-0260]). HOLZER fails to disclose, in the automatic background replacement, replacing a default background image in the background region with a candidate background image to generate the second image.
HUANG discloses, in the automatic background replacement, replacing a default background image in the background region with a candidate background image to generate the second image (the computing system, using an image synthesis model, is adapted to take the masked content of the target image and synthesize it with the background image to produce a synthesized image comprising the masked object, the region around the object, and the background of the background image; figs 3, 8-9; paragraphs [0028], [0116-0118]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify HOLZER to have, in the automatic background replacement, replacing a default background image in the background region with a candidate background image to generate the second image, as taught by HUANG. The suggestion/motivation for doing so would have been to provide the ability to preserve the image content, including the object and a range/area larger than the object covered by the content mask, for synthesis into the final image, as suggested by HUANG at paragraphs [0028] and [0032]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine HUANG with HOLZER to obtain the invention as specified in claim 9.
As per claim 10, HOLZER in view of HUANG discloses the model training system according to claim 6. Modified HOLZER further discloses wherein, if the first image does not have the on-image mark, the processor is further configured to generate the training data according to the first image (the computing system is adapted to generate the body outline and body skeleton of the first input image, as the image will not yet have a body outline or body skeleton applied to it; further, the background and foreground parts of the image are also identified along with the body outline portion, wherein the outline acts as a mark generated on the image; figs 9a, 10, 14c, 16-17; paragraphs [0034-0035], [0110-0115], [0139], [0152-0154], [0259-0260]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVIN JACOB DHOOGE whose telephone number is (571) 270-0999. The examiner can normally be reached 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached on (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Devin Dhooge/
USPTO Patent Examiner
Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677